In a move to fortify the safety and security of artificial intelligence (AI), Google has expanded its Vulnerability Rewards Program (VRP), offering compensation to researchers who identify potential threats specific to generative AI systems. The decision aims to address unique concerns arising from generative AI, including issues like unfair bias, model manipulation, and misinterpretation of data, commonly referred to as “hallucinations,” according to statements by Google’s Laurie Richardson and Royal Hansen.
The expanded program encompasses various categories, such as prompt injections, leakage of sensitive data from training datasets, model manipulation, adversarial perturbation attacks triggering misclassification, and model theft.
Google had previously established an AI Red Team in July as part of its Secure AI Framework (SAIF) to combat threats against AI systems. As part of its commitment to secure AI, Google is also working to bolster the AI supply chain through open-source security initiatives, including Supply Chain Levels for Software Artifacts (SLSA) and Sigstore. These initiatives employ digital signatures, such as those provided by Sigstore, enabling users to verify the integrity of software and detect tampering. Additionally, metadata like SLSA provenance provides detailed insights into software composition, aiding consumers in ensuring license compatibility, identifying vulnerabilities, and detecting advanced threats.
This announcement coincides with OpenAI’s introduction of an internal Preparedness team dedicated to monitoring and protecting generative AI against catastrophic risks. The team’s focus spans cybersecurity threats to potential chemical, biological, radiological, and nuclear (CBRN) risks.
In a collaborative effort, Google, OpenAI, Anthropic, and Microsoft have jointly established a $10 million AI Safety Fund. This fund is specifically geared towards fostering research in the field of AI safety, underlining the industry’s commitment to advancing the responsible development and deployment of artificial intelligence technologies.
Found this news interesting? Follow us on Twitter and Telegram to read more exclusive content we post.