Seven of the world’s leading tech companies – among them Google, OpenAI, Meta, Amazon, Anthropic, and Inflection – have joined forces with the United States government under President Joe Biden’s administration. The shared objective? To address the risks inherent in artificial intelligence (AI) through comprehensive commitments to safe and responsible AI development.
According to a report by IANS, these companies have pledged to subject their AI systems to rigorous security testing and to make the results public – a significant step toward greater transparency, accountability, and public trust.
At a recent gathering at the White House, President Biden underscored the importance of responsible AI innovation. He emphasized the transformative influence AI will have on lives worldwide and the responsibility of stakeholders to steer that innovation with the utmost care and safety.
“AI must serve the greater good of society. To achieve this, it is essential to construct and implement these potent new technologies with due diligence,” stated Nick Clegg, Meta’s President of Global Affairs.
In the interest of creating an open and accountable AI environment, Clegg also stressed the need for tech companies to be transparent about how their AI systems work and to collaborate closely with government, academia, and civil society.
As part of their transparency commitment, the companies will watermark AI-generated content so that users can readily recognize it. They have also committed to regular public disclosures about the capabilities and limitations of their AI technologies.
Moreover, to address AI risks such as bias, discrimination, and invasion of privacy, the companies have pledged to undertake relevant research – work regarded as critical to developing ethical and responsible AI technologies.
OpenAI has emphasized that the watermarking agreement will require companies to build tools or APIs that can determine whether a particular piece of content was generated by their AI systems. Google had already pledged to implement similar disclosures earlier this year.
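To make the idea concrete, here is a deliberately simplified sketch of what such a verification tool might look like. This is purely illustrative and is not any company’s actual scheme – real AI watermarks are typically embedded statistically in the generated output itself rather than appended as a tag, and the key name and tag format below are invented for the example. It shows the basic contract: the provider, who holds a secret key, can tag content it generates and later confirm whether a given piece of content carries a valid tag.

```python
import hmac
import hashlib

# Hypothetical provider-held secret key (assumption for this sketch).
SECRET_KEY = b"provider-secret"


def watermark(content: str) -> str:
    """Append a provider tag derived from the content (illustrative only)."""
    tag = hmac.new(SECRET_KEY, content.encode(), hashlib.sha256).hexdigest()[:16]
    return f"{content}\n[ai-watermark:{tag}]"


def verify(tagged: str) -> bool:
    """Check whether the trailing tag matches the content body."""
    body, sep, tail = tagged.rpartition("\n[ai-watermark:")
    if not sep or not tail.endswith("]"):
        return False  # no tag present, or tag is malformed
    expected = hmac.new(SECRET_KEY, body.encode(), hashlib.sha256).hexdigest()[:16]
    # Constant-time comparison to avoid leaking tag information.
    return hmac.compare_digest(tail[:-1], expected)
```

In this toy model, only the key holder can verify, which mirrors why OpenAI frames the commitment as providers exposing tools or APIs: outsiders cannot check provenance on their own and must ask the provider’s service.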
In a related move, Meta recently announced that it is open-sourcing its large language model, Llama 2, making it freely available to the global research community.