The Biden administration has announced the formation of the US AI Safety Institute Consortium (AISIC), a group of some 200 entities that includes top artificial intelligence companies, to promote the safe development and deployment of generative AI. Commerce Secretary Gina Raimondo unveiled the consortium on Thursday. Its members include OpenAI, Alphabet’s Google, Anthropic, Microsoft, Meta Platforms, Apple, Amazon.com, Nvidia, Palantir, Intel, JPMorgan Chase, and Bank of America.
Raimondo said the US government will play an important role in setting standards and developing the tools needed to mitigate the risks of artificial intelligence while maximising its benefits. The AISIC will operate under the US AI Safety Institute (USAISI), and its membership also extends to major corporations such as BP, Cisco Systems, IBM, Hewlett Packard, Northrop Grumman, Mastercard, Qualcomm, and Visa, along with key academic institutions and government agencies.
The group is tasked with implementing priority actions from President Biden’s AI executive order, issued in October, and will focus on developing guidelines for safety and security practices including red-teaming, capability evaluations, risk management, and watermarking synthetic content.
The push for safeguards reflects generative AI’s ability to produce text, photos, and videos from open-ended prompts, which has raised concerns about job displacement, election interference, and even existential threats to humanity.
While the Biden administration moves forward with these safety measures, efforts to address AI in Congress have faltered, with proposed legislation stalling despite numerous discussions and hearings.