Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI have formally announced commitments to new standards in security, trust, and safety after meeting with President Joe Biden at the White House. Mr. Biden said, "we must be clear-eyed and vigilant about the threats emerging technologies can pose – don't have to but can pose – to our democracy and our values."
The key commitments the companies agreed to, pending more formal legislation, are:
Ensuring Products are Safe Before Introducing Them to the Public
· The companies commit to internal and external security testing of their AI systems before their release. This testing, which will be carried out in part by independent experts, guards against some of the most significant sources of AI risks, such as biosecurity and cybersecurity, as well as its broader societal effects.
· The companies commit to sharing information across the industry and with governments, civil society, and academia on managing AI risks. This includes best practices for safety, information on attempts to circumvent safeguards, and technical collaboration.
Building Systems that Put Security First
· The companies commit to investing in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights. These model weights are the most essential part of an AI system, and the companies agree that it is vital that the model weights be released only when intended and when security risks are considered.
· The companies commit to facilitating third-party discovery and reporting of vulnerabilities in their AI systems. Some issues may persist even after an AI system is released and a robust reporting mechanism enables them to be found and fixed quickly.
Earning the Public’s Trust
· The companies commit to developing robust technical mechanisms to ensure that users know when content is AI generated, such as a watermarking system. This action enables creativity with AI to flourish but reduces the dangers of fraud and deception.
· The companies commit to publicly reporting their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use. This report will cover both security risks and societal risks, such as the effects on fairness and bias.
· The companies commit to prioritizing research on the societal risks that AI systems can pose, including on avoiding harmful bias and discrimination, and protecting privacy. The track record of AI shows the insidiousness and prevalence of these dangers, and the companies commit to rolling out AI that mitigates them.
· The companies commit to developing and deploying advanced AI systems to help address society's greatest challenges. From cancer prevention to mitigating climate change to so much in between, AI—if properly managed—can contribute enormously to the prosperity, equality, and security of all.
The administration further affirmed that it would continue to work with allies and partners to establish a strong international framework to govern the development and use of AI. It has already consulted on these voluntary commitments with Australia, Brazil, Canada, Chile, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the UK. The United States is also seeking to ensure that these commitments support and complement Japan's leadership of the G-7 Hiroshima Process—a critical forum for developing shared principles for the governance of AI—as well as the United Kingdom's leadership in hosting a Summit on AI Safety, and India's leadership as Chair of the Global Partnership on AI.
A Blueprint for an AI Bill of Rights was previously published to safeguard Americans' rights and safety, and U.S. government agencies have ramped up their efforts to protect Americans from the risks posed by AI, including by preventing algorithmic bias in home valuation and leveraging existing enforcement authorities to protect people from unlawful bias, discrimination, and other harmful outcomes.
The Biden administration has made it clear that innovation must not come at the expense of Americans' rights and safety.
The big challenge is that these voluntary commitments can be interpreted differently by each participating company, and those companies have a strong interest in protecting their own intellectual property.
The EU is further ahead and poised to adopt AI laws this year, spurring US regulators to introduce legislation of their own. All of this carries the acute risk that heavy-handed legislation could enable adversaries like China to race into dominance in AI.
Striking a balance will be key, and we will no doubt get some of this right and some of it very wrong. That is what innovation is always about – iterating and learning – while keeping a noble, values-centric vision front and center.
Source: White House Fact Sheet on Voluntary AI Commitments