
Microsoft's commitments to advance safe, secure, and trustworthy AI

Today, Microsoft is announcing its support for new voluntary commitments crafted by the Biden-Harris administration to help ensure that advanced AI systems are safe, secure, and trustworthy. By endorsing all of the voluntary commitments presented by President Biden and independently committing to several others that support these critical goals, Microsoft is expanding its safe and responsible AI practices, working alongside other industry leaders. 


By moving quickly, the White House has created, through these commitments, a foundation to help ensure the promise of AI stays ahead of its risks. We welcome the President’s leadership in bringing the tech industry together to hammer out concrete steps that will help make AI safer, more secure, and more beneficial for the public.


Guided by the enduring principles of safety, security, and trust, the voluntary commitments address the risks presented by advanced AI models and promote the adoption of specific practices – such as red-team testing and the publication of transparency reports – that will propel the whole ecosystem forward. The commitments build upon strong pre-existing work by the U.S. Government (such as the NIST AI Risk Management Framework and the Blueprint for an AI Bill of Rights) and are a natural complement to the measures that have been developed for high-risk applications in Europe and elsewhere.

Microsoft looks forward to their broad adoption by industry and their inclusion in the ongoing global discussions about what an effective international code of conduct might look like.


Microsoft's additional commitments

Microsoft’s additional commitments focus on how we will further strengthen the ecosystem and operationalize the principles of safety, security, and trust. From supporting a pilot of the National AI Research Resource to advocating for the establishment of a national registry of high-risk AI systems, we believe that these measures will help advance transparency and accountability.


Microsoft has also committed to broad-scale implementation of the NIST AI Risk Management Framework and to the adoption of cybersecurity practices that are attuned to unique AI risks. We know that this will lead to more trustworthy AI systems that benefit not only our customers, but the whole of society.


It takes a village to craft commitments such as these and put them into practice at Microsoft.


"I would like to take this opportunity to thank Kevin Scott, Microsoft’s Chief Technology Officer, with whom I co-sponsor our responsible AI program, as well as Natasha Crampton, Sarah Bird, Eric Horvitz, Hanna Wallach, and Ece Kamar, who have played key leadership roles in our responsible AI ecosystem"


"As the White House’s voluntary commitments reflect, people must remain at the center of our AI efforts and I’m grateful to have strong leadership in place at Microsoft to help us deliver on our commitments and continue to develop the program we have been building for the last seven years. Establishing codes of conduct early in the development of this emerging technology will not only help ensure safety, security, and trustworthiness, it will also allow us to better unlock AI’s positive impact for communities across the U.S. and around the world."
