President Joe Biden emphasized the need for new laws and regulations to manage the risks associated with artificial intelligence (AI). Companies such as Amazon, Google, Meta, Microsoft, and others have voluntarily agreed to implement safety measures before releasing their AI products.
Commitment to Safety
These tech giants, along with ChatGPT maker OpenAI and the startups Anthropic and Inflection, have committed to security testing of their AI systems. The testing will involve independent experts and is aimed at mitigating major risks, such as those to biosecurity and cybersecurity.
A Promising Step
President Biden commended the companies for their commitments, calling them a promising step, while acknowledging that much work remains to be done collectively. He announced his intention to take executive action in the coming weeks to support responsible innovation and position America as a leader in AI development.
Congress’s Role in Regulation
Senate Majority Leader Chuck Schumer has emphasized the unique role Congress must play in regulating AI technology. He outlined a framework for the U.S. government to take a tougher stance on AI and plans to hold nine AI Insight Forums with experts in the fall.