Realizing the promise of artificial intelligence by managing its risks will require “some new laws, regulations and oversight,” President Joe Biden said Friday, even as several tech companies agreed to new AI safeguards.
Amazon AMZN, Google GOOG GOOGL, Meta META, Microsoft MSFT and other companies voluntarily agreed to steps meant to ensure that their AI products are safe before they release them.
See: Tech giants including Microsoft and Google agree to White House AI safeguards
Those four tech giants, along with ChatGPT maker OpenAI and startups Anthropic and Inflection, have committed to security testing, to be “carried out in part by independent experts,” to guard against major risks such as those to biosecurity and cybersecurity, the White House said in a statement.
Speaking at the White House, Biden hailed the companies’ commitments as “a promising step.”
“But then, we have a lot more work to do together,” the president said. He said he will take executive action in the weeks ahead “to help America lead the way toward responsible innovation.”
Senate Majority Leader Chuck Schumer has argued that Congress must play a unique role in regulating AI technology, and in June he outlined a framework he thinks the U.S. government should follow as it takes a tougher stance on AI.
Key Words: AI could further an ‘erosion of the middle class’ unless Congress takes action, Schumer says
The New York Democrat is planning a series of nine AI insight forums that will feature experts and take place in the fall, according to Axios.
Now read: With AI, Congress is once again ‘far behind the curve’ on legislation, strategist says
The Associated Press contributed.