Senators Want ChatGPT-Level AI to Require a Government License

The US government should create a new body to regulate artificial intelligence—and restrict work on language models like OpenAI’s GPT-4 to companies granted licenses to do so. That’s the recommendation of a bipartisan duo of senators, Democrat Richard Blumenthal and Republican Josh Hawley, who launched a legislative framework yesterday to serve as a blueprint for future laws and influence other bills before Congress.

Under the proposal, developing face recognition and other “high risk” applications of AI would also require a government license. To obtain one, companies would have to test AI models for potential harm before deployment, disclose instances when things go wrong after launch, and allow audits of AI models by an independent third party.

The framework also proposes that companies publicly disclose details of the training data used to create an AI model, and that people harmed by AI be given the right to take the company that created it to court.

The senators’ suggestions could be influential in the days and weeks ahead as debates intensify in Washington over how to regulate AI. Early next week, Blumenthal and Hawley will oversee a Senate subcommittee hearing about how to meaningfully hold businesses and governments accountable when they deploy AI systems that cause people harm or violate their rights. Microsoft president Brad Smith and the chief scientist of chipmaker Nvidia, William Dally, are due to testify.

A day later, senator Chuck Schumer will host the first in a series of meetings to discuss how to regulate AI, a challenge Schumer has referred to as “one of the most difficult things we’ve ever undertaken.” Tech executives with an interest in AI, including Mark Zuckerberg, Elon Musk, and the CEOs of Google, Microsoft, and Nvidia, make up about half of the nearly two-dozen-strong guest list. Other attendees represent people likely to be subjected to AI algorithms, among them trade union presidents from the Writers Guild and the union federation AFL-CIO, and researchers who work on preventing AI from trampling human rights, including UC Berkeley’s Deb Raji and Rumman Chowdhury, CEO of Humane Intelligence and formerly Twitter’s ethical AI lead.

Anna Lenhart, who previously led an AI ethics initiative at IBM and is now a PhD candidate at the University of Maryland, says the senators’ legislative framework is a welcome sight after years of AI experts appearing in Congress to explain how and why AI should be regulated.

“It’s really refreshing to see them take this on and not wait for a series of insight forums or a commission that’s going to spend two years and talk to a bunch of experts to essentially create this same list,” Lenhart says.

But she’s unsure how any new AI oversight body could house the broad range of technical and legal expertise required to oversee technology used in areas ranging from self-driving cars to health care to housing. “That’s where I get a bit stuck on the licensing regime idea,” Lenhart says.

The idea of using licenses to restrict who can develop powerful AI systems has gained traction in both industry and Congress. OpenAI CEO Sam Altman suggested licensing for AI developers during testimony before the Senate in May—a regulatory solution that might arguably help his company maintain its leading position. A bill proposed last month by senators Lindsey Graham and Elizabeth Warren would also require tech companies to secure a government AI license, but it covers only digital platforms above a certain size.
