7 A.I. Companies Agree to Safeguards, Biden Administration Says

Seven leading AI companies in the United States have agreed to voluntary safeguards on the technology’s development, the White House announced Friday, pledging to manage the risks of the new tools even as they compete over the potential of artificial intelligence.

The seven companies – Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI – formally announced their commitment to new standards in the areas of safety, security and trust in a meeting with President Biden at the White House on Friday afternoon.

“We need to be clear and vigilant about the threats that emerging technologies may pose — not necessarily, but may pose — to our democracy and our values,” Mr. Biden said in brief remarks from the Roosevelt Room of the White House.

“This is a serious responsibility. We need to get it right,” he said, flanked by executives from the companies. “And there’s huge, huge upside potential as well.”

The announcement comes as companies race to outdo each other with versions of AI that offer powerful new ways to create text, photos, music and video without human input. But the leaps in technology have prompted fears about the spread of disinformation and dire warnings of a “risk of extinction” as self-aware computers advance.

The voluntary safeguards are just an early, tentative step as Washington and governments around the world rush to put legal and regulatory frameworks in place for the development of artificial intelligence. The agreements include testing products for security risks and using watermarks to ensure consumers can see AI-generated material.

Friday’s announcement reflects the urgency of the Biden administration and Congress to respond to rapid technological advances, even as lawmakers have struggled to regulate social media and other technologies.

“In the coming weeks, I will continue to take executive action to help America lead the way toward responsible change,” Mr. Biden said. “And we will work with both parties to create appropriate laws and regulations.”

The White House offered no details on the president’s upcoming executive order, which will deal with a larger problem: how to control the ability of China and other competitors to acquire new artificial intelligence programs, or the components used to develop them.

That could include new restrictions on advanced semiconductors and on the export of large language models. Those are hard to control: much of the software can fit, compressed, on a thumb drive.

An executive order could spark more opposition from the industry than Friday’s voluntary commitments, which experts say are already reflected in the practices of the companies involved. The promises do not hinder the plans of AI companies or prevent the development of their technologies. And as voluntary commitments, they are not enforced by government regulators.

“We are pleased to make these voluntary commitments with others in the sector,” Nick Clegg, the president of global affairs at Meta, Facebook’s parent company, said in a statement. “This is an important first step in ensuring that responsible guardrails are built for AI and they create a model for other governments to follow.”

As part of the safeguards, the companies agree to:

  • Test the security of their AI products with independent experts, and share information about their products with governments and others trying to manage technological risks.

  • Ensure that consumers can identify AI-generated material through watermarks or other means of labeling generated content.

  • Report regularly and publicly on the capabilities and limitations of their systems, including security risks and evidence of bias.

  • Deploy advanced artificial intelligence tools to tackle society’s biggest challenges, like curing cancer and combating climate change.

  • Conduct research on the risks of bias, discrimination and invasion of privacy from the spread of AI tools.

In a statement announcing the agreements, the Biden administration said companies must ensure that “innovation does not come at the expense of the rights and safety of Americans.”

“The companies that develop these new technologies have a responsibility to ensure that their products are safe,” the administration said in a statement.

Brad Smith, the president of Microsoft and one of the executives attending the meeting at the White House, said that his company endorsed the voluntary safeguards.

“By moving quickly, the White House’s commitments create a foundation to help ensure that the promise of AI stays ahead of its risks,” Mr. Smith said.

Anna Makanju, vice president of global affairs at OpenAI, described the announcement as “part of our ongoing collaboration with governments, civil society organizations and others around the world to advance AI governance.”

For the companies, the standards described on Friday serve two purposes: as an effort to forestall, or shape, legislative and regulatory moves through self-policing, and as a signal that they are approaching this new technology thoughtfully and proactively.

But the rules they agreed on are largely a lowest common denominator and can be interpreted by each company differently. For example, the companies committed to strict cybersecurity around the data and code used to create the “language models” on which generative AI programs are built. But there is no specificity about what that means, and the companies have an interest in protecting their intellectual property anyway.

And even the most cautious companies are vulnerable. Microsoft, one of the companies attending the White House event with Mr. Biden, scrambled last week to counter a Chinese government-organized hack of the private emails of American officials who deal with China. It now appears that China stole, or somehow obtained, a “private key” held by Microsoft that is used to authenticate emails, one of the company’s most closely guarded pieces of code.

As a result, the agreement is unlikely to slow efforts to pass legislation and impose regulation on the emerging technology.

Paul Barrett, the deputy director of the Stern Center for Business and Human Rights at New York University, said that more needs to be done to protect against the dangers posed by artificial intelligence to society.

“The voluntary commitments announced today are unenforceable, so it is critical that Congress, along with the White House, immediately enact legislation that requires transparency, privacy protections, and improved research into the many risks posed by generative AI,” Mr. Barrett said in a statement.

European regulators are poised to adopt AI laws later this year, prompting many of the companies to encourage US regulation. Several lawmakers have introduced bills that would require licensing for AI companies to release their technologies, create a federal agency to oversee the industry, and impose data privacy requirements. But members of Congress are far from agreement on the rules and are racing to educate themselves on the technology.

Policymakers are struggling with how to respond to the advancement of AI technology, with some focused on the risks to consumers while others are deeply concerned about falling behind rivals, especially China, in the race for dominance in the field.

This week, the House select committee on strategic competition with China sent bipartisan letters to US-based venture capital firms, demanding an accounting of the investments they have made in Chinese AI and semiconductor companies. Those letters follow months in which various House and Senate panels have questioned the AI industry’s most influential entrepreneurs and critics to determine what kind of legislative guardrails and incentives Congress should examine.

Many of the witnesses, including Sam Altman, the chief executive of the San Francisco start-up OpenAI, pleaded with lawmakers to regulate the AI industry, pointing to the potential for the new technology to cause serious harm. But regulation has been slow to get under way in Congress, where many lawmakers are still struggling to grasp what AI technology really is.

In an attempt to improve the understanding of lawmakers, Senator Chuck Schumer, Democrat of New York and the majority leader, started a series of listening sessions for lawmakers this summer, to hear from government officials and experts about the merits and dangers of artificial intelligence in many fields.

Mr. Schumer is also preparing an amendment to the Senate version of this year’s defense authorization bill that would encourage Pentagon employees to report potential issues with AI tools through a “bug bounty” program, commission a Pentagon report on how to improve AI data sharing, and improve AI reporting in the financial services industry.

Karoun Demirjian contributed reporting from Washington.
