ChatGPT maker investigated by US regulators over AI risks

The risks posed by artificially intelligent chatbots are being officially investigated by US regulators for the first time after the Federal Trade Commission launched a wide-ranging investigation into ChatGPT maker OpenAI.

In a letter sent to the Microsoft-backed company, the FTC said it would look into whether people were harmed by the AI chatbot’s false information about them, as well as whether OpenAI engaged in “unfair or fraudulent” privacy and data security practices.

Generative AI products are in the crosshairs of regulators around the world, as AI experts and ethicists sound the alarm over the amount of personal data the technology uses, as well as its potentially harmful outputs, from misinformation to sexist and racist comments.

In May, the FTC issued a warning to the industry, saying it was “focusing closely on how companies can choose to use AI technology, including new generative AI tools, in ways that have a real and significant impact on consumers”.

In its letter, the US regulator asked OpenAI to share internal material ranging from how the group retains user information to the steps the company has taken to address the risk of its models making statements that are “false, misleading or defamatory”.

The FTC declined to comment on the letter, which was first reported by The Washington Post. Writing on Twitter later on Thursday, OpenAI chief executive Sam Altman called it “very disappointing to see the FTC’s request start as a trickle and not help build trust”. He added: “It is very important to us that our technology is safe and pro-consumer, and we are confident that we follow the law. Of course we will work with the FTC.”

Lina Khan, the FTC chair, on Thursday morning testified before the House judiciary committee and faced strong criticism from Republican lawmakers because of her strong stance on enforcement.

When asked about the investigation during the hearing, Khan declined to comment but said the regulator’s broader concerns involved ChatGPT and other AI services “feeding into a huge trove of data” while there are no checks on what type of data is “entered into these companies”.

She added: “We’ve heard about reports where people’s sensitive information has appeared in response to a question from someone else. We’ve heard about libel, slanderous statements, untrue things coming out. That’s the kind of fraud and deception we’re concerned about.”

Khan also faced questions from lawmakers over her mixed court record, after the FTC suffered a major defeat this week in its attempt to block Microsoft’s $75bn takeover of Activision Blizzard. The FTC on Thursday appealed against the decision.

Meanwhile, Republican Jim Jordan, chair of the committee, accused Khan of “harassing” Twitter after the company alleged in a court filing that the FTC had engaged in “irregular and improper” behavior in enforcing a consent decree imposed on it last year.

Khan did not comment on the Twitter filing but said the FTC’s only concern was “the company’s compliance with the law”.

Experts are concerned about the amount of data being hoovered up by the language models behind ChatGPT. ChatGPT reached more than 100 million monthly active users within two months of its launch. Microsoft’s new Bing search engine, also powered by OpenAI technology, was used by more than 1 million people in 169 countries within two weeks of its release in January.

Users have reported that ChatGPT fabricates names, dates and facts, as well as fake links to news websites and references to academic papers, an issue known in the industry as “hallucinations”.

The FTC investigation digs into the technical details of how ChatGPT was designed, including the company’s work on addressing hallucinations and its oversight of the human reviewers who directly affect consumers. It also asked for information on consumer complaints and efforts made by the company to assess consumers’ understanding of the chatbot’s accuracy and reliability.

In March, Italy’s privacy watchdog temporarily banned ChatGPT while it investigated the US company’s collection of personal information following a cyber security breach, among other issues. It was reinstated a few weeks later, after OpenAI made its privacy policy more accessible and introduced a tool to check the age of users.

Echoing earlier admissions about ChatGPT’s shortcomings, Altman tweeted: “We’re transparent about the limitations of our technology, especially when we fall short. And our capped-profit structure means we aren’t incentivised to make unlimited returns.” However, he said the chatbot was built on “years of safety research”, adding: “We protect user privacy and design our systems to learn about the world, not private individuals.”
