OpenAI scuttles AI-written text detector over ‘low rate of accuracy’

Image Credits: zmeel / Getty Images

OpenAI has shut down its AI classifier, a tool that claimed to determine the likelihood that a passage of text was written by an AI. While many relied on it, perhaps unwisely, to catch low-effort cheating, OpenAI has retired it due to its widely criticized “low rate of accuracy.”

The theory that AI-generated text has some defining feature or pattern that can be reliably detected seems intuitive, but so far it has never been borne out in practice. Although some generated text may have obvious tells, the differences between major language models, and the speed at which they evolve, make those tells impossible to rely on.

TechCrunch’s own testing of a gaggle of AI writing detection tools found that they were at best hit-and-miss, and at worst completely worthless. Of the seven generated text snippets fed to the various detectors, GPTZero identified five correctly, and OpenAI’s classifier only one. And that was with a language model that was not even new at the time.

But some took the classifier’s verdicts at face value, or close to it, even though OpenAI shipped the tool with a list of limitations so serious that one wonders why it was released at all. People worried that their students, job applicants, or freelancers had submitted generated text would run it through the classifier to check, and while the results were not reliable, they were just plausible enough to be tempting.

Since language models have only improved and multiplied, it seems someone at the company decided it was time to take this flawed tool offline. “We are working to incorporate feedback and are currently researching more effective provenance techniques for text,” reads a July 20 addendum to the classifier’s announcement post. (Decrypt appears to have been the first to notice the change.)

I’ve asked OpenAI about the timing and reasoning behind shutting down the classifier, and will update this post if I hear back. But it is notable that this comes just as OpenAI has joined many other companies in a “voluntary commitment,” led by the White House, to develop AI ethically and transparently.

Among the commitments made by the companies is the development of robust watermarking and/or detection methods. Or attempts at them, anyway: despite every major company working toward this end for the last six months or more, we have yet to see any watermark or identification method that has not been circumvented.

There is no doubt that the first to achieve this feat will reap a considerable reward (any such tool, if truly reliable, would be valuable in countless situations), so it probably isn’t necessary to make it a condition of any AI agreements.
