ChatGPT Maker OpenAI Comes Up With a Way to Check If Text Was Written by a Human

  • ChatGPT maker OpenAI says its latest tool makes mistakes but is better prepared to handle output from recent AI systems than a version from 2019.
  • The startup, which built ChatGPT, wants feedback on the tool from parents and teachers.
  • The release comes two months after OpenAI captured the public’s attention when it introduced ChatGPT.

Artificial intelligence research startup OpenAI on Tuesday introduced a tool that’s designed to figure out if text is human-generated or written by a computer.

The release comes two months after OpenAI captured the public’s attention when it introduced ChatGPT, a chatbot that generates text that might seem to have been written by a person in response to a person’s prompt. Following the wave of attention, last week Microsoft announced a multibillion-dollar investment in OpenAI and said it would incorporate the startup’s AI models into its products for consumers and businesses.

Schools were quick to limit ChatGPT’s use over concerns the software could hurt learning. Sam Altman, OpenAI’s CEO, said education has adapted in the past when new technologies such as calculators emerged, but he also said there could be ways for the company to help teachers spot text written by AI.

OpenAI’s new tool can make mistakes and is a work in progress, company employees Jan Hendrik Kirchner, Lama Ahmad, Scott Aaronson and Jan Leike wrote in a blog post, noting that OpenAI would like feedback on the classifier from parents and teachers.

“In our evaluations on a ‘challenge set’ of English texts, our classifier correctly identifies 26% of AI-written text (true positives) as ‘likely AI-written,’ while incorrectly labeling human-written text as AI-written 9% of the time (false positives),” the OpenAI employees wrote.
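Those percentages are the classifier’s true positive rate and false positive rate. As a rough illustration only, not OpenAI’s methodology and using hypothetical counts, the short sketch below shows how such rates are derived from a labeled evaluation set:

    # Illustration of true positive / false positive rates on a labeled test set.
    # All counts here are hypothetical, not OpenAI's evaluation data.

    ai_written_samples = 1000      # texts actually written by AI
    human_written_samples = 1000   # texts actually written by humans

    flagged_ai = 260     # AI-written texts labeled "likely AI-written" (true positives)
    flagged_human = 90   # human-written texts wrongly labeled "likely AI-written" (false positives)

    true_positive_rate = flagged_ai / ai_written_samples          # 0.26 -> 26% of AI text caught
    false_positive_rate = flagged_human / human_written_samples   # 0.09 -> 9% of human text misflagged

    print(f"True positive rate:  {true_positive_rate:.0%}")
    print(f"False positive rate: {false_positive_rate:.0%}")

In other words, the tool misses most AI-written text in that challenge set while occasionally flagging genuine human writing, which is why OpenAI cautions against relying on it as a sole line of evidence.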

This isn’t the first effort to figure out whether text came from a machine. Princeton University student Edward Tian earlier this month announced a tool called GPTZero, noting on the tool’s website that it was made for educators. OpenAI itself released a detector in 2019 alongside a large language model, or LLM, that was less sophisticated than the one at the core of ChatGPT. The new version is better prepared to handle text from recent AI systems, the employees wrote.

The new tool is unreliable on inputs of fewer than 1,000 characters, and OpenAI doesn’t recommend using it on languages other than English. In addition, AI-generated text can be edited slightly to keep the classifier from recognizing that it wasn’t primarily written by a human, the employees wrote.

Even back in 2019, OpenAI made clear that identifying synthetic text is no easy task. It intends to keep pursuing the challenge.

“Our work on the detection of AI-generated text will continue, and we hope to share improved methods in the future,” Hendrik Kirchner, Ahmad, Aaronson and Leike wrote.

Source: NBC New York
