
How to Detect Text Written by ChatGPT and Other AI Tools


Can you spot ChatGPT-generated text? The immensely popular AI is being used in emails, cover letters, marketing pitches, college essays, coding, and even some news stories. But ChatGPT’s output is often so convincingly humanlike that sussing out what’s written by a human and what’s written by a computer program may be best left to the computers themselves.

Most AI detection tools are free, albeit with character limits (something that can sometimes be bypassed by pasting in chunks of text at a time). An AI detector can serve many purposes, from making sure the text you write doesn’t come off as too generic and stilted to uncovering deception from job candidates.

Educators are at the top of the list of those who could use a reliable way to tell whether something has been written by AI. And they have indeed been among the early adopters of AI detector software. But just as ChatGPT and its kind can be unreliable, so too can the AI detectors designed to spot them.

In the ChatGPT subreddit, students routinely seek advice about allegations that they’ve used AI in their work. Such was the case for a high school student falsely accused by their history teacher of using ChatGPT. The teacher would not disclose which detection tool was used and, according to the student, felt justified in making the claim because the detector had previously helped catch AI-written text from other students who admitted to using ChatGPT.

A college student, meanwhile, received little redress after being given a 0% on an online exam when a professor ran the essay portion of the test through the plagiarism checker Turnitin, which determined it had a 55% likelihood of being written by AI.

These are cautionary tales we wanted to tell. (And if you’ve been wrongly accused of using AI, you can read “What to Do If You’re Falsely Accused of Using AI to Cheat.”) Since ChatGPT and the like are trained to imitate how humans speak, separating what an AI has cribbed from common usage from text actually written by people is not an easy task, even for AI.

There was some talk in the AI community of AI generators embedding a watermark: signals within AI-written text that software could detect without affecting the text’s readability. And though companies developing AI, including OpenAI and Google, told the White House they would implement watermarks, it’s now a new day for AI in D.C. (OpenAI shut down its own AI text-detection tool in mid-2023, citing its “low rate of accuracy.”)
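
To make the watermark idea concrete, here is a minimal, hypothetical sketch of one approach researchers have proposed: the generator quietly favors a secret “green” subset of words, and a detector that shares the secret checks whether green words show up more often than chance. This is an illustration only, not the method OpenAI or Google discussed with the White House, and the key, function names, and sample text are invented for the example.

```python
# Toy sketch of a "green list" text watermark, NOT any vendor's actual scheme.
# A generator secretly favors a pseudo-random "green" half of the vocabulary,
# chosen from each previous word plus a shared key; a detector re-derives the
# same green lists and checks whether green words appear more often than the
# ~50% that chance would predict.
import hashlib
import math

def is_green(prev_word: str, word: str, key: str = "secret-key") -> bool:
    """Deterministically assign `word` to the green or red list based on the
    previous word and a shared secret key."""
    digest = hashlib.sha256(f"{key}|{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0  # roughly half of all words are green

def watermark_zscore(text: str, key: str = "secret-key") -> float:
    """Z-score for 'more green words than chance.' Large positive values
    suggest the text came from a generator biased toward green words."""
    words = text.lower().split()
    if len(words) < 2:
        return 0.0
    greens = sum(is_green(prev, cur, key) for prev, cur in zip(words, words[1:]))
    n = len(words) - 1
    expected, std = 0.5 * n, math.sqrt(0.25 * n)
    return (greens - expected) / std

if __name__ == "__main__":
    sample = "The quick brown fox jumps over the lazy dog near the river bank."
    print(f"z-score: {watermark_zscore(sample):.2f}")  # human text scores near 0
```

On ordinary human text the score hovers near zero; text from a generator biased toward the green list would score several standard deviations higher.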

That said, we tested some of the most-used free AI detectors. I ran them on text from my own story, “Is Dall-E the Next Dior? How AI Is Trying to ‘Make It Work’ in Fashion,” as well as on text ChatGPT generated from the prompt: “Please write me an article on how AI is being used in the fashion industry, specifically Stable Diffusion, DALL-E 2, and Midjourney.”

The results show that AI is fairly good at detecting other AI but can frequently mistake text written by a person—in this case me—for AI. This is alarming for those who might have their words judged for academic or professional reasons by such checkers, possibly without even being aware of it.

1. GPTZero

GPTZero was crushing the dreams of college students just days after ChatGPT started making headlines. It was developed in 2023 by one of their own. Edward Tian, then a senior at Princeton, used the knowledge from his comp-sci major and journalism minor to analyze text for “perplexity” (how complex the ideas and language are) and “burstiness” (whether there’s a blend of long and short sentences rather than sentences of more uniform length).

Tian trained GPTZero on paired human-written and AI-generated text. While it can be used to test a single sentence (as long as it’s 250 characters or more), GPTZero’s accuracy increases as it’s fed more text.
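
As a rough illustration of those two signals, the toy sketch below scores burstiness as the spread of sentence lengths and uses a crude unigram stand-in for perplexity. It is not GPTZero’s actual model (real detectors score perplexity with a trained language model), and the function names and sample text are invented for the example.

```python
# Toy illustration of "burstiness" (variation in sentence length) and a crude
# stand-in for "perplexity" (how predictable the word choices are). Real
# detectors use a trained language model for perplexity; here we only build a
# unigram frequency model from the text itself, so treat the numbers as
# illustrative, not as a working AI detector.
import math
import re
from collections import Counter
from statistics import mean, pstdev

def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths (in words). Human prose tends to
    mix long and short sentences; very uniform lengths score near zero."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return pstdev(lengths) if len(lengths) > 1 else 0.0

def unigram_perplexity(text: str) -> float:
    """Crude perplexity under a unigram model estimated from the text itself.
    Lower values mean the wording is more repetitive and predictable."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    counts = Counter(words)
    total = len(words)
    avg_log_prob = mean(math.log(counts[w] / total) for w in words)
    return math.exp(-avg_log_prob)

if __name__ == "__main__":
    sample = ("Short sentence. Then a much longer sentence that wanders a bit "
              "before it finally ends. Another short one!")
    print(f"burstiness: {burstiness(sample):.2f}")
    print(f"unigram perplexity: {unigram_perplexity(sample):.2f}")
```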

GPTZero’s origin and speed to market made it popular among educators. But the program’s FAQ cautions against using results to punish students: “The nature of AI-generated content is changing constantly. As such, these results should not be used to punish students. We recommend educators to use our behind-the-scenes results as part of a holistic assessment of student work.”

Anyone can try GPTZero for free at GPTZero.me. It lets you check up to 5,000 characters per document via pasting or upload. There are three pricing plans: Essential ($8.33 a month for 150,000 words), Premium ($12.99 a month for 300,000 words), and Professional ($24.99 a month for 500,000 words).

The Results

Of the AI-written text I fed it, GPTZero said: “We are highly confident this text was AI-generated.” My own writing received “We are moderately confident this text is entirely human,” though it estimated that my work, for which I did not use AI at all, might be 13% AI.

2. Writer AI Content Detector

Writer makes an AI writing tool, so it was naturally inclined to create the Writer AI Content Detector. The tool is not robust, but it is direct: you paste a URL or up to 5,000 words into the box on its site and get a prominent percentage detection score right next to it. The product is free, and those who have a Writer enterprise plan can contact the company to discuss detection at scale.

The Results

Given about 4,000 characters of the ChatGPT-written piece, Writer AI Content Detector graded it “81% human-generated content” and recommended, “You should edit your text until there’s less detectable AI content.” For about 5,000 characters of my own piece, I got a “100% human-generated” score and a robot-issued “Fantastic!” compliment.

3. ZeroGPT

ZeroGPT is a straightforward, free tool for “students, teachers, educators, writers, employees, freelancers, copywriters, and everyone on earth,” which claims an accuracy rate of up to 98%. There are Pro ($7.99 a month for 100,000 characters and some bonus features), Plus ($14.99 a month for 100,000 characters and even more features), and Max ($18.99 a month for 125,000 characters and other features) accounts as well. It works on a proprietary, undisclosed technology the company calls DeepAnalyse, which it says is trained on text collections from the internet, educational datasets, and its own synthetic AI datasets produced using various language models.

Users paste up to 15,000 characters into a box on the site and receive one of the following results: the text is human-written, AI/GPT-generated, mostly AI/GPT-generated, most likely AI/GPT-generated, likely AI/GPT-generated, contains mixed signals with some parts AI/GPT-generated, likely human-written but may include AI/GPT-generated parts, most likely human-written but may include AI/GPT-generated parts, and most likely human-written.

The Results

ZeroGPT knew what I was up to when I submitted the AI-written piece. “Your text is AI/GPT Generated,” it said, before giving it a score of 98.4% AI/GPT. For my own writing, I was relieved to see this conclusion: “Your text is human written,” although it gave me a 1.76% AI-written score over three sentences that I definitely wrote myself.

4. Originality.ai

Originality.ai says it has a 99% accuracy rate and bills itself as the “most accurate AI checker.” It correctly guessed that the AI-written text I put in was AI, but it gave my own writing only a borderline chance of being written by a human.

You can test up to 750 words on its site for free. Pricing for its products is calculated per credit, with a credit covering between 50 and 100 words. There is a Pro plan for $12.95 a month for 2,000 credits or an Enterprise one for $136.58 per month for 15,000 credits. If you don’t want to be tied down to a plan, you can pay $30 one time for 3,000 credits.

The Results

Originality.ai successfully detected the AI text and put my own work at a 50% chance of being generated by a human.

Humans Are Still the Best AI Detectors

While these AI detectors were indeed able to tell AI-written text from text written by a human, precautions against relying completely on their results still apply. I’m a professional writer; those who are not might not have the same results with their own work. This is not a brag—it’s just some hope for me to cling to in these times of AI journalists taking jobs from human ones.
