Best AI Detection Tools at a Glance
The gold standard for AI detection with 99% accuracy and user-friendly visual features.
Perfect for educators with Google Docs integration to easily scan student assignments.
A strong paid alternative to Originality AI with decent accuracy across models and handy sentence-level assessment.
Half the internet is filled with AI crap. Smell it? I sure do, and I don’t like it. That’s why I’m here today—to guide you through the best GPT-4 AI detection tools available.
I’m an AI copywriter. I create content using various AI copywriting tools and regularly test them for AI presence. So, trust me, I know what I’m talking about.
Just look at my Originality AI stats:
I know why you want to pick the best AI detection tool. You want to make sure your writings and creations won’t get tagged as AI-generated, even if you’ve used AI to make them. (Yeah, yeah, you’re not here to look at pretty pictures).
I’ll run each tool through five tests (where possible). First up, GPT-4 (first without a prompt, then with). Next, Claude2 (again, first without a prompt, then with one), and finally, human-written content. For each tool, I’ll use the same prompts but will vary the topics for a bit of spice. Don’t worry; it won’t affect the overall result.
You might think five tests per tool aren’t enough. First, it’s done for clarity. Second, I’ve already run massive tests on these AI detection tools. I too was once on a quest for the best. Hoped free ones would do the job, but nope. More on that later.
During the tests, I’ll focus solely on AI detection capabilities. If you’re interested in a more detailed review of a specific tool, I’ll provide links to reviews that’ll help you make a final choice.
- Originality AI — Best overall
- Passed AI — Best for educators
- Winston AI — Decent alternative to Originality AI
- Copyleaks — Unreliable AI detection
- Sapling AI — Mediocre detection abilities
- GPTZero — Disappointing accuracy
- Crossplag — Mixed results
- AI Detector Pro — Confusing interface
- ContentAtScale — One of the better free options
- ContentDetector.AI — Solid free alternative
- KazanSEO — Completely ineffective
- Writer.com — Terrible detection
- GLTR — Outdated, fails to detect modern AI
- Hugging Face — Outdated, only detects GPT-2
As an experienced AI content creator, Originality AI remains my daily driver and the undisputed king of detection. With 99% accuracy even on advanced models, it precisely tags AI content through helpful color coding and visual hover pop-ups. The team actively nurtures it by rapidly responding to new developments like GPT-4. For most everyday use cases, Originality AI is the tool I rely on to maintain integrity across projects.
- Best For: Any Content Creator
- Price: $30 pay-as-you-go or $14.95/mo
- High Accuracy: Catches GPT-4 AI content with up to 99% accuracy.
- Plagiarism Checker: Detects direct copying and paraphrasing.
- Readability Score: Provides insights based on a 20k study for optimal readability.
- Team Management: Add unlimited team members and monitor their activity.
- Multilanguage Support: Breaks language barriers across 15 languages.
- Shareable Reports: Easily share content analysis results.
- API Integration: Integrate AI detection into workflows.
- Paraphrase Detection: Uniquely detects paraphrased content.
- Pay-as-you-go: $30 one-time for 3000 credits (1 credit = 100 words). Additional credits at $0.01 each. Valid for 2 years.
- Monthly Subscription: $14.95/month with 2000 monthly credits. Additional credits at $0.01 each.
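To make the credit math concrete, here's a quick back-of-the-envelope helper (my own sketch, using the rates quoted above: 1 credit per 100 words, $0.01 per extra credit):

```python
import math

WORDS_PER_CREDIT = 100
PRICE_PER_CREDIT = 0.01  # top-up price in dollars

def scan_cost(words: int) -> tuple[int, float]:
    """Credits consumed and the top-up cost for scanning `words` words."""
    credits = math.ceil(words / WORDS_PER_CREDIT)
    return credits, round(credits * PRICE_PER_CREDIT, 2)

print(scan_cost(2500))  # a 2,500-word article: (25, 0.25)
# So the $30 pay-as-you-go pack (3,000 credits) covers about 300,000 words,
# and the $14.95/month plan (2,000 credits) covers about 200,000 words per month.
```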
- Content Marketers: Verifying content originality in an AI-dominated world.
- SEOs: Confirming content is unique and avoiding penalties.
- Publishers: Scanning entire sites to assess AI content risks.
- Agencies: Validating human-created content and monitoring team outputs.
- Writers: Proving content authenticity and managing false positives.
At this time, Originality AI is the best AI content detection tool available.
It has everything needed and nothing extra. I mainly use it for AI detection — the other features aren’t necessities for me. I always check readability and plagiarism with Grammarly. I have an annual subscription there so no credits get deducted.
With Originality, additional credits are charged for plagiarism and readability checks. The plagiarism detection also isn't flawless — decent but not perfect.
This tool wasn’t my first choice. I started with ContentAtScale, KazanSEO, and Hugging Face's detector. All free options I’ll cover later.
They handled GPT-3 and GPT-3.5 well enough initially. But then came the far more advanced GPT-4. It was a whole new ball game. The free tools simply couldn’t keep up anymore.
So I had no choice but to get a paid detector, Originality AI. I expected it to be expensive but nope. $20 covered two or even three months of scans. The pricing has changed now as improvements are constantly being made. But even so, it remains reasonably affordable in my book. There are pricier yet inferior options out there.
For me, Originality AI remains the gold standard for AI detection right now. It can detect AI-generated text even when it's produced by the best AI writing tools like Jasper or ContentAtScale.
I won’t ramble on further. Let’s dive in.
Test 1: ChatGPT (no prompt)
Let's have ChatGPT write an essay on "What is AI Copywriting?"
Originality AI says:
97% AI written, 3% human. Not 100% but an A-grade nonetheless.
Notable are the handy visual cues. First, color-coded text:
The color-coding isn't based on the overall percentage. One section could be green for human text, another red for AI, and so on. It varies throughout the document.
Second, pop-up messages on mouseover:
This makes it super easy to work with content — no reading tea leaves required.
On to the next test!
Test 2: GPT-4 + prompt
Essay topic: AI copywriting vs traditional copywriting
Prompt: You are an AI copywriter. You know how important it is to use different copywriting frameworks. Writing style: semi-professional. Use everyday language, cutting down the use of buzzing words. Your language is vivid and engaging. You know how to start smart and finish in the same way. You are a master of words. Apply mood to your writing: pain, frustration, relief, and humor.
Checking Originality AI:
56% human written content detected. Not too shabby for my improvised prompt! Of course any AI detector can be bypassed with savvy prompt engineering. But that doesn't make Originality AI bad. It makes me good at prompts. And this one was pure improv.
Test 3: Claude2 (no prompt)
I'll be straight with you — most AI detectors can't spot Claude2 yet. But I believe in showing what tools can currently do. I'm not here to hype and sell. I'm here to show you the real deal.
Let's have Claude2 write an essay on why it's better than ChatGPT:
Here's what Originality AI reports:
100% AI content found. Perfect score this round!
Test 4: Claude2 (with prompt)
Now let's test Claude2's output using a prompt.
We are writing about using AI in copywriting:
The article is short but excellent. Judge for yourself: if you didn't know this was written by AI, would you ever guess a human didn't write it? Neither would I. Read on:
Tired of Writing? How AI Can Lend a Helping Hand
We've all been there - staring at a blank page, cursing writer's block. You need to come up with brilliant copy that sells, but your mind feels sluggish. If only you had some help!
Well, the future is here. AI tools like ChatGPT are game-changers for copywriters. No more struggling solo! But to really take advantage, you need to use them strategically.
First, don't hand over the reins completely. AI lacks human creativity and strategic thinking. Give it a direction, intent and parameters, then let it work its magic.
Next, get it to generate multiple options. You'll likely need to refine or combine its suggestions. Treat it like a brilliant intern, not a master copywriter.
Use AI for research and data gathering too. It can summarize industry reports and find relevant facts fast.
Finally, edit, edit, edit. Just like with human-written drafts, you'll need to tweak for tone, clarity and impact.
With practice, you can harness AI as your trusty AI. It takes time to get the balance right, but the productivity boost is worth overcoming the learning curve. Soon, you'll whip up copy with half the effort.
So don't despair or trash your computer the next time you face writer's block. Put that AI to work and get more done in less time. You provide the vision, it handles the busywork. Together, you'll craft copy that truly resonates.
Originality AI's verdict:
100% original content. Does this mean Originality AI is bad at catching AI content? Nope. It means: 1) I'm good at prompt engineering (it's true). 2) Claude2 is more advanced than ChatGPT at generating human-like content.
Test 5: Human written content
I enjoy reading SearchEngineLand's articles – top notch stuff. Let's check a snippet:
Originality AI says:
100% original, 0% AI. No surprises here!
I use Originality AI daily and have run over 1200 tests. Here's how it generally performs:
- GPT-4 without prompts: 99%
- GPT-4 with prompts: 50/50
- Claude2 without prompts: 70%
- Claude2 with prompts: 30%
- Intuitive, newbie-friendly interface
- Good speed (not instant but understandably so)
- Color-coded text in sections, not just overall %
- Pop-up messages on mouseover
- Helpful support
- Team management options
- Claude2 detection is just ok
- Plagiarism detection could be better
I'm a big Originality AI fan. The plagiarism checking could be improved by scanning beyond just page one results.
Yes, it can be bypassed with prompt engineering know-how. But without prompts, it'll catch just about anything AI-generated. However, if human text closely follows a template, it may flag 50% as AI just because it's formulaic. Common sense always required.
As an AI copywriter, I understand Claude2 surpasses ChatGPT in some ways. Prompts work more seamlessly in Claude. Even if Originality AI releases an update targeting Claude2, I doubt much will change.
So no knocks against Originality here. Any grievances belong to the Claude2 developers and those who've mastered prompt engineering.
I use Originality AI daily, and I recommend it not just for the affiliate commissions but because it's truly the best AI detector I've found.
Passed AI is the perfect supplement to Originality AI, thanks to its awesome Google Docs integration that makes scanning student assignments seamless. With 99% accuracy like Originality, it allows educators to easily uphold academic integrity right within Google Docs, where the work happens. The unmatched Chrome extension takes it up another notch. For educators specifically, Passed AI is the tool of choice.
- Quick Scans: Get fast verdicts on docs.
- Audit Reports: See edits, duration, and more.
- Google Docs: Makes it super convenient.
- Plagiarism Detection: Top-notch accuracy.
- User Education: Learn to interpret results.
- Privacy: You make the rules.
- Custom Courses: Stay organized.
- Document Timeline: Shows AI use.
- Monthly: Standard Plan — $9.99/month
- Institution Plan: Flexible pricing
Extra scan credits can be purchased ($0.10 per credit; 1 credit scans up to 200 words; credits don't expire).
- Educators: Uphold integrity.
- Students: Verify original work.
- Institutions: Protect reputations.
- Researchers: Find authentic content.
- Content Creators: Confirm originality.
Let's try out Passed AI, another AI detection tool that integrates with Google Docs.
I only recently discovered Passed AI thanks to some competitors. I run a blog, though I don't post as often as I'd like due to my day job. Time is limited.
Anyway, Passed AI has a user-friendly interface like its main rival. No surprises since it utilizes Originality.ai's API. Any Originality updates, like multi-language support, are instantly reflected in Passed AI too.
But Passed AI does have one very cool unique feature — its browser plugin. I'm a big fan. It could be a game-changer not just for educators, but for anyone actively working with copywriters.
You can read all about it in my in-depth Passed AI review.
For now, let's get to the juicy tool testing!
Test 1 — ChatGPT essay, no prompt
Let's have ChatGPT write an essay about AI's impact on education.
Run it through Passed AI:
95% AI content detected. Not bad, but not perfect either.
I'll note up front that unlike Originality, you can turn off text highlighting here, like this:
Just like that.
And unfortunately there's no color coding: it's either AI content, human content, or in between. And it's all highlighted the same color.
There are also no visual hover hints like in Originality AI. For some that may be a downside.
Test 2 — ChatGPT essay with prompt
Same prompt, but this time our essay topic is how to detect if a student used ChatGPT.
Passed AI says:
52% original, 48% AI. Again, similar results to Originality.
Currently Passed AI runs on the Originality API, so results are very similar. However, Passed AI does have one unique difference from Originality AI that I'll get to below.
Test 3 — Claude2, no prompt
Essay topic: Spending summer wisely as a student
Checking Passed AI:
99% AI content found. Can't complain about that. Not a perfect 100% but close enough.
Test 4 — Claude2 with prompt
Essay topic: Elderly care
Passed AI results:
75% human written, 25% AI. Once more, on par with Originality since it's the same backend.
Test 5 — Checking human content
Let's try a hand-written article from AuthorityHacker.
Passed AI says:
99.4% human, 0.6% AI. Excellent.
I've run over 20 tests on Passed AI. For detecting AI content, it's on equal footing with Originality. No surprises there since it uses their API.
- Simple, user-friendly interface
- Speed on par with Originality
- Good support
- Handy Chrome plugin
- No color-coding of text like Originality
- No visual cues on mouseover
I'm a fan of Passed AI. Its results match Originality since it uses their API. But its Chrome plugin has awesome capabilities, which could make it the top choice not just for educators, but for any content creator. Your call. If unsure, check my in-depth Passed AI review.
As an alternative to Originality AI, Winston AI is decent but needs improvement in transparency and accuracy. With around 60% accuracy on advanced models, it misses more AI content but may work for some. Handy sentence-level assessment provides a bit more detail, but the lack of visual aids like highlighting is a drawback. It shows potential, but more clarity on training methodology and improvements to precision would go a long way.
- Best For: Content Creators
- Price: Starts at $18
- Detects various LLM content.
- Boasts 99.6% accuracy.
- Quick scans for instant results.
- OCR handles scanned and imaged docs.
- Sentence-level assessment.
- Plagiarism checker verifies authenticity.
- Flexible pricing for different needs.
- User-friendly interface.
- Regular updates to match LLM advances.
- Essential: $18/month, 80K word scans, advanced AI detection, email & chat support, and more.
- Advanced: $29/month, 200K word scans, advanced AI and plagiarism detection, and more.
- Elite: $49/month, 500K word scans, advanced AI and plagiarism detection, and more.
- Essential: $12/month ($144/year), 80K word scans, advanced AI detection, and more.
- Advanced: $19/month ($228/year), 200K word scans, advanced AI and plagiarism detection, and more.
- Elite: $32/month ($384/year), 500K word scans, advanced AI and plagiarism detection, and more.
- Writers verifying content originality.
- Educators checking student work.
- Publishers assessing content authenticity.
- Businesses maintaining content integrity.
- Individuals investigating content origins.
Test 1: ChatGPT, no prompt
Let's have ChatGPT write an essay on the importance of choosing an AI detector.
Winston AI says:
0% human content, 100% AI. Excellent!
A few things I noticed:
The scan results display on a separate tab, not inline like Originality AI and Passed AI:
Metrics like Human Score, Plagiarism, Readability appear at the top. The text itself sits below under AI Prediction Map:
Also like Passed AI, there's no granular color-coding of the text. Instead, sentences fall into just three buckets based on the likelihood of AI: Likely AI, Possible AI, Unlikely AI:
And no pop-up messages on mouseover. Originality AI has the best visual cues.
Moving on to Test 2!
Test 2: ChatGPT with prompt
Essay topic: Will AI kill copywriting?
Same old prompt, but let's see how Winston AI fares:
No trouble for Winston here. But what about Claude2?
Test 3: Claude2, no prompt
Essay on weight loss after 50 for women:
Winston AI's take:
More interesting now. 28% human, 72% AI. Winston did well since even Originality AI sometimes misses Claude2 with no prompt.
Test 4: Claude2 with prompt
Essay on maintaining mental health when working from home:
I love what Claude2 wrote in just 5-10 seconds! Text:
The Remote Worker's Guide to Preserving Sanity
The siren call of remote work lured you in with promises of flexibility, zero commute, and working in your pajamas. But the honeymoon stage is over, and the harsh realities of WFH are setting in. Your work-life balance is crumbling like a week-old biscuit. You haven’t seen real sunlight in days. And you’re pretty sure your coworkers have forgotten you exist.
Welcome to the mind-numbing abyss of remote work isolation. While your partner likes your constant presence at first, their patience is wearing thin. Even your cat seems annoyed at your clinginess. Don’t panic just yet though - maintaining your mental health while working from home is possible. With some adjustments, you can come out of hiding from under your mountain of unfolded laundry.
First, set up a dedicated office, not just a corner of your bedroom. Otherwise work will invade your personal space like an unwanted party guest hogging the chip bowl.
Give yourself permission to step away from your desk hourly, even for quick moments of head-clearing zen. Working from your couch in a horizontal position does not count as a break.
Set rigid work hours and stick to them like superglue - no answering emails at midnight. Disconnect at quitting time, or you’ll fall into a pit of burnout despair.
Most importantly, get out of the house daily, even if just for a short walk around the neighborhood. Don’t become a hermit. Talk to actual humans; they still exist out there.
With structure and boundaries, you can have the WFH cake and your sanity too. Now go ahead: close that laptop, change out of your pajamas, and rejoin the world of the living.
Let's see Winston AI's verdict:
80% human, 20% AI generated. Not bad given Claude2's capabilities.
Test 5: Human content
Testing a Forbes article on making money online:
Winston AI says:
100% human, 0% AI. Perfect!
Behind the scenes:
I've run over 50 tests on Winston AI. Here are my conclusions:
- GPT-4, no prompt: 95%
- GPT-4 with prompt: 50/50
- Claude2, no prompt: 60%
- Claude2 with prompt: 30%
- Decent at detecting AI content (not great, not terrible)
- Reasonable pricing
- No transparency on training data
- Can't test Plagiarism Checker in free version
The bottom line:
I keep going back and forth on Winston AI. It has pros and cons. On one hand, it attracts me. On the other, it falls short in areas. Like meeting someone who seems great but has some undefined flaw you can't move past.
Objectively, I don't know what data Winston used for training. Yes, they analyze text features like perplexity and burstiness, do linguistic analysis, mention data training — it's all standard. But for a paid tool, I want full transparency into how it was trained.
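For context: perplexity measures how predictable text is to a language model, and burstiness captures how much sentence length and rhythm vary (human writing tends to vary more). Perplexity needs an actual model, but burstiness can be sketched in a few lines. This is my own toy simplification for illustration, not Winston AI's actual formula:

```python
import re
from statistics import mean, pstdev

def burstiness(text: str) -> float:
    """Coefficient of variation of sentence lengths (in words).
    Higher = more 'bursty', loosely associated with human writing."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return pstdev(lengths) / mean(lengths)

uniform = "I like cats. I like dogs. I like fish."
varied = "Cats. I have always adored the way dogs greet you at the door. Fish?"
assert burstiness(uniform) < burstiness(varied)  # the varied text is burstier
```

Real detectors combine many such signals, which is exactly why the training data behind them matters.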
Here's why it matters. When I bought my wife a used car recently, I asked the seller endless questions about its history. Who drove it, how long, maintenance habits — everything that signals reliability. He looked at me blankly and said he was selling it for his brother who lived out of town. He couldn't answer my questions, only assure me "It runs great!" Well, that wasn't enough for me.
I need to know EVERYTHING about a product, not just that it works. Originality AI gets it — they openly share training data volume. That gives me peace of mind. Food for thought.
One more gripe — I couldn't test Plagiarism Checking on Winston's free trial. I get that we're reviewing AI detection, but I want the full test drive experience! Passed AI lets you try everything.
Still, I generally like the tool. More transparency around training data would help. But overall, it's solid, which is why I ranked it as a top contender. The flaws aren't dealbreakers.
Copyleaks AI content detector offers a suite of tools to detect plagiarism, AI content, and ensure authenticity. With a focus on digital trust, it provides solutions for educators, businesses, and more. Features include catching content from GPT-4, Bard, paraphrasing, and image text checks. Committed to accuracy through user feedback, Copyleaks aims to be a reliable verification tool. But is that true?
- AI Content Detection: Spots human vs AI text from GPT-4, Bard, etc.
- Plagiarism Detection: Advanced AI checks content in 100+ languages.
- Paraphrasing Detection: Catches various paraphrasing techniques.
- Image Text Checks: Unique feature to catch copied image text.
- Deception Detection: Catches attempts to trick the software.
- Source Code Checks: Detects copied and modified code.
- User Feedback: Allows rating results to improve the model.
- 100+ Languages: Comprehensive language support.
Trial: 10 Pages
- 100 Pages — $10.99
- 250 Pages — $24.99
- 500 Pages — $40.99
- 1000 Pages — $75.99
- 2500 Pages — $184.99
- 5000 Pages — $349.99
- 10,000 Pages — $679.99
- 120,000+ Pages — Custom
Annual Plans (Save 16%)
- 1200 Pages — $9.16/Month
- 3000 Pages — $20.82/Month
- 6000 Pages — $34.16/Month
- 12,000 Pages — $63.32/Month
- 30,000 Pages —$154.16/Month
- 60,000 Pages — $291.66/Month
- 120,000 Pages — $566.66/Month
- 120,000+ Pages — Custom
- Each Page = 250 words scanned
Let's break down the pricing.
The free plan lets you scan 10 pages, 250 words each. That's 2,500 words total.
The $10/month paid plan gets you 25,000 words checked. And so on.
It's kinda pricey if you ask me.
- Educators: Maintaining academic integrity
- Content Creators: Confirming authenticity
- Businesses: Checking internal docs
- Developers: Preventing code plagiarism
- Legal Firms: Verifying document credibility
In my subjective opinion, Copyleaks isn't a tool for everyday users. Frankly, at first I got lost and didn't really understand what was what. When I registered, I tried scanning text in their app at https://app.copyleaks.com/, not via the direct link for AI detection only. And here's what I saw:
I chose Free Text since that's what I needed:
It's a tangled mess if you ask me. But let's try it out.
I had ChatGPT write a Copyleaks review. Pasted it in the window above and hit Scan. I only checked "AI generated text" since I wanted to see how well it detects AI. And here's what I got:
The whole text was highlighted red. Up top it says "0% match." Looks like it's reporting on plagiarism, not the % of AI content. Likely 100% AI since everything is red. Lots of settings and parameters... I conclude the Copyleaks app isn't for average Joes. Hard to compare it to basic AI detectors when there's so much going on. Do you need all this? Up to you.
I'll switch to a simpler test method because simplicity matters more to me. What I showed above is the Copyleaks app at https://app.copyleaks.com/. I'll run all further tests on their standalone AI detector at https://copyleaks.com/ai-content-detecto
Note: The 250 word trial limit means I'll ask ChatGPT and Claude2 to generate content around 250 words in length going forward. It is what it is. And I really dislike when free trials restrict features during the testing phase.
Note 2: I was somewhat mistaken with the first note. You will see why later on.
Test 1: ChatGPT, no prompt
Let's have ChatGPT write an article on staying original in the AI era:
The essay ChatGPT produced is 262 words long. Now let's see what Copyleaks has to say about it:
Copyleaks does not provide an overall percentage score like other AI detection tools do. We simply see the text highlighted in red, but there is no indication of the exact percentage of AI content. This is not ideal.
However, if you hover over a section of the text, a pop-up estimate like "69% probability for AI" appears. I would prefer to get a clear overall percentage right away. So, rounding the estimate, let's say this is 70% AI generated text and 30% human authored content.
Based on this first test alone, I can already say the results are rather unsatisfactory. The previous three tools I tested were nearly flawless at detecting purely AI generated text in my first test. But Copyleaks is off by a significant margin of 30%.
Test 2: ChatGPT with prompt
Let's prompt ChatGPT to write an essay on how to blend AI and human content together.
Checking the output in Copyleaks:
"This is human text" declares Copyleaks. Oh really? However, when I hover over the text that is now completely unhighlighted, the tool says only 41.8% is detected as human. Here's the proof:
So simple math: If roughly 40% is human, why the overall verdict of "human text"? Can anyone explain this?
Test 3: Claude2, no prompt
I'm not even sure it's worth testing Copyleaks on content generated by Claude2, but let's give it a shot.
Let's have Claude2 write an essay on using itself to create human-like content:
Claude2 offered up some pretty decent advice. Now to see what Copyleaks makes of it:
And once again we get "This is human text." Oh come on, really Copyleaks?
Hovering shows a mere 54.9% estimated as human. Displaying the actual percentage would be better than the blanket "human text" label. But nope...
Test 4: Claude2 with prompt
Shall we keep going, friends? Well, in for a penny, in for a pound.
Let's have Claude2 write on how to avoid GPT-4 detection:
This time the essay didn't quite appeal to me personally. But let's test it:
Result: 52.6% probability for human content, something we only learned by hovering over the text. Yet the overall verdict remains "This is human text."
Test 5: Checking human content with Copyleaks
Let's test a wikiHow article about happiness:
After these trials, I could use some joyful content! Hence the topic.
Let’s test it:
The result seems fairly accurate: 74.5% human text. Though I'd expect 100% for truly human-written content.
What I Like:
- Excellent plagiarism detection. But is that what we're here for?
What I Don't Like:
- Unreliable AI detection
- Unfriendly interface (in the app)
- No way to see results in the app if it's deemed human written
- The AI detector feels disconnected from plagiarism checking and doesn't work quite right in the app
The Bottom Line
Unfortunately, I can't recommend Copyleaks as an everyday AI detection tool. I know it excels at catching plagiarism, as evidenced below:
I tested this same text in Originality AI when reviewing it, just to verify each tool's accuracy. As we can see, Copyleaks does a superb job on plagiarism detection.
But I entered the Copyleaks app itself, and here's what happens:
When using the AI detector on their homepage, my test credits didn't deplete during trials. So it seems you can use the homepage tool as much as needed during the free trial to check AI content.
Nice! But let's test some Claude2 content with a prompt that was previously flagged as human. And here's what we get:
I see 0% plagiarism, but checking AI presence was my main goal. And there's simply no AI detection score provided here. Yes, I did check "AI detection" before scanning.
The conclusion: It seems if Copyleaks deems content as human-written, you won't know the % of human content when using the app itself. It is what it is.
Copyleaks is a superb tool for catching plagiarism and more. It analyzes many metrics when scanning text. But it falters on AI detection.
I understand the desire to combine everything into one platform. But the execution falls short here. If claiming to have an awesome AI detector, you need to deliver.
Especially if touting some study declaring yours the best tool. You may see that claim at the very top of their homepage when you visit. (It may no longer be current when you read this).
It reminds me of a childhood story. A classmate of mine (how many years has it been) loved boasting of his strength and toughness before school, claiming he could beat anyone. I never liked that. So one day I called him out behind the school, one-on-one. Landed a couple hits, and he fell. My friends and I left, and he resumed boasting of his might. If you're reading this, no hard feelings, buddy!
Grandiose claims without proof are just empty noise.
I did read that study end-to-end. The fellow did good work, kudos. But first, it's no longer current since some tools are now defunct. Second, I only saw end results — the testing methodology and text samples used weren't clear.
The bottom line — I can't recommend Copyleaks as an everyday AI detection tool. But feel free to try it out — it's free and no credit card needed.
Now let's move on to the next tool.
Sapling's an advanced AI platform for customer-facing teams. Gives real-time suggestions so peeps can respond faster. Has an AI detector to catch robot-written crap. Nice.
- AI Detector spots if text is AI-made. Gives probabilities for GPT-3.5, GPT-4, ChatGPT, etc.
- Autocomplete Everywhere™ uses deep learning for fast suggestions everywhere.
- Sapling Suggest recommends live chat responses for quick fixes.
- Conversational Insights analyzes chats with NLP for useful info.
- Enterprise Security has TLS, AES-256 encryption, PII redaction.
- Integrates with ServiceNow, Salesforce, Zendesk, etc.
- Catches 60% more language issues than other checkers.
- Snippet library enables consistent, professional communication.
- FREE: $0/month with basic features and limited snippets.
- PRO: $25/month ($12 if annual). For individuals. Advanced features, unlimited snippets, premium suggestions.
- ENTERPRISE: Custom pricing for teams. Hit up Sapling for details.
- API: Metered plan for developers.
- Sales Teams: Seal deals faster with better communication.
- Support Teams: Respond accurately and quickly to customers.
- Content Creators: Verify originality by detecting AI content.
- Developers: Integrate AI with Sapling's API.
- Managers: Get insights into team communication, improve over time.
I signed up for Sapling specifically for this review. And I'll say it: I like it! It has as many features as the last tool (different ones, but still), yet it feels...homey. That's the only word for it. Everything here is great. It's cozy. You know when you visit someone and it's so comfy you wanna stay? That's this.
First, and most valuable — the 30-day free trial with no CC needed. That's awesome in today's internet, and I appreciate it.
Second, it's intuitive. I like it here. The tool has many capabilities. And that worries me. Will it be like Copyleaks? Just another trend-chaser? Let's find out.
Test 1: ChatGPT (no prompt)
Wrote an article on "how to start an online business working full-time"
Check in Sapling:
60.8% Fake. Hmm...I'd like to see at least 90%, but it is what it is. Now about this detection tool's features.
We see the content is fake. Okay. What if we want to dig deeper into what's actually fake? Sapling gives two options.
Option 1 — Full text:
We see the whole thing marked up — what's fake and what's not. It's all on one page. Just scroll to see it all.
Next, we can see how Sapling flagged the AI content, sentence by sentence.
Very nice and convenient.
Right there on the same page are instructions on getting the most out of the tool:
Cool. For user-friendliness and interface, it's flawless. No complaints there. But accuracy in detecting AI content raises some doubts. Let's continue.
Test 2: ChatGPT with a prompt
Same prompt. Wrote an article on "how to stay fit while working from home"
Came out decent. Good draft, I'd say.
0% fake. Dang...I wanted more, more, more fake content from Sapling!
If we look close at "Sentences", Sapling did flag some sentences as AI generated:
But in "Full" it's all white:
So 0% AI generated content. *Sigh*
Test 3: Claude 2 without prompts
Let's write on "how to make AI-generated content sound more human":
10.7% AI content. Yes, I'd like to see better, but it is what it is. Like before, "Full" and "Sentences" show red lines. Won't share again — you get the idea.
Test 4: Claude 2 with a prompt
Let's write on "how to lose 50 pounds in 6 months". I know this topic — I did it years ago with exercise and diet change, that's it. But let's see what Claude 2 says:
Alright, decent advice overall. Let's check:
0% AI content. Seems it didn't catch it, but "Sentences" shows:
While "Full" is all white, sentence-level review isn't so kosher. My final thoughts later.
Test 5: Human Content
Testing a Wikipedia article:
Sapling AI detector result:
- Friendly interface. Really feels like home.
- Speed. The detection tool is real quick.
- When editing, the detector automatically processes, no need to re-copy/paste text.
- Can see AI content in Full and sentence-level, all on one page.
- Instructions right there, no clicking around to figure it out.
- Unfortunately, Sapling's detection abilities are limited. Not bad but far from ideal. Room to grow.
To summarize: I really liked Sapling, and I'm a nitpicky perfectionist. There's a lot to like — I covered the specifics above. But the AI detection has some catches.
It's a good, even unique tool, but it catches AI text roughly 50/50. Which is already decent. I'd recommend checking it out, especially if the other features interest you. Detection here is more of an add-on feature, and as an add-on it works alright.
Given the care clearly put into the overall tool (I can feel it a mile away), I'd say give it a look and maybe bookmark to re-check every month or so. Will they ever reach Originality AI's perfection? Can't say, but maybe if devs are into it.
GPTZero is an AI detector meant to ID content from ChatGPT, GPT-4, Bard and more. It aims to bring transparency as AI content spreads. Claims to be the gold standard, but it's just another tool.
- Detects ChatGPT, GPT-4, Bard, etc.
- Chrome extension to catch AI browsing the web.
- Dashboard tailored for educators, student writing, ed tech.
- APIs for integrating detection into tools.
- Says it constantly improves accuracy.
- Optimized for student writing, academic prose.
- Highlights sentences likely AI-generated.
- Doesn't store or collect uploaded docs.
- 5K characters per doc
- 3 file limit per batch
- Free forever
- No plagiarism scanning
- No API access
- $9.99/month (usually $19.99)
- 50K characters per doc
- Unlimited batch uploads
- 1 million words detection or 100K plagiarism/month
- No API access
- $19.99/month (usually $29.99)
- 50K characters per doc
- Unlimited batch uploads
- 2 million words detection or 200K plagiarism/month
- Access to premium detection model
- No API access
- Educators: Detect AI in student work
- Content Creators: Ensure originality
- Researchers: Validate authenticity
- Enterprises: Integrate into workflows
- Users: Spot human vs. AI content online
Let's start — this is my first time trying GPTZero. Logged in via Google, and the first thing I saw was:
Nice to be welcomed like that. A plus.
The interface is simple and no-frills. But that's not a negative here. Simplicity is what's needed.
However, testing text didn't go so smoothly.
I'll say upfront — I hold this one to higher standards, since it's positioned specifically as an AI and plagiarism detection tool, not something else. Where detection was an add-on before, GPTZero *is* the detection tool.
Test 1: ChatGPT without prompt
Wrote an article on "how to avoid binge eating at night":
Good, I see a 62% probability this content was AI-generated. However, I expected GPTZero to highlight the text it thinks is AI. But it doesn't:
Yet in the welcome video I saw text highlighted yellow. Here's the video:
Skip to 1:23, that's where the author shows the result interpretation.
And yes, not 62% AI content, but 100% AI content. GPT-4, in fact.
Test 2: ChatGPT with a prompt
After that first letdown, I expect another.
But let's not speculate. Wrote an article on "how to avoid AI detection with GPT-4" using the same prompt:
Here's GPTZero's result:
49% AI written text. But again, no highlighted lines so I can't see what it thinks is AI. Also no tooltip hints when hovering. Not great for a paid tool.
Seems it just uses perplexity and burstiness to detect AI. I cover what those are in my article "How does AI detection work" — but there are many other metrics.
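Those two metrics are easy to sketch. Below is a toy illustration, not GPTZero's actual implementation: perplexity is the exponentiated average negative log-probability per token, and burstiness is how much that perplexity swings from sentence to sentence (human writing tends to swing more). I'm standing in a crude unigram "language model" built from the text itself; real detectors use a large LM's token probabilities.

```python
import math
from collections import Counter

def perplexity(tokens, probs):
    # Perplexity = exp(mean negative log-probability per token).
    # Lower perplexity = the model finds the text more predictable.
    nll = [-math.log(probs.get(t, 1e-6)) for t in tokens]
    return math.exp(sum(nll) / len(nll))

def burstiness(sentences, probs):
    # Burstiness = spread (std dev) of sentence-level perplexities.
    # AI text is often uniformly "smooth"; humans vary more.
    pps = [perplexity(s.split(), probs) for s in sentences]
    mean = sum(pps) / len(pps)
    var = sum((p - mean) ** 2 for p in pps) / len(pps)
    return var ** 0.5

text = "the cat sat on the mat . the dog sat on the log ."
tokens = text.split()
# Toy stand-in LM: unigram frequencies estimated from the text itself.
counts = Counter(tokens)
probs = {t: c / len(tokens) for t, c in counts.items()}

sentences = ["the cat sat on the mat .", "the dog sat on the log ."]
print(round(perplexity(tokens, probs), 2))
print(round(burstiness(sentences, probs), 2))
```

A detector built only on these two numbers has little to say about *which* sentence is AI — which matches what you see (or rather, don't see) in GPTZero's output.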
Here's all you see after each test:
I read their FAQ. Here's what I got:
Training and Data:
GPTZero was trained on a large dataset of human and AI articles. Tested on challenging out-of-sample articles.
- Correctly classifies 99% of human and 85% of AI articles using a 0.88 threshold on `completely_generated_prob`.
- AUC score of 0.98.
- More text submitted increases accuracy. Document > paragraph > sentence.
- Most accurate for English prose by adults, like its training data.
- Fine-tuned for student writing and academic prose. Significant boost in accuracy here.
- Many educators trust it due to its mission of providing safe AI detection tools, unlike side projects.
Well, they claim 85% accuracy for AI detection. I'd dispute that. They also say it's best for educators.
Let's take a different path then. No human content. Will write three essays — one ChatGPT, two Claude 2.
Test 3: ChatGPT again, essay this time
Essay on the ethics of using AI to write essays:
Alright, we wrote an essay — since GPTZero is aimed at educators, right? Let's check:
50% human, 50% AI. Oh really?
Test 4: Claude 2, no prompt
Think our little friend can handle it?
Let's not speculate, essay on improving grades:
Check in GPTZero:
50% AI, 50% human. But different message: moderate likelihood of AI.
Ah finally, some highlighted text:
One more test.
Test 5: Claude2 with prompt
Essay on staying calm during exam prep:
53% AI generated. More yellow highlights below:
One last human content check.
Article from Authorityhacker:
Check for "humanity":
0% AI content — excellent! One of the few solid tests.
- Speed (mixed feelings)
- Welcome message
- Lightning fast — 1-2 seconds. Good, but also bad: the faster the check, the less data analyzed. Here it seems to rely only on perplexity and burstiness. Yes, the FAQ says otherwise, but I don't care what's said, only what's proven. Talk is cheap.
- Text got highlighted just twice across all my tests. Yellow... What does that indicate? The AI likelihood for those lines? What's the logic, devs?
- No visual tooltip hints when hovering over text. Would've been useful.
Can I recommend this tool as a daily driver for checking texts, essays, etc. (since GPTZero markets itself for educators)? No, sorry. I explained why above. The tool feels like an attempt at something good that needs a lot more work.
My advice to the developers: completely rethink what you're doing. Improve the tool thoroughly.
You can't lean on a single endorsement (even from a known publisher) calling you the best tool — possibly made without full industry knowledge — and rest on that. You can't. Period.
However, you can use the tool completely free forever. So test it yourself. But I wouldn't pay for it. Constantly seeing content flagged 50% AI without any real clues beyond rare yellow highlighting isn't for me.
But something compelled me to run more tests...
To avoid empty claims, I did dozens more. Here's proof:
And once I even got this:
But that's the only case where GPTZero was certain the content was AI-generated, not human. Here it modestly gave a 71% probability of AI.
The final verdict: GPTZero doesn't handle GPT-4 well. 90% of the time I got 50% for both prompted and unprompted. Only once did it flag GPT-4 content as 71% AI, highlighting it all yellow.
For human content, GPTZero is right about 95% of the time. I won't get into Claude2, since it stumps most tools, or they only catch it rarely. So draw your own conclusions. But I can't recommend it for checking GPT-4 specifically, though I wish I could.
Moving on to the next tool.
Crossplag is a plagiarism checker with an AI detector. It aims to uphold academic integrity by scanning against 100 billion+ texts, can ID whether text is AI or human, and checks 100+ languages.
- Cross-lingual: Detects across 100+ languages.
- AI detector: IDs AI content to maintain quality.
- Privacy: You control data access.
- Flexible pricing: Pay-as-you-go and bundles.
- Fast checks: Quicker for bundles.
- Custom workflows: For different institutions/businesses.
- No daily limits: Unlimited checks.
- Support: Priority for bundles.
- Pay-As-You-Go: 9.95€ for 5,000 words / 50 credits. 24H turnaround.
- Bundle: 149.95€ for 100,000 words / 1000 credits. Faster checks, priority support.
- Institutional: Contact for pricing.
- Students: Plagiarism checks for papers.
- Academics: Check papers and submissions.
- Bloggers: Ensure original SEO content.
- Schools: Comprehensive solution.
- Businesses: White-label, API integration.
Went to the site, signed up, and landed right where I needed to be:
Can paste text to check or upload a file. Simple and intuitive.
Passed the first test. I love simplicity and clean interfaces, and when nothing extra is tacked on.
Okay, this tool falls in the first category — AI detection is the core function. So testing standards are higher. Much higher.
Test 1: ChatGPT no prompt
Wrote article on "how to use AI in email marketing":
100% AI-generated content. Well, the first pancake didn't come out lumpy — a good start. The 0-100% scale is visually interesting. But as an experienced user, I miss two things: 1) color coding, 2) visual tooltip hints. Let's see what Crossplag shows in the other four tests.
Test 2: ChatGPT with prompt
Let's write on "how to stay awake during the night shift". Fitting now, since it's late and I'm still writing.
Check in Crossplag:
0% AI text, so 100% human text. Ah, my prompt passed detection easily.
Test 3: Claude with prompt
Article on "what a work from home routine looks like":
8% AI content, 92% human. So-so result.
Test 4: Claude 2 with prompt
Writing on "How To Start a Business While Working Full Time"
6% AI content, 94% human-written. Well, few tools can handle Claude 2 yet.
Test 5: Human Content
Let's check out this article: (a random site — just googled for a topic).
8% AI content, 92% human.
Behind the scenes:
I've run about 20 additional tests. Here's what I've found:
- It nails bare GPT-4 content without prompts.
- With simple prompts, Crossplag already struggles but still catches some AI content — around 6-8%.
- Claude2 is tougher: the tool practically fails on it, even without prompts — 10% AI detected at best.
- Human content checks out: no false positives detected whatsoever.
- Simple to use: Paste text, get results in seconds
- Speed is decent — not the fastest detector but doesn't need to be
- Fully free to test with no credit card
- Intuitive, straightforward interface
- No color-coded text
- No visual pop-up messages
I have mixed feelings about this tool. There's absolutely nothing extra here. Sign up and you can start scanning for AI content right away. Which is great, except there's zero visualization — no color coding or pop-up messages. I would have appreciated those features. And I'm probably not alone.
I took a close look at their homepage. They really position themselves as a plagiarism checker first and foremost. The AI detection feels more like an add-on. Speaking of, I scanned a ton of content but credits never depleted. Later I realized credits are used for uploading docs and checking plagiarism.
Oh right — the free account lets you scan up to 1000 words for plagiarism. AI detection doesn't use credits at all, from what I can tell. In that case, it's a decent enough free addition. Not amazing but certainly gets a passing C+ grade considering what we've seen.
I couldn't find pricing on their site which is a bit odd. You get to try it free first, then they show pricing later apparently. Not a fan of that approach. I prefer transparency upfront. Though maybe I'm missing something — I should get more rest! If so, my apologies.
AI Detector Pro aims to spot and remove AI traces from content. Wants to make sure you don't get slapped for using AI by finding traces and suggesting rewrites. Gives detailed reports on AI-written sections and sketchy parts. The AI Eraser provides alternate wording where it sniffs out AI. Also integrates with Word and Google Docs to detect and edit AI directly.
- Claims to detect AI-generated content
- Detailed reports highlighting AI evidence
- AI Eraser suggests rewrites for traces
- Integrates with Word and Google Docs
- Gives confidence levels for detection
- Manage AI reports by project
- Checks docs and websites
- Makes content seem more human
- FREE: 3 reports, limited AI Eraser
- BASIC ($13.99/month, usually $27.98): 100 reports/month, AI Eraser, research tools, content tools, data export
- UNLIMITED ($24.99/month, usually $49.98): Unlimited reports, AI Eraser, research tools, developer tools, content tools, data export, API (extra fee)
- Students: Avoids false positives for academic integrity
- Job Seekers: Improves AI resumes and cover letters
- Content Creators: Enhances drafts, catches AI mistakes
- Business Owners: Adds tone, voice, context to AI content
- Users where AI is illegal: Edits content
Wow! Just listen to this video. No, you don't even have to watch. I'm speaking for all men here: listen up!
I listened to that video twice (twice!) trying to figure out if that voice was human or AI. I heard the sighs, the British accent, and I gotta say — it sounds like a real woman talking. If my wife spoke to me in that tone, I'd have to wear earmuffs constantly to avoid a permanent erection! Sexy. I like this beginning. Fingers crossed it keeps up. I'm tired of being disappointed, aren't you?
But here comes the first letdown. I signed up, confirmed my email, went to the "Tools" section and guess what I see? Take a look:
I shrank the screen so hopefully you can see. There's a Website Status Checker, SSL checker, DNS lookup, IP lookup and a whole lot more, a LOT more stuff. But where the heck is the AI detector?
I search for "AI detector" (tried it different ways — AI detection, detect AI, you name it):
So where, please tell me, am I supposed to find this AI detection tool?
And at the very bottom there's this:
Which clearly states in black and white that their tool should support AI detection. Where is it?!
But I'll keep searching.
After a few minutes of angry hunting...
Turns out I did find where to test their AI detection tool.
Once you're in their app after signing up, go to Reports:
And here you can enter text:
My bad. I should have watched the video closely — you can clearly see you need to go to Reports. Why hype it up with that voice if we have to watch and not just listen?
Anyway, you enter text to scan it, but you won't see the text right away — more hunting. Instead you'll see this:
Click the link at the top or bottom and whew, finally, you can see the text and if it's AI or not.
Alright, time for the tests. The first impression was mixed, but more on that later.
1. ChatGPT, no prompt
Let's check it out:
82% AI written content. Interesting. But that's not all. Here's what you'll see next:
And it gives a judgment for each paragraph on how confident AI Detector Pro is that it's AI or not. So far I've seen two labels: Suspicious or AI. I think you can guess what those mean.
But wait, there's more. The tool lets you edit the text and re-check it. Well, this being my first test drive, I gotta try that out.
Click the OpenAI eraser next to the results or in the top menu. This pops up:
You can edit the highlighted text, then re-check and use the new version.
I tried editing based on the tool's suggestions, but got this:
The suggestions are for paying customers only. Sigh, what happened to your promise of free access to AI detection and erasing? Look:
It clearly states the free plan includes three reports + AI eraser. I clicked the exclamation point (trying to be very thorough here) and saw only a description of the eraser, nothing about limitations.
I could try to justify the developers here: they gave me access to edit the text highlighted as AI, then re-check it. But dangling a carrot then saying "Ooh so tasty!" without letting me nibble? Not a good look with the competition out there, and there are some worthy alternatives at the top of this list.
Of course I won't be editing anything now. First impressions ruined. I'll finish testing since I get three free reports. Not much, but it's something.
So that's one test down. I'll do two more: ChatGPT with a prompt and Claude without.
Test 2. ChatGPT with prompt
Let's write an article on "How not to be disappointed in a girl on the first date" (nudge nudge):
2% AI content, 98% human. We easily fooled AI Detector Pro here, just like many other tools on this list.
Nothing to show in Details — everything's squeaky clean with no colored highlights.
Test 3. Claude, no prompt
Writing an article on "How to become a leader":
Wow! Color me impressed. I was getting used to Claude2 slipping past most detectors, but here we get 96% AI content. Nice job.
That wraps my brief testing mission for this tool, since they didn't let me do more. Now for the takeaways.
- Sexy voice in the intro video (who didn't like that?)
- Good at detecting GPT-4 and Claude2 content without prompts
- Interesting visualization of tested data
- Has an AI eraser (that I couldn't try)
- It's a maze. Took time to find the AI testing part (yeah yeah, should've WATCHED the video not just listened, but still)
- Limited testing capabilities in free mode
I don't even know. At first I was disappointed by the maze-like setup. I mean, why not call the Reports tab something like AI Scan to make it clear what you do there?
Honestly, after landing on the Tools page, my first urge was to close the site and never return. Tools implies all capabilities in one place. But I found a bunch of random crap, not what I needed. Why is any of that even there?
Imagine you have a fridge and some genius suggests bolting an oven onto it. Crazy, right? That's exactly what this feels like.
An AI detection tool should be an AI detection tool, period. Maybe add plagiarism checking. That's it.
Yes, you nailed detecting GPT-4 and Claude2 content. But that's not enough. And my tests were limited since you didn't allow more. Yes, I read your docs since I'm thorough; you said you limit free access due to the expense of AI detection.
So here's the deal: ditch the clutter on your site, polish the UI so no one gets lost (not everyone will watch the video, and even if they do they may miss stuff while fantasizing about the sexy voice), then get back to me. I'll test the crap out of your tool then.
I think I've said enough for you to draw your own conclusions here.
I gotta admit, I'm tired of being disappointed. I've tested a few more AI detection tools. They couldn't even identify bare GPT-4 output as AI, calling it human content. Why would I bother posting that here?
For a final note, I'll list some free or freemium tools that are better known and used to work for me. Before GPT-4, before Originality really took off.
Not all work now. But first, they're free. Second, they help you understand the basics if you're just dipping your toes in, not swimming like a fish.
Alright, here we go.
Despite its limitations, ContentAtScale remains one of the better free detection options, making it great for initially testing the waters. With 70% accuracy on bare GPT-4 content and helpful color coding visuals, it reliably catches a fair amount of AI-generated text. However, advanced models and prompted content are challenging. Still, for free detection capabilities, ContentAtScale is a solid starting point.
Oh yes, I used this tool for a long, long time and liked it then and still do today. But back then I was just starting out in AI copywriting, didn't fully understand it all (maybe still don't). GPT-3 was around, then GPT-3.5. This tool nailed detecting those two, but times changed.
I'll admit, testing ContentAtScale's detector now feels nostalgic because it was truly the best for me back in the day.
A quick word on its origin. In the beginning there was the word... Oops, wrong story. First there was an AI writing tool by the same name, still around today, then the detection tool came later.
Alright, let's go.
Test 1. ChatGPT, no prompt
Article topic: The AI's Midlife Crisis: Why "Reward is Enough" Might Not Be Enough
I decided to inject some mood into my AI-generated topics. Something's gotta lift my spirits!
Checking with ContentAtScale's AI detector:
1% human probability means 99% AI written content. So far so good, friends.
Test 2. ChatGPT with a prompt
Topic: The Picasso of Algorithms: Exploring Creativity in Artificial Intelligence (Mood: Amusement)
Here ContentAtScale's detector failed. It sees 100% human content.
Test 3. Claude2 without a prompt
Topic: How to use AI for translation (no mood since the topic alone lets Claude2 write as if prompted)
Not bad — ContentAtScale detected 50% AI content. Decent for a free tool.
Test 4. Claude2 with a prompt
Topic: AI's Legal Drama: Law, Logic, and Argumentation in the Age of Machines (Mood: Drama)
Let's see what CaS says:
No surprise, the detector failed here given Claude2's capabilities.
One last test for this tool. Let's check some of my own content:
Test 5. Human written content
Checking an excerpt from my recent Passed AI review:
Checking with CaS Ai detector:
Result: 100% human written. No mystery there.
What I left out:
Rather than exhaust you (does anyone have the patience to read this whole thing?!), I ran about 20 more tests.
Here's what I found:
- GPT-4, no prompt — detected about 70% of the time
- GPT-4 with prompts — judged human 90% of the time
- Claude2, no prompt — 50/50 (not bad!)
- Claude2 with prompts — missed 90% of the time, judged human
- Human content — identified accurately 95% of the time
- ContentAtScale detector is free
- Handles GPT-4 no prompt decently
- Nice interface
- Color coding
- Visual hover hints (right side)
- Struggles with GPT-4 + prompts
- Claude2 no prompt only 50/50
- Claude2 with prompts does poorly
I figured ContentAtScale wouldn't handle GPT-4 at all. But as you can see, I was wrong. Yes, misses happen, especially with prompts, but that's prompt engineering for you. With Claude2, it performs about like the other tools.
Bottom line — it's free! This is where I'd recommend starting your AI detection journey. Try it out, kick the tires. Check as much as you want before moving to a paid tool.
Expert opinion: For free detection, ContentAtScale is one of the best in class. Give it a shot.
2. ContentDetector.AI: A Promising Tool
As far as free detection goes, ContentDetector.AI punches above its weight class. With 50% overall accuracy and decent detection of Claude2 content both with and without prompts, it outperforms many paid options. The inclusion of helpful color coding and hover pop-ups adds nice visual aids not often found in free tools. For those new to AI detection looking for a competent free option, ContentDetector.AI is a great alternative pick.
The creator of this tool asked me to test it out. Well, let's see what it can and can't do. I don't make empty promises. If it needs testing, I'll test it.
First off, this detection tool is free. The interface is simple and friendly. That's all I can say for now. On to the tasty part — the tests!
Does the flashy name hide something worthwhile? We'll find out now.
Test 1. ChatGPT, no prompt
Topic: Building Client Trust in an AI Solution: Case studies, transparency, and ethics.
Result: 37.5% AI written content. Nope, this is 100% GPT-4.
Test 2. ChatGPT with prompt
Let’s write an article about adding AI into your workflow:
Checking in ContentDetector.AI:
Result: 37.5% AI content. Hey, not bad. At least it didn't call it totally human. And notice some highlighted sentences. Yes it's free, but the creator added color coding and hover hints:
Test 3. Claude2, no prompt
I bet Roop (the creator) is nervous now.
Yes! 52.7% AI written content! How do you like that? Not every paid tool can detect Claude2 at all, and here we get over 50%. Nice work Roop, virtual high five!
Test 4. Claude2 with prompt
Article on celebrating when your AI startup is succeeding.
Woo hoo! I'm giddy as a kid, really. 30.36% AI content. Doesn't nail it, but it's something.
Test 5. Human content
Using an excerpt from my recent article:
4.29% AI written — in truth this likely has more AI content from my prompt engineering, but I'll take it!
About 30 more tests. I don't even know how to put this...
Ha! A little mystery never hurts. The tool is good. For free, it's very good — a solid A from Alex Kosch. Testing revealed ContentDetector AI surpasses ContentAtScale in some ways. Both are free, but the first handles Claude2 much better, with and without prompts.
Here are my results:
- GPT-4 no prompt: 50/50
- GPT-4 with prompts: 50/50, also great! My simple prompt fooled many paid tools, as you saw.
- Claude2 no prompt: 50/50, an excellent score.
- Claude2 with prompts: 50/50. Again, solid.
- Simple, friendly interface
- Color coded AI sentences
- Hover hints
- Unfortunately some false positives
Not every horse in this race gets a My Take, as you've probably noticed. Well, for a free tool, ContentDetector.AI is very good. I especially like that it can detect Claude2 content, with and without prompts, even if not 100%.
No false positives for human content. GPT-4 is trickier. I'd like to see it catch bare GPT-4 better, 80-90% would be great.
But it also does decently on prompted GPT-4 — at minimum it SEES the AI, even if partially.
Conclusion: Recommended. I'd even use it alongside ContentAtScale above. Cross-checking in multiple detectors is smart, especially when both are free.
One of the best free AI detectors, especially for Claude2 outputs.
I also used this tool early on, back in the GPT-3 and GPT-3.5 days. But GPT-4 changed everything. A lot.
To try the tool, you'll need to register.
The KazanSEO AI detector is free. I'm not sure how well it works now, but for a complete review of the best detection tools, we'll test it out.
The interface is simple, friendly, and nice. You'll see that soon. Nothing to pay, no limits. But most importantly, we need to see if it can consistently spot GPT-4 content.
Let's find out.
Test 1. ChatGPT, no prompt
Article topic: AI in Crisis Management: Tools and strategies for remote teams during unforeseen challenges
Checking in KazanSEO:
Unfortunately, not a good result — 99.98% human written when this is 100% GPT-4 generated.
Test 2. ChatGPT + prompt
Topic: The Future of AI in Education: Personalized learning and remote classrooms
99.97% human, while it's 100% AI.
Test 3. Claude2, no prompt
Utterly dismal — 99.95% human when this is 100% AI generated Claude2 content.
Test 4. Claude2 with prompt
Checking in KazanSEO:
99.95% real — completely incorrect, 100% AI written.
Test 5. Human content
Using an excerpt from a recent article of mine:
Yup, 99.95% human as expected.
Offscreen: I did about 20 more tests. Unfortunately, it detected GPT-4 only 30-40% of the time, and only in a couple of cases.
Writer is actually meant for other purposes — it's an AI writing assistant. Check their site if interested. But we're talking AI detection here.
Back in the GPT-3 and GPT-3.5 days this service did alright, but I always disliked the limitations. And the limit is harsh: only 1,500 characters per check. Too little. I always had to check in snippets. Not fun.
To clarify, here AI detection is just an add-on. So expectations aren't super high. But as with any add-on tool, it should perform its job.
The interface is simple as can be: copy text, paste, check. Easy peasy. Alright, let's get to it.
1. ChatGPT without a prompt
Let's ask ChatGPT to write a review of Writer.com. Despite ChatGPT's knowledge cutoff, Writer has been around a while, so it should manage something decent. Really, we just care about the text itself.
ChatGPT warns it only has info up to 2021. But as I said, we only care how the words are strung together.
Let’s check it.
And we hit the annoying limitation I mentioned:
This really irritated me back in the day.
Okay, let's trim the text and try again:
And got this sad result: 100% human content. Sigh. Not even 1% AI-generated.
Let's continue our sad tale.
Test 2. ChatGPT with prompt
Let's write an article on how to stay true to yourself (making up random stuff now)
The dreaded limit strikes again! And the result: 100% human-generated. They even congratulate me — Fantastic! What's fantastic about total failure? This is 100% AI written.
Test 3. Claude2, no prompt
I'll keep going out of duty, but the outcome is clear.
Writing on how to make yourself happier when stressed
And again, it's fantastic! 100% human-written content. But no, Writer, this is 100% AI.
Test 4. Claude2 with prompt
Writing on how not to smash your monitor when really angry
Fantastic human-generated content again, 100%. Sigh...
Test 5. Human content check
This time I'll just insert a link to an article of mine, handy feature:
Result: Well at least here it's accurate, 100% human content.
Oh, what a "fantastic" AI detection tool we have here! It just slaps a 100% human label on everything I feed it — the only time it was right was on the one piece of actual human content.
I don't know, maybe the service has its merits, but for AI detection it's terrible 100% of the time. A surreal "character" — that's my verdict. Don't even touch it, who knows what fantastical things could happen! (And I'm not joking).
I have two tools left — GLTR and Hugging Face. Both are outdated, only detecting GPT-2 content. As we know, the world has moved far past those two models.
Here's GLTR now:
I admit I never liked it much, but it was popular for a time. In simple terms, it color-codes how predictable each word is to a language model — and text made mostly of highly predictable words looks machine-written.
Just to be sure, I tested 10 texts from Claude2 and GPT-4. GLTR failed — but then, those models were never its target. No point in screenshots; take my word for it.
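GLTR's coloring can be sketched as a rank-bucketing rule: for each word, ask where it ranked in the model's predicted next-word distribution, then bucket the rank. The rank values below are hypothetical placeholders; the real tool gets them from GPT-2, which is exactly why it falls apart on newer models.

```python
def bucket(rank: int) -> str:
    # GLTR-style buckets: a word in the model's top-10 guesses is
    # green, top-100 yellow, top-1000 red, anything rarer purple.
    # Mostly-green text is highly predictable, i.e. machine-like.
    if rank <= 10:
        return "green"
    if rank <= 100:
        return "yellow"
    if rank <= 1000:
        return "red"
    return "purple"

# Hypothetical per-word ranks for a short passage:
ranks = [3, 1, 7, 240, 2, 5, 1, 12]
colors = [bucket(r) for r in ranks]
green_share = colors.count("green") / len(colors)
print(colors)
print(f"{green_share:.0%} green")
```

Against GPT-4 or Claude2, GPT-2's ranks no longer track how those models actually pick words, so the color map stops meaning anything — which matches what I saw.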
Same with Hugging Face. More visually pleasing at least:
As we see, it plainly states "for GPT-2". Old model, so whether Claude2 or GPT-4, the result is always the same. Back when I used GPT-3 I'd cross-check here and it helped sometimes. But now, that's just history.
I took the time to test it too, just in case, and got results similar to Writer's tool. Except no one yelled "Fantastic!" after each check, and there were no character limits.
You still with me, friends? If so, you've accomplished a feat. This is the longest review I've done. But you know what I realized? The value isn't the length, it's the usefulness. This got long because I tested every tool I came across.
Now to the point. For paid tools, Originality AI and Passed AI remain the best for me. As an alternative, Winston AI may work for some. The Originality AI team nurtures their creation, constantly improving, reacting quickly like with GPT-4's release. They build new things — expanding from English-only to 15 languages now. Impressive.
Passed AI keeps pace. Their main advantage is integrating with the Passed API. So for AI detection they simply can't fall behind. But I especially love their Chrome plugin — truly unmatched and superior to Originality's. Originality's is good too, but Passed AI's is the advanced version.
It depends on your needs. For detailed Google Docs checks, I'd hands down choose Passed AI. For everything else, go with Originality.
For free tools, only two stand out — Contentdetector.ai and ContentAtScale. Since both are free and have practically no limits (ContentAtScale caps at 25,000 characters, still a lot), I recommend using both.
I can't definitively say which is better. Contentdetector excels with Claude2, ContentAtScale with GPT-4. So choose based on your needs.
But remember — free tools will never give you the full clear picture. Use them at the start when you're just learning about AI content and detection in general. When you reach the next level of AI content creator, get a paid tool. None will break the bank.
I'll wrap up this long read now. I hope you enjoyed it and took away something useful. I tried my best for you.
And here are a few last points in case you still have questions.
Look, I get it. You're wondering why on earth you'd need to fuss about AI content detection. Well, let me break it down for you. AI content generation is the number one topic right now. The importance of detecting content, especially content created by AI, is skyrocketing. Whether it's for plagiarism detection or to ensure the content you’re consuming is human-generated, you need a solid content detection tool. You're in luck, 'cause tools like Originality and Passed AI content detector have jumped into this game.
So, you're an educator, right? You're buried in essays, papers, and assignments that look suspiciously polished. Ever thought some of your students might be using AI to write their assignments? Yeah, they're sneaky like that. An AI content detection tool can be your best buddy in separating genuinely insightful human-generated content from the regurgitated mess dished out by an AI model. You'll get a content score to show you what's likely to be written by your bright-eyed scholar and what's the handiwork of some generative AI.
And for my copywriting comrades—listen up. You're cranking out content for clients who want "authentic, human connection." Not something generated by AI. What if you could assure them that the content, even if aided by AI, still preserves that human touch? Using top AI content detectors, you can provide that assurance by showing the content score. A low score indicates it's more human-like, making it easier to sell your craft and build trust. Plus, it helps you keep tabs on how much your AI writing tools are influencing your work.
You might be tempted by some free AI content detector tools. Free version this, free to use that. I've actually used them and mentioned them above. Listen, some of them aren't bad, like ContentAtScale's detector or ContentDetector.AI.
But when you compare them to paid tools, they often fall short on accuracy. Why? Because the top AI content detectors are built on more capable models and much larger training datasets. So if you're serious about this, it's worth the investment. If you're just starting out, though, free options are a fine place to begin.
Now, some of you might think that good ol' plagiarism detection is enough.
But let me spill some tea: plagiarism and AI content are two separate beasts.
Plagiarism detectors check for similarities with existing content; they don't sniff out if the content is written by an AI or a human.
That's why specialized AI detection tools exist: they pick up where plagiarism checkers leave off. Questions about AI? These are the answers you've been seeking.
Content at scale (and this time I don't mean the Content At Scale AI detector) stands out as a major issue. We're talking about content generated by ChatGPT, Claude2, you name it. And when it comes to spotting AI-generated content, the best AI detectors I mentioned above come to the rescue, because they really can detect content written by AI.
When you're navigating the turbulent skies of content creation or consumption, a robust content detector is a tool you don't want to skip. I've tried to address all those burning questions you've asked about AI.
So, before you approve that next piece of content, do yourself a favor and check it for AI-generated text. Let's be honest: content generated by an AI can be hard to spot with the naked eye (I can by now, but only because I work with AI content so much). Even if you fancy yourself an AI guru, it's better to rely on technology to do what it does best. Plus, it speeds up the process.
It’s a simple step: check for AI content. But it's a giant leap for maintaining the quality and integrity of human-generated content in a world increasingly dominated by AI. So, make it a point to scrutinize every piece of content that comes across your desk or screen. Because when it comes to AI, you can never be too careful. Don't say I didn't warn you.
About the Author
Meet Alex Kosch, your go-to buddy for all things AI! Join our friendly chats on discovering and mastering AI tools, while we navigate this fascinating tech world with laughter, relatable stories, and genuine insights. Welcome aboard!