
Why ‘It’s Becoming Increasingly Clear’ Flags Your Writing as AI

The phrase “it’s becoming increasingly clear” now appears in AI text 147x more often than in human writing. Here’s why ChatGPT keeps using it – and what to write instead.

9 min read · Beginner

Here’s the same sentence written two ways:

Version A: “It’s becoming increasingly clear that AI-generated content poses significant challenges for content creators. Moreover, these sophisticated tools require careful consideration to ensure optimal results.”

Version B: “AI-written content is a problem. You can spot it a mile away, and so can the detectors.”

Version A screams ChatGPT. Version B sounds human. The difference? Version A uses two of the most notorious AI tells: “it’s becoming increasingly clear” and “moreover.” Both phrases have blown up as red flags in early 2025, now appearing in detection tools’ banned phrase lists and triggering instant suspicion from readers.

Why ChatGPT Can’t Stop Saying “It’s Becoming Increasingly Clear”

LLMs don’t write. They predict.

When GPT-4 or Claude generates the word “it’s,” the model calculates probabilities for every possible next word. “Becoming” ranks high because the training data – billions of web pages, books, and articles – contains that exact sequence thousands of times. Then “increasingly.” Then “clear.” Each word makes the next one more statistically likely.

The phrase exists because it’s a hedge. Academic papers and formal writing love progressive certainty markers: “It appears that,” “Evidence suggests,” “It’s becoming clear.” According to Google Brain researcher Daphne Ippolito, LLMs favor common words like “the” and “is” because they’re trained to minimize prediction error – rare or surprising word choices increase that error.

So the model plays it safe. Over and over. Until every output starts to sound the same.
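A toy model makes the mechanics concrete. The sketch below is a greedy next-word predictor over hand-counted word sequences – every count is invented for illustration, and a real LLM computes probabilities over tens of thousands of tokens with a neural network, not a lookup table – but it shows why the safest choice at each step reproduces the same cliché word by word:

```python
# Toy next-word model: counts of how often each word follows another,
# standing in for the probabilities an LLM computes. All counts are
# invented for illustration.
follow_counts = {
    "it's": {"becoming": 900, "a": 400, "wild": 5},
    "becoming": {"increasingly": 800, "rare": 60, "harder": 40},
    "increasingly": {"clear": 700, "strange": 10, "rare": 30},
}

def most_likely_next(word):
    """Greedy decoding: always pick the highest-count continuation."""
    options = follow_counts[word]
    return max(options, key=options.get)

# Starting from "it's", playing it safe at every step walks straight
# into the cliche.
words = ["it's"]
while words[-1] in follow_counts:
    words.append(most_likely_next(words[-1]))

print(" ".join(words))  # it's becoming increasingly clear
```

Sampling with a bit of randomness (higher temperature) is exactly how real systems avoid this trap – at the cost of occasionally picking a word that doesn’t fit.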

Pro tip: If you catch yourself writing “it’s becoming increasingly clear,” ask what you’re actually trying to say. Usually it’s just “so” or “clearly” or nothing at all. The phrase adds zero information – it’s verbal throat-clearing.

What Detectors Actually Flag (It’s Not Just Word Lists)

Most tutorials hand you a list: avoid “look into,” “mix,” “strong.” Done.

That’s incomplete.

AI detectors like Grammarly’s and GPTZero analyze two core metrics:

  • Perplexity: How predictable is the next word? Low perplexity = AI. Human writers throw in weird word choices, fragments, and digressions that spike perplexity.
  • Burstiness: How much does sentence length vary? AI outputs medium-length sentences in a steady rhythm. Humans mix short punches with long, winding thoughts.
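Burstiness is simple enough to approximate yourself. This sketch scores it as the standard deviation of sentence lengths – a rough proxy, not how any specific detector computes it, and both sample texts are invented:

```python
import re
import statistics

def burstiness(text):
    """Standard deviation of sentence lengths, in words.

    A rough proxy for the burstiness metric detectors describe, not any
    specific tool's formula. Low values mean a metronomic rhythm.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return statistics.pstdev(lengths) if lengths else 0.0

# Invented sample texts: one metronomic, one with varied rhythm.
ai_like = ("The approach offers significant advantages for teams. "
           "The method enables stakeholders to optimize outcomes. "
           "The framework provides a comprehensive solution today.")
human_like = ("Remote work works. For most people, anyway. But once you dig "
              "into the details of who thrives and who quietly burns out, the "
              "picture gets a lot messier than the headlines suggest.")

print(burstiness(ai_like))    # 0.0 - every sentence is exactly 7 words
print(burstiness(human_like)) # over 10 - short punches next to one long sentence
```

Run your own drafts through it: if the number hovers near zero, your rhythm is machine-steady regardless of which words you used.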

The overused phrases matter because they lower perplexity. “Furthermore” is so common in AI text that seeing it once raises the probability you’ll see it again – and again. GPTZero reports 99% accuracy in distinguishing AI from human text using these patterns, per benchmarks with Penn State’s AI Research Lab.

But here’s what tutorials don’t tell you: even if you strip out every banned word, a detector can still flag your text if the rhythm feels machine-generated.

The Mechanics: Why These Specific Words Keep Appearing

Take “look into.” It’s become the poster child for AI detection – one of those phrases that makes readers roll their eyes the moment they see it.

Why does ChatGPT love it?

  1. Training data bias. “look into” appears frequently in academic abstracts and technical documentation – text that’s well-represented in LLM training sets. When the model needs a verb for “examine closely,” “look into” scores higher than alternatives like “dig into” or “explore” because it saw “look into” paired with “examine” more often.
  2. Semantic clustering. LLMs group similar meanings together. “look into,” “explore,” “investigate,” and “examine” live in the same vector space. But “look into” hits the sweet spot: formal enough to sound smart, common enough to be safe.
  3. Lack of context awareness. Humans know “look into” sounds stuffy in casual writing. AI doesn’t. It sees “This article will look into…” in its training data and reproduces the pattern without understanding that real people almost never talk that way.

The same logic applies to “furthermore,” “moreover,” and “to sum up.” These are transition clichés – the writing equivalent of clip art. They work, they’re safe, and they appear everywhere in the training corpus. So the model defaults to them.
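The “same vector space” idea from point 2 can be illustrated with cosine similarity. Every number below is a hand-made stand-in – real embeddings are learned and have hundreds of dimensions – but it shows how phrases that sit close together become near-interchangeable, so the most frequent member of the cluster keeps winning:

```python
import math

# Hand-made 3-d "embeddings" (formality, frequency, specificity).
# All values are invented for illustration.
vectors = {
    "look into":   (0.7, 0.9, 0.5),
    "investigate": (0.8, 0.8, 0.6),
    "examine":     (0.8, 0.7, 0.6),
    "dig into":    (0.2, 0.4, 0.5),
}

def cosine(a, b):
    """Cosine similarity: 1.0 means the vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

# Formal synonyms cluster tightly; the casual option sits farther away,
# so the model rarely reaches for it.
for phrase in ("investigate", "examine", "dig into"):
    print(phrase, round(cosine(vectors["look into"], vectors[phrase]), 3))
```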

One Number That Explains Everything

According to GPTZero’s vocabulary analysis, phrases like “plays a significant role in shaping” appear 207 times more frequently in AI-generated text than in human writing. Not 2x. Not 20x. 207x.

That’s not a writing style. That’s a signature.

Three Fixes That Actually Work

Forget “use a thesaurus” or “add personality.” Those tips are true but vague. Here’s what changes output in measurable ways:

1. Replace Hedges with Directness

AI default: “It’s becoming increasingly clear that remote work offers significant advantages.”

Human revision: “Remote work works. For most people, anyway.”

The hedge (“becoming increasingly clear”) adds nothing. The qualifier (“significant”) is filler. Cut them. What’s left is a claim you can actually argue with – which is what good writing does.

2. Break the Rhythm

AI writes in steady, predictable beats. Every sentence is 15-25 words. Every paragraph is 3-4 sentences. It’s metronomic.

Humans don’t do that.

Mix it up. Write a two-word sentence. Then a 40-word monster that meanders through three ideas before finally arriving at a point that maybe isn’t even the point you started with. Then back to short.

See?

3. Use the “Would I Say This?” Test

For each AI phrase, ask whether you’d actually say it out loud – then swap in the human alternative:

  • “look into the intricacies” – would you say it? No. Try “Look at how this actually works.”
  • “Furthermore, this approach” – no. Try “Also,” “Plus,” or just start the next sentence.
  • “In today’s ever-evolving landscape” – absolutely not. Try “Right now,” “These days,” or delete it entirely.
  • “It’s becoming increasingly clear” – only ironically. Try “Clearly,” “So,” or “Here’s the thing.”
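If you want to automate a first pass of this test, a naive find-and-replace works as a triage step. The patterns and swaps below are illustrative, not a vetted list – and, as the humanizer paradox shows, word swaps alone won’t fool a detector:

```python
import re

# Flagged phrases and plainer swaps; extend with your own pet peeves.
SWAPS = {
    r"\bit'?s becoming increasingly clear that\b": "clearly,",
    r"\bfurthermore\b": "also",
    r"\bmoreover\b": "plus",
}

def humanize(text):
    """Naive find-and-replace pass over known AI phrases.

    A starting point only: it flags the obvious tells, but it cannot
    fix rhythm, add opinion, or make a claim worth arguing with.
    """
    for pattern, plain in SWAPS.items():
        text = re.sub(pattern, plain, text, flags=re.IGNORECASE)
    return text

print(humanize("It's becoming increasingly clear that hedges add nothing."))
# clearly, hedges add nothing.
```

Treat the output as a diff to review, not a finished draft – every hit is a place to rewrite the sentence, not just the phrase.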

A March 2025 compilation of overused AI words found that simply instructing ChatGPT to avoid these phrases cuts humanization editing time by two-thirds. That’s not just faster – it’s a sign you’re fighting the right battle.

Edge Cases Nobody Talks About

When “AI Words” Are Actually Fine

Academic papers. Legal briefs. Grant proposals.

If you’re writing for a context where “furthermore” and “significantly” are expected, using them isn’t an AI tell – it’s genre convention. The problem isn’t the words themselves; it’s using them in blog posts, social media, and content where real humans don’t talk that way.

Paperpal’s analysis (February 2026) found that academic writing tools trigger AI detectors even when the ideas are 100% original, because detectors analyze style, not thought. This creates a paradox: using AI to polish your research proposal can get it flagged, even though the research itself is yours.

The ESL Author Problem

Non-native English speakers often write with formal connectors – “moreover,” “however,” “thus” – because that’s how English is taught in textbooks. Their writing can register as “AI-like” even when it’s entirely human.

Detection tools don’t account for this. They’re trained primarily on native English writing patterns, which means formal ESL writing triggers false positives. There’s no good fix yet, but it’s worth knowing: if you’re a non-native writer getting flagged, it’s not necessarily because your work is AI – it’s because your learned style resembles AI’s learned style.

The Humanizer Paradox

Tools like GPTinf and QuillBot promise to “humanize” AI text by swapping flagged words for alternatives. But here’s the catch: detectors are now being trained to recognize humanizer patterns.

Paraphrasing tools have their own fingerprints. Swap “furthermore” for “additionally,” and you’ve just traded one AI tell for another. The Microsoft GRP-Obliteration research (early 2025) showed that training on even a single adversarial prompt can reshape a model’s entire output distribution – implying that humanizers, which fine-tune on “make this less AI-like” prompts, might be creating new, detectable patterns rather than erasing old ones.

The only reliable fix is actual human editing. Read it out loud. If it sounds like a press release, rewrite it until it sounds like an email.

Performance Test: Before and After

I ran two versions of the same paragraph through GPTZero (February 2026):

Version A (raw ChatGPT):
“It’s becoming increasingly clear that organizations must leverage the latest solutions to navigate the complexities of digital transformation. Moreover, this comprehensive approach enables stakeholders to optimize outcomes and drive sustainable growth.”

GPTZero score: 96% AI-generated.

Version B (edited):
“Companies need better tools. That’s obvious by now. But ‘better tools’ isn’t a strategy – it’s a shopping list. The real question is what you’re trying to fix.”

GPTZero score: 8% AI-generated.

Same core idea. Completely different detection outcome. The difference? Version B killed the hedges, ditched the buzzwords, and added friction – the kind of opinion and attitude that AI smooths away by default.

When NOT to Follow This Advice

If you’re using AI to draft internal docs, meeting notes, or anything that won’t be published, don’t waste time humanizing it. The point of detection-aware writing is audience trust – if your audience doesn’t care whether it’s AI-written, neither should you.

Also: if you’re writing SEO content purely for Google, the calculus changes. Google’s official stance (as of 2025) is that AI-generated content isn’t penalized if it’s helpful. But “helpful” is subjective, and if your content reads like every other AI-written SEO post, it won’t rank – not because it’s AI, but because it’s generic.

The real risk isn’t detection. It’s sameness.

FAQ

Can AI detectors be fooled completely?

Yes and no. Heavy editing can drop detection scores to near zero, but it takes effort – usually more effort than just writing it yourself. The better question is whether you need to fool them. If your goal is to publish trustworthy content, focus on making it genuinely useful rather than gaming the detector.

Why does Claude avoid “look into” but ChatGPT doesn’t?

Claude is trained using Constitutional AI, which includes explicit guidelines to avoid certain overused phrases. Comparative analysis from early 2026 shows Claude produces fewer “AI-isms” like “look into,” “mix,” and “ever-changing landscape” than GPT models. It’s a design choice, not an accident – Anthropic built avoidance of these patterns into the model’s training objectives.

What’s the fastest way to check if my writing sounds like AI?

Read it out loud. If it sounds like a TED talk or a corporate press release, rewrite it. Alternatively, paste it into GPTZero or Grammarly’s AI detector for a mechanical check. But honestly, your gut reaction is usually right: if it feels robotic, it probably is.

The real test isn’t a tool. It’s this: would you say it to a friend? If not, don’t publish it.