I remember when ChatGPT first appeared, and the first thing everyone started saying was “Writers are done for.” People started speculating about news sites, blogs, and pretty much all written internet content becoming AI-generated — and while those predictions seemed extreme to me, I was also pretty impressed by the text GPT could produce.
Naturally, I had to try out the fancy new tool for myself, but I quickly discovered that the results weren’t quite as impressive as they seemed. Fast forward more than two years, and as far as my experience and my use cases go, nothing has changed: whenever I use ChatGPT to help with my writing, all it does is slow me down and leave me frustrated.
Why ChatGPT just doesn’t work for me
ChatGPT can generate some really good stuff. It can produce language that’s natural, coherent, even witty and interesting. But most of the prime examples you tend to see come from extremely simple prompts — the “write a poem about shopping at Walmart but in the style of William Shakespeare” kind of prompts that everyone was doing back in 2023.
I’m sorry, I simply cannot be cynical about a technology that can accomplish this. pic.twitter.com/yjlY72eZ0m
— Thomas H. Ptacek (@tqbf) December 2, 2022
When you ask for something like that, part of what makes the result great is that it’s unexpected. You’ve essentially asked it to surprise you, and in most cases, it will succeed.
When you’re trying to use ChatGPT for boring old writing work, on the other hand, it’s a whole different story. First of all, for pieces like this, it’s useless. Everything I’m writing now is about my own experiences and opinions — AI can’t write those for me. Some people might argue that it could help you plan, brainstorm, or refine arguments — but I’ve never wanted help with that kind of thing anyway.
The kind of work I actually tried to use ChatGPT for was company blogs. You know the type — explainers, how-tos, and recommendation posts covering topics related to the company and its products (with some subtle self-promotion thrown in as well). When you write this kind of thing, you’re often given a lot of requirements: a style guide for language and grammar, keywords to insert, sources to include, sources to avoid, and a content outline with the headings, structure, and key points all pre-decided.
If I wanted to get some usable copy, I couldn’t just feed GPT a one-line prompt and let it run wild. So I tried a structure that looked like this:
- A preliminary “context” prompt explaining what I was writing and what kind of things I was going to ask for, along with an example paragraph to show it the style of language I wanted.
- Subsequent prompts with “content outlines” that provided a heading or two along with bullet points on what to cover.
I would never ask for too much at once, and since I didn’t trust it to add stats and sources, I’d leave all that out with the intention of doing it myself afterward.
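For anyone curious what that two-stage structure looks like in practice, here is a rough sketch expressed as an OpenAI-style chat message list. The function name and the outline format are my own illustration, not anything from my actual sessions, and in a real session each outline prompt would be sent one at a time with the model’s replies in between.

```python
# Sketch of the two-stage prompt structure: one context prompt up front,
# then one prompt per content outline. Names and formats are illustrative.

def build_session_messages(context, example_paragraph, outlines):
    """Build the message list: a context/style prompt first, then one
    user prompt per content outline."""
    messages = [
        {
            "role": "user",
            "content": (
                f"{context}\n\n"
                f"Match the style of this example paragraph:\n{example_paragraph}"
            ),
        }
    ]
    for outline in outlines:
        bullet_lines = "\n".join(f"- {point}" for point in outline["points"])
        messages.append(
            {
                "role": "user",
                "content": f"## {outline['heading']}\n{bullet_lines}",
            }
        )
    return messages
```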
But as much as I tried to split the work into small chunks and give abundantly clear instructions, I would always run into the same problems:
- ChatGPT is terrible at listening to instructions.
- It has a bad habit of taking things too far.
- Incorrect or irrelevant information sneaks into the copy quite often.
- Its “writing style” gets repetitive and cliche very quickly.
- When any of the above problems occur, it’s near impossible to get the LLM to revise or correct its output successfully.
Let me show you what I mean.
1. It’s terrible at listening to instructions
When you’re trying to get very specific output from ChatGPT, you have to give specific instructions. Unfortunately, it feels like the more things you ask for, the more likely GPT is to ignore some of them. My prompts would include headings, content bullet points, word counts, and formatting instructions, and the chatbot also had to remember the style instructions from the start of the session. I tried lots of different approaches to simplify things, but it always felt like ChatGPT just couldn’t handle that many instructions.
Two specific things I would have problems with frequently were word counts and bullet points. The LLM rarely gave me the number of words I asked for (usually giving me something way under rather than way over), and it never listened to me when I said I did or didn’t want any bullet points.
Sometimes this is fixable — say I wanted 200 words with bullet points but I got 300 words without bullet points. It’s fairly quick to cut a few words and shove some bullet points in there. Unfortunately, it always felt like I got the harder-to-fix mistakes.
When you ask for 500 words with no bullet points and get 200 words with bullet points — you basically have to do most of the work yourself.
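A quick sanity check along these lines could at least flag the mismatches automatically before you start reading. This is just a sketch; the tolerance and the bullet-point detection are my own guesses at what “close enough” means.

```python
# Flag the two failure modes described above: word count way off target,
# and bullet points present or absent against what was asked for.

def check_output(text, target_words, want_bullets, tolerance=0.2):
    """Return a list of problems with the generated text."""
    problems = []
    word_count = len(text.split())
    if abs(word_count - target_words) > target_words * tolerance:
        problems.append(f"word count {word_count}, wanted ~{target_words}")
    has_bullets = any(
        line.lstrip().startswith(("-", "*", "\u2022"))
        for line in text.splitlines()
    )
    if has_bullets != want_bullets:
        problems.append(
            "has bullet points" if has_bullets else "missing bullet points"
        )
    return problems
```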
2. It has a bad habit of taking things too far
When you tell ChatGPT you want something specific, such as friendly language or second person point of view, it tends to latch onto these concepts and go crazy with them.
Friendly language turns into full-on text chat language with emoji, and second-person perspective somehow turns into excessive questions and references to the reader so it can use the pronoun “you” as often as possible.
You’re also a bit stuck if you want a lot of something like examples or quotes. If you simply tell GPT you want “lots,” “plenty,” or “many” of something, it will probably give you double or triple the amount you want. If you give it a specific number, it’s likely to ignore it completely and give you something random. In other words, you can barely control the output.
3. Incorrect or irrelevant information sneaks into the copy quite often
We all know that AI “hallucinates” and gets its facts wrong at random times — it happens often enough that you have to check every single thing it says, which takes a long time and eats into whatever time the tool was supposed to save you.
To combat this, I basically never asked for facts or figures. I did try a few times right at the beginning but the fact-checking process really is a slog, and I quickly stopped bothering.
The problem is, GPT will throw random inaccuracies into its responses whether you ask for facts and figures or not. This means you can’t just check how well the copy reads or whether you’re happy with the points it’s trying to make — you have to check and consider the validity of everything it says. It’s such a drag.
And since I’m talking about mistakes and hallucinations here, I might as well mention the “worst-case scenario” too. Sometimes the LLM just goes off the rails, and while this doesn’t happen all the time, when it does — you’ve got to throw that session in the bin and spend time pasting your prompts into a new chat and starting all over. I’ve never really seen ChatGPT go crazy in a particularly funny way personally, but my friend did get this gem once:
4. Its “writing style” gets repetitive and cliche very quickly
At first, it seems insane just how “human” ChatGPT can sound. But as you use it more and more, you realize it doesn’t just sound human, it sounds like “the average human.”
OpenAI’s data sets include most of the internet: millions of articles, Reddit threads, and personal blogs. And I’m sorry to say, a lot of this content is utter garbage. But because ChatGPT is trained on it, it picks up all of the most common bad habits.
So when you generate a lot of text with ChatGPT, you’ll start noticing that certain sentence structures and phrases get repeated a lot. The worst culprits for me were these two sentence structures:
- “From A and B to C and D, blah blah blah.” (Example: In the world of TikTok, there’s a place for everyone — from DIY enthusiasts and beauty gurus to pet lovers and educators.)
- “Whether you’re A or B, blah blah blah.” (Example: Whether you’re just starting or looking to level up your channel, using smart strategies to build authentic engagement is the key to standing out.)
ChatGPT likes these two sentence structures so much that I was pretty much guaranteed to get three or four of them in every single session. Those two examples even came from the same paragraph of one of its responses. And that “In the world of…” phrase in the first example is another way it really loves to start a sentence. All of it is boring, cliche, and ridiculously overused (which is, of course, the very reason ChatGPT ends up generating them so much). I even tried expressly banning certain phrases and sentence types, starting each prompt with this little list:

People say it’s almost impossible to tell AI-generated text from human-written text nowadays — but I kind of disagree. If you’ve tried to use these AI tools yourself and experienced all of these problems and bad habits, you get to know a lot of tell-tale signs. If you mix AI content with human-written material and heavily edit the majority of ChatGPT’s output, you can hide it completely. But content that’s just come straight from the language model and been published practically as is — you can tell. You can tell quite easily.
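Those tell-tale patterns are regular enough that you can even scan for some of them mechanically. Here is a crude check for the two sentence structures above plus the “In the world of…” opener; the regexes are my own rough approximation of the patterns, not a serious AI-text detector.

```python
import re

# Rough patterns for the clichés described above. These are illustrative
# approximations and will miss variations (and hit some false positives).
CLICHE_PATTERNS = [
    # "From A and B to C and D, ..."
    re.compile(r"\bFrom\s+.+?\s+and\s+.+?\s+to\s+.+?\s+and\s+", re.IGNORECASE),
    # "Whether you're A or B, ..."
    re.compile(r"\bWhether you[\u2019']?re\b", re.IGNORECASE),
    # "In the world of ..."
    re.compile(r"\bIn the world of\b", re.IGNORECASE),
]

def find_cliches(text):
    """Return the patterns that match somewhere in the text."""
    return [p.pattern for p in CLICHE_PATTERNS if p.search(text)]
```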
5. It’s near impossible to get the LLM to revise or correct its output successfully
The real deal breaker with all of this is that when a problem occurs, ChatGPT can rarely fix it on its own. Whenever things went wrong, I would try once or twice to explain the problem and ask for revisions, but it just didn’t work.
If I asked for the right word count, most of the time I would just get the same word count again. If I asked it to get rid of the bullet points, it would say “Of course!” and then give me more bullet points. If I asked it to adjust the tone or the style, it would struggle to apply the change across the board, and I’d end up with a weird mix of both.
Maybe if you just kept reprompting and regenerating ChatGPT’s responses, you would get pretty close to what you wanted eventually. The problem for me is that I’m a writer — and using ChatGPT forces me to fill the editor role instead.
This probably isn’t a problem for everyone — plenty of writers also do a bit of editing as part of their work, but it really bothers me. I hate editing other people’s work and I hate editing GPT’s output.
As for how often things went wrong — when every session is multiple hours long with 30+ prompts, you nearly always hit a snag somewhere.
The result: I gave up
I did try for a good few months to learn how to get some use out of ChatGPT back in 2023, and I went back to it after major updates over the next two years — but my experience never changed. I tried other LLMs too, but even the newer “reasoning” models that blurt out inner monologue before answering still have the same usability issues.
Current LLM models just don’t speed up the writing process for me — all they do is force me to spend time doing what I hate instead of just putting time into what I enjoy. If you hate writing and never want to do it, ChatGPT can most certainly help you out. But if writing is a hobby for you or what you’ve chosen to do as a living, this thing will likely drive you crazy.
In the end, I never published anything one could call “AI-generated.” Every time I tried to, as they say, “integrate it into my workflow,” I would end up bashing my head against the wall, wasting a lot of time, and then closing the thing down when I realized I had achieved nothing and my deadline was just around the corner.