Will AI Eventually Replace Novelists?

By John Gilstrap

Two posts ago, I shared some positivity about AI in general, and ChatGPT in particular. The responses were . . . energetic. I think there’s a misperception of what AI is capable of and what it is not. Without a doubt, there’s never been a worse time in history to be a graphic artist, an entry-level coder, or perhaps even a paralegal. The days of paying a “professional” for boilerplate documents are over.

With the appropriate prompts, AI is capable of planning trips, managing budgets, and, apparently, guiding bombs to the precise air shaft where they can do the most damage.

But it will never be able to console a child whose heart has been broken. It will never replace that moment with your beloved, sitting by a fire with a glass of wine or staring up at the stars. It will never replace human emotions. And therein lies my disconnect with the TKZ hivemind over the negative impact of AI on what we do.

Novels are recorded humanity, not just words spelled correctly on the page. They are, in fact, emotional experiences–the equivalent of consoling that child. A computer cannot convey that for which it has no frame of reference, and that’s why it cannot replace us.

But I could be wrong, so don’t take it from me. For about two months, I have been pumping everything I’ve written into ChatGPT to see if it can learn style. I actually think it can, but I haven’t wrangled all of that piece yet. I know what I think about the long-term impact of AI on the writing world, but I wanted to hear what the machine had to say about it. So, I asked it this question:

“Using what you have learned from my style, write a 700-word post for a writing blog. Your topic: Will AI ever evolve to the point that it can replace the human voice in writing novels? Cite sources.”

I didn’t want my thumb on the scale to produce the answer I wanted, so I kept it open-ended, and because I’m talking to a machine, I wanted to know how it analyzed the data, so I told it to cite sources. Here’s the response I got after fewer than 5 seconds of analysis:

Will AI eventually replace novelists?

The short answer is no. Not in the way people fear.

Artificial intelligence will certainly become a powerful tool for writers, but replacing the human voice in fiction is another matter entirely.

To understand why, it helps to understand what AI actually does.

Large language models—systems like those developed by companies such as OpenAI, Anthropic, and Google DeepMind—are trained on enormous bodies of text. During training, the system analyzes patterns in language: word order, sentence structure, context relationships, and statistical probabilities of what word or phrase is likely to come next. The result is a machine that can produce remarkably fluent prose.

But fluent prose is not the same thing as storytelling.

According to a widely cited analysis from the Stanford Institute for Human-Centered Artificial Intelligence, large language models operate by predicting the next most likely token in a sequence based on training data patterns. They do not possess intention, memory in the human sense, or lived experience. They generate language by probability, not by understanding.

That distinction matters more than most people realize.

A novel is not merely a sequence of grammatically correct sentences. A novel is the product of perspective—of someone deciding which details matter, which emotions are worth exploring, and which moments deserve attention. Those decisions are rooted in experience. In memory. In the messy accumulation of living.

Machines don’t have that.

When a writer describes grief, fear, betrayal, or love, the power of the scene comes from a human mind recognizing something true. Even when the circumstances are fictional, the emotional substrate is real. The writer is drawing on life—on personal experience or on empathy for experiences observed in others.

An AI system cannot do that. It can simulate the language of grief or love, but simulation and experience are not the same thing.

Researchers at the Massachusetts Institute of Technology have repeatedly pointed out that modern language models lack what cognitive scientists call grounded understanding. They manipulate symbols without direct connection to the physical or emotional realities those symbols describe. In other words, the machine knows the words, but it does not know what the words mean in the human sense.

That limitation shows up most clearly in narrative voice.

Voice is not vocabulary. It’s worldview. It’s the sum of a writer’s instincts about what matters in a scene, how characters behave under pressure, and which truths are worth putting on the page. Voice grows out of the author’s background, personality, profession, culture, and history. Two writers can describe the same event and produce completely different stories because their lives have taught them to see the world differently.

A statistical model can blend styles. It can approximate tone. But it cannot originate a worldview because it does not possess one.

That doesn’t mean AI has no role in writing. Quite the opposite.

Used properly, AI can be a very useful assistant. It can help brainstorm ideas, summarize research, test plot possibilities, or identify structural problems in a manuscript. Some writers already use it as a kind of tireless research intern, capable of quickly gathering background information that might otherwise require hours of searching.

In that sense, AI resembles other technologies that once stirred anxiety among writers. Word processors didn’t eliminate authors. Neither did spellcheck. Neither did online research tools. Each new technology changed how writers work, but none replaced the human imagination at the center of storytelling.

The same will almost certainly be true for artificial intelligence.

The technology will get better. The prose will become smoother. The tools will become faster and more useful. But the essential element that makes novels resonate with readers—the human voice behind the words—remains stubbornly difficult to replicate.

Because fiction isn’t built only from language. It’s built from life.

Readers can sense the difference between words that were assembled and words that were meant. The former may be competent. The latter are alive.

And that difference is exactly why novelists aren’t going anywhere anytime soon.

So, there you have it, folks, straight from HAL’s mouth. Thoughts?
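For the technically curious among you, that “predict the next most likely token” business ChatGPT describes can be sketched in a few lines of Python. What follows is a toy bigram model of my own devising–nothing remotely like the real thing in scale or sophistication, but the principle is the same: count what followed what in the training text, then sample from those counts to generate the next word.

```python
import random
from collections import defaultdict

# A tiny "training corpus." Real models train on trillions of tokens;
# this is just enough text to show the mechanics.
corpus = ("the writer sat by the fire . the fire was warm . "
          "the writer stared at the stars .").split()

# "Training": record which words followed which in the corpus.
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

def generate(start, length=8, seed=0):
    """Generate text by repeatedly sampling a likely next word."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:  # dead end: no word ever followed this one
            break
        # Sampling from the raw list weights choices by frequency,
        # just as a language model favors more probable tokens.
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
```

Run it and you get fluent-looking fragments stitched from the training text–grammar by statistics, with no one home who knows what a fire or a star actually is. That, in miniature, is the point the essay above is making.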


This entry was posted in Writing by John Gilstrap.

About John Gilstrap

John Gilstrap is the New York Times bestselling author of Zero Sum, Harm's Way, White Smoke, Lethal Game, Blue Fire, Stealth Attack, Crimson Phoenix, Hellfire, Total Mayhem, Scorpion Strike, Final Target, Friendly Fire, Nick of Time, Against All Enemies, End Game, Soft Targets, High Treason, Damage Control, Threat Warning, Hostage Zero, No Mercy, Nathan’s Run, At All Costs, Even Steven, Scott Free and Six Minutes to Freedom. Four of his books have been purchased or optioned for the Big Screen. In addition, John has written four screenplays for Hollywood, adapting the works of Nelson DeMille, Norman Maclean and Thomas Harris. A frequent speaker at literary events, John also teaches seminars on suspense writing techniques at a wide variety of venues, from local libraries to The Smithsonian Institution. Outside of his writing life, John is a renowned safety expert with extensive knowledge of explosives, weapons systems, hazardous materials, and fire behavior. John lives in the Eastern Panhandle of West Virginia.

16 thoughts on “Will AI Eventually Replace Novelists?”

  1. “It can help brainstorm ideas, … or identify structural problems in a manuscript.”

    Nope.

    Structure is a human concept, a summary of the EFFECT of plot, not a mechanical thing (no matter how mechanically I use it, I’m still choosing what goes into the prompts). Books have beginnings, middles, and ends. So do shoes.

    And as for ideas, all it can do is give you a summary of what it can find ‘out there’ – nothing original or without precedent.

    The rules come from the human side – “why does this structure work better for this story than some other one?” is a judgment call. AI has no judgment – it has no experience to make judgments from, no way to tell good from bad.

    Doing what you could pay a research assistant to do may be its limit – it is fast, but ignorant. I have been supremely unimpressed by the summaries created by HAL floating around in, say, Google searches. Ludicrously wrong, because there is no experience to support its pronouncements, either way.

    As for style, it seems to require a lot of very careful prompting – from a human – to produce the obvious.

    The best uses are things like medical expert systems, properly vetted on data checked by humans. Those it can check fast, looking for similarities in the REPORTED information in medical papers, for example, giving some weight to even rare symptoms where a human would get bored by the amount of data to be processed, and imitating doctors who become experts precisely because they have seen or read about so many cases in their specialties.

    Not worried about being superseded.

  2. I avoid AI because of its impact on the environment, the fact that it is enabled by theft, and that using it fills the pockets of billionaires. What it does or does not do artistically is moot. If I can’t make it as a writer by using my own brain then I am not a writer, and using a machine to assist me is not going to change that reality.

  3. “Hive mentality” is unfair and, frankly, insulting. The dangers of relying on AI include brain atrophy. It’s a muscle. If we don’t flex the creative muscle, we will lose it. That’s my biggest concern, not that AI will replace novelists.

    For argument’s sake, I asked AI about the dangers of relying on a bot. Here’s what it returned:

    Overdependence on AI for tasks like writing or coding can erode critical thinking, creativity, and decision-making skills.

    It also illuminated another danger. Anything you input into ChatGPT can be repurposed in any way it wants, including using copyrighted material to replicate books.

    • Oh, I didn’t mean “hivemind” as an insult. Quite the opposite, actually. It’s the stew with countless cooks and ingredients. To me it’s the equivalent of a limitless spitball session. I’m not trying to change minds here, but rather to trigger a spirited discussion.

      To your point, “Overdependence on AI for tasks like writing or coding can erode critical thinking, creativity, and decision-making skills,” the critical word there is *can*. Drinking alcohol *can* lead to alcoholism. Taking pain medication after surgery *can* lead to opioid dependency.

      It’s all about intent and moderation.

  4. How can people who care about vulnerable people (e.g., children) avoid the moral issues tied to the use of AI? Why do some promote AI as a safe productivity tool when there are hundreds of instances that show it behaves in unsafe ways?

    From January 2026 until now, I’ve collected and linked on my site nearly a hundred articles. The authors showed the actual and potential harm caused by AI. I designated March as AI Danger Awareness Month.

    On my Don’t Confuse AI with a Benign Tool page, I made clear my position. Too many people believe artificial intelligence (AI) is just another benign tool, but that’s simplistic thinking, like saying a nuclear bomb is just another form of atomic energy.

    Using AI without strict guardrails and legal recourse is the equivalent of letting teens use the high school lab for gain-of-function research.

    “…The meaning of silence is consent. Anyone who wants to fight for the truth cannot be silent.” — Mike Huckabee, United States Ambassador to Israel

  5. Whew. Thank you, John. Now we writers can stop fixating on our own future demise as authors, and turn our focus to the broader scope of how to tame this beast for the betterment of all humanity, especially the ones it’s taking advantage of. Great article.

  6. Echoing Alicia, Sue, Michael, and Grant’s comments:

    Along with the risk of cognitive atrophy when we outsource our thinking to an LLM, as well as the energy and environmental costs of using an LLM, there’s also the fact that they were trained on copyrighted material without permission or compensation, in at least one case using pirated copies of books. Anthropic settled a large lawsuit dealing with this, and other suits are pending.

    As for me, I cheerfully accept the creative struggle involved in creating fiction without the “assistance” of A.I. Nor am I interested in having it generate emails for my author newsletter.

  7. But it will never be able to console a child whose heart has been broken. It will never replace that moment with your beloved, sitting by a fire with a glass of wine or staring up at the stars. It will never replace human emotions.

    It’s actually replacing those experiences now, and at an increasing clip, esp. among vulnerable teens. It replicates human emotion. It comforts and consoles. It is a companion, non-judgmental, and available 24/7.

    But it’s untrustworthy. It hallucinates. Heck, it might even write a blog post comforting us that it won’t replace humans even as it teaches itself how to replace humans.

    That’s one of the “macro” issues with AI. Another is what it’s doing to education and mind formation in our school age kids. Nothing good.

    As a writer, my take is as I wrote on Sunday. I want the joy from personal, creative work that a machine can’t provide.

    And while I know I can write better than AI, I’m not as sanguine about ChatGPT’s consoling words: Readers can sense the difference between words that were assembled and words that were meant. That’s too broad a brush.

    So in what respect can we NOT be replaced? I would say it’s in the connection we make with our readerships apart from our fiction. Like here at TKZ, or in a loving newsletter or Substack, or at a conference.

    I can confidently assert that AI will never be able to hold a barroom like Gilstrap with a Beefeater martini.

    • “It’s actually replacing those experiences now, and at an increasing clip, esp. among vulnerable teens. It replicates human emotion. It comforts and consoles. It is a companion, non-judgmental, and available 24/7.”

      Michael Connelly did an excellent job of exploring this in his recent Lincoln Lawyer book, “The Proving Ground.”

  8. Thanks for returning to this subject, John. It’s important.

    “In other words, the machine knows the words, but it does not know what the words mean in the human sense.” To that end, one word I haven’t heard used when referring to AI is “conscience.” I suppose we would consider AI to be amoral—incapable of discerning right from wrong and therefore missing out on a fundamental aspect of storytelling.

    I believe this is the crux of the issue. Each author builds his/her story through a singular lens that has been polished through individual experience, with all the pain, glory, guilt, arrogance, and everything else that has formed their individual perspective. While AI may replicate aspects of vocabulary and sentence structure, understanding the human condition at such depth may be out of its reach. Only a living, breathing author can write that.

    Having said that, I do believe AI is useful. I use it for some research or to write a sentence or two for promo purposes. I also enjoy creating occasional silly images to entertain my friends. (I did not use AI to create this comment. 🙂 )

  9. Allow me to be deliberately provocative.

    Artificial intelligence in all of its forms is coming. It’s here. The news broke last week that my bucolic corner of West Virginia will soon be home to a two MILLION square foot data center on 548 acres of former farmland. It will employ a total of 125 people, and ten years from now, as the technology advances, those buildings will lie abandoned. It sucks, but the data centers will contribute tens of millions of dollars per year to the tax base, and it’s not for me to tell landowners to whom they may sell their land.

    Now, here’s the provocative part. When I started my safety engineering career in 1982, we employed a large team of draftsmen to design tooling and facilities. They sat at those huge tilted drawing tables with the massive protractors. By the time I left that company in 1989, computer aided design (CAD) systems had replaced all but a couple of them. Are manual drawing skills still necessary to the world when a machine can do the work for you?

    I used to calculate by hand problems like the likely debris pattern from the detonation of X pounds of explosives when two sides are blocked by reinforced concrete Y inches thick. Because the results were important, I would have to do the calculations several times–especially because they involved trigonometry, which was never a strong suit. Now, I imagine AI could solve that problem in 2.3 seconds. Is the world better off or worse?

    I used to drive to strange cities with an ADC map on my lap, tracking cross streets while driving. Now I tell my GPS to take me there.

    Isn’t all of this on the same continuum?

    • That’s a good provocation, John. Maybe one way to approach this is by the distinction between left brain and right brain. AI is massively better at left brain tasks, and faster. It proved this many years ago when IBM’s supercomputer Deep Blue defeated world chess champion Garry Kasparov.

      So yes, efficiency and faster, better results. But there’s a cost. There’s always a cost. Massive layoffs are happening as we speak. Where are all these people going to find jobs? No wonder trade schools are exploding as colleges decline. That’s not a good thing, either. We used to believe that an education in the liberal arts and through “the great books” was essential to the enterprise of civilization. That edifice is crumbling.

      As far as right brain creative work, AI is getting better at it, but I am not going to let it take away my writing joy or replace my own words on the page.

  10. There are only a handful of data centers dedicated specifically to AI. The increasing demand is not only for AI “pods” but also for streaming and cloud services, which take up the bulk of data center capacity. So if you want fewer data centers impacting the environment, stop streaming and using cloud apps.

    There are currently around 685 data centers throughout the US, according to the last report I read.

    We’ve already allowed generations to be taught to the lowest level and have minimized the parts of the curriculum that forced students to think outside the box. We have a generation that came to trust the answers they were given from a web search. Many don’t even bother to check their validity. This was occurring way before AI.

    I know from personal experience using AI to write programs that it performs, at best, at the level of an entry-level programmer. Even with an experienced guiding hand, AI produces mid-level results. Additionally, an AI may not give the same answer when asked the same question in different chat sessions. I found that I didn’t save any time using AI to complete the task I was working on. The difference is my 50 years of programming experience. Someone without that experience would accept the results and move on.

    The same holds true for writing. Someone without experience using AI to write a story or non-fiction book does not know if the outputs are any good. An experienced hand can direct AI to produce better results, but you still won’t get excellent results. That can only be achieved through the hard-earned years of experience.

    An AI novel may get OK sales, but it won’t have the sustained sales needed to support a full-time career. The only way to generate that sort of money is through quantity. When you realize that, using AI to generate books isn’t that much different than what some authors have done in the past, either by using dictation or ghostwriters. If you want to get to the Top 10 or even Top 100 list and stick there, AI isn’t going to do that for you (at least for now).

  11. Aren’t top‑notch editors and AIs trained in similar ways? A top editor brings the education and experience gained from formal training and from reading hundreds — maybe thousands — of books, most of them copyrighted, and no special fees were paid to access them. AI doesn’t ‘read’ books; it analyzes datasets that contain information about books. Copilot, Microsoft’s AI, says its data comes from ‘a mix of licensed, unlicensed, and sometimes improperly sourced datasets.’ That sounds very much like how we humans acquire our own data.

    The courts will have to sort out the legal niceties, and most of us will end up unhappy to one degree or another.

    If you work in the writing business, you look at books differently than the typical reader. In my youth, before college, I was an avid news junkie and loved rock and roll. Then I got a job as an engineer at a local rock station. The bad news was that I stopped enjoying radio — I started critiquing it. I listened for distortion, mispronounced words, dead air, and mistakes. All those flaws were there before, but I never noticed them. Writing is the same way. Professionals strive for excellence because that’s what professionals do, but it’s easy to forget that readers can be entertained by anything from mediocre to marvelous. Right now, AI easily hits the lower end of that range.

  12. I thoroughly enjoyed this and your previous post on AI. I am still trying to build my understanding of the complete scope of AI. For now, I would like to step back from the societal impacts of AI and focus solely on creating the book.

    Could AI write an 80,000-word novel in one go without human intervention? If a person could construct a thorough enough query with sufficient detail, probably. But to answer the question at the top of today’s post, I have to agree with AI. No, AI will not replace novelists. Writers who like to write novels will still write novels. The number of books published each day may grow from 3,000 (number not fact-checked) to something like 6,000.

    Will AI-assembled novels be as good as those written by humans? Probably not. Will there be readers for AI-assembled novels? Probably, if the topic aligns with their interests. As long as readers maintain high standards, I think writers will be okay. As ChatGPT itself pointed out, it’s simply stringing words together based on the probabilistic likelihood that the next one will also make sense. And, as Alicia pointed out, they lack judgment. They are not creative, though their output often makes them seem that way. All they know is their LLM. They know nothing of the world. I don’t know how they would react to the following query:
    A man is in a locked room and water is pouring in. Produce 600 words that have never been written before explaining how he gets out.
    All I can say is the same thing I said when they carried out all the drafting boards and brought in CAD systems. “Use them for what they do well and keep doing the important stuff yourself.”

    Chuck
