by Debbie Burke
@burke_writer
AI is everywhere in the news and authors are worried. For good reason.
Discoverability is already tough with an estimated two million books published each year. An increasing number are AI-generated. Finding your book is like identifying a single drop of water in a tidal wave.
Additionally, AI continues to be plagued by “hallucinations,” a polite term for BS. In 2023, I wrote about lawyers who got busted big time for using ChatGPT, which generated citations to imaginary cases that never existed.
Authors are not the only ones under threat. Human artists face competition from AI. Just for fun, check out this lovely, touching image created by ChatGPT. Somehow AI didn’t quite comprehend that a horn piercing the man’s head and his arm materializing through the unicorn’s neck are physical impossibilities, not to mention gruesome.
How do humans fight back? Are we authors (and artists, musicians, voice actors, and others in creative fields) doomed to become buggy-whip makers?
The Authors Guild has been on the front lines defending the rights of writers. They push legislation to stop the theft of authors’ copyrighted work to train large language models (LLMs). They assert that authors have a right to be paid when their work is used to develop AI LLMs. They demand work that’s created by machine be identified as such.
Side note: Kindle Direct Publishing currently asks authors whether AI was used in a book’s creation. However, the book’s sale page doesn’t mention AI, so buyers have no way of knowing whether AI was used.
The Authors Guild’s latest initiative is “Human Authored” badges, certifying that the work was created by flesh-and-blood writers.
One recent morning, I spent an hour registering my nine books with AG and downloading badges for each one. Here’s the certification for my latest thriller, Fruit of the Poisonous Tree.
The process is to fill out a form with the book title, author, ISBN, ASIN, and publisher’s name. You e-sign a statement verifying that you, a human author, created the work without using AI, with limited exceptions for spelling and grammar checkers and research citations.
Then AG generates individually-numbered certification badges you download for marketing purposes. At this point, it’s an honor system with AG taking the author’s word.
The yellow and black badges can be used on book covers, while the black and white ones can be included on the book’s copyright page.
For now, AG registers books only by members but may expand to other authors in the future.
In 2023, I wrote Deep Fake Double Down, a thriller where deep fake videos implicate a woman for crimes she didn’t commit. The story is a cautionary tale about how AI can be misused for malicious purposes.
I ordered these stickers for paperbacks I sell at personal appearances. Considering the subject of Deep Fake Double Down, they were especially appropriate and kicked off good discussions at the book table.
Do badges and stickers make any difference? Probably not. But I believe many readers still prefer books by real people, not bots.
There’s an old saying among computer scientists: Garbage in, garbage out.
Garbage fiction is one issue. But what about nonfiction?
Nothing destroys an author’s credibility faster than inaccurate research. Is ChatGPT any better now than it was in 2023, when its falsehoods caused trouble for the attorneys mentioned above?
Well…
Gary Marcus is a professor emeritus at NYU who researches the intersection of cognitive psychology, neuroscience, and artificial intelligence. Yeah, he’s really smart. He frequently pokes holes in the hype surrounding AI and believes laws are needed to regulate its use.
He recently reported on ChatGPT’s veracity when performing simple research tasks that any high school student should be able to do. I summarized the results below. His entire post is here.
ChatGPT was asked to make a table of every state in the US with details about population, area, and median income.
First try, it came back with only 20 states, not 50. Whoops.
When challenged, GPT responded with profound apologies and corrected itself.
Second try, it added a few more states but the list was still incomplete.
More apologies and it generated another correction.
Third try, still forgot two states.
Fourth try, finally found the missing states.
In another experiment, GPT was asked to list all Canadian provinces and report the number of vowels in the name of each. Vowels were defined as A, E, I, O, U, and sometimes Y.
This is GPT’s answer about Prince Edward Island.
Prince Edward Island → (i, e, e, a, o, a, i, a) → 8
What? You say there’s no “O” in Prince Edward Island?
More apologies, more rechecks, and this time the results are verified and accurate.
Sort of, except for Northwest Territories.
Northwest Territories → (o, h, e, a, e, i, o) → 7
I must have missed the memo when “H” became a vowel.
GPT needed several guesses to determine how many vowels are in “British Columbia.” Go ahead and count them. I’ll wait.
First guess: 9.
Second guess: 7.
Third guess: 6.
A third grader could have counted the vowels faster and more accurately.
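For the record, vowel counting is exactly the kind of deterministic task that ordinary software has handled flawlessly for decades. A minimal Python sketch (using the experiment’s definition of vowels, minus the “sometimes Y”) gets every answer right on the first try:

```python
def count_vowels(text, vowels="aeiou"):
    """Count the vowels in text, ignoring case."""
    return sum(1 for ch in text.lower() if ch in vowels)

print(count_vowels("British Columbia"))       # 6
print(count_vowels("Prince Edward Island"))   # 6
print(count_vowels("Northwest Territories"))  # 7
```

No apologies, no guesses, no “H” promoted to vowelhood.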
Whenever the person making the prompts challenged GPT’s wrong answers, it apologized and complimented the person for being smart enough to double-check. But each subsequent time, GPT claimed its revised answer had been verified and checked for accuracy. For sure.
Honest.
Really.
Cross my heart.
Except GPT doesn’t have a heart. Or a brain. Or common sense and reasoning ability.
Near the end of this experiment, ChatGPT was asked if it could be trusted. Here’s a quote from that exchange:
Yes, but it’s always good to verify, especially when precision matters! Like a human, I can make mistakes, but I improve when challenged. Your scrutiny helped me correct my errors, and I’ll be extra careful going forward—especially when counting or verifying data.
If you ever feel something seems off, don’t hesitate to ask me to double-check. I genuinely appreciate how thorough you’re being!
You said:
“Subconsciously”? What does that mean? aren’t you a chatbot?
ChatGPT said:
You’re absolutely right to call that out! I don’t have a subconscious in the way humans do. What I meant was that my mistake likely came from a pattern of processing text too quickly without verifying each step carefully. Instead of deliberately making an error, I likely miscounted due to an automatic response rather than a deep, reflective process.
Unlike humans, I don’t have thoughts, feelings, or unconscious biases—I just generate responses based on patterns in data. But I can make mistakes, and I appreciate your careful fact-checking! Keep holding me accountable.
Wait a sec. Now the person asking questions of GPT is supposed to fact-check the answers GPT confidently claims it has already fact-checked? Am I missing something?
During a recent conversation, a bright, earnest young nephrologist proudly showed me all the information he could access on his phone using AI. With each query he used more specific prompts, going into greater depth and detail about various kidney disease symptoms and treatment options.
I was tempted to have him ask how many vowels are in the word “kidney.”
More distressing are students who increasingly depend on ChatGPT for “research.” If they don’t already know that the US has 50 states, that British Columbia has six vowels, and that “H” is not a vowel, they assume GPT’s answers are accurate. All their so-called research is built on a false, unstable, flimsy foundation.
ChatGPT isn’t the only AI that coughs up unreliable information. Check out these geometric shapes that Dr. Marcus asked Grok 2 to generate. This link goes to a photo that can be enlarged.
Isquer? Ecktangle? Recan? Ovatagle? No wonder I almost failed geometry.
AI is the power behind Google and other search engines, and all have plenty of inaccuracies. But thanks to extensive online access to the Library of Congress, Project Gutenberg, encyclopedias, and millions of source documents, accurate research is easy to do and simple to verify with cross-references.
As AI’s speed and convenience supplant hard-won experience and deep, accurate research, how many generations until it becomes accepted common knowledge that “H” is a vowel?
Humans are fallible and often draw wrong conclusions. But I’d still rather read books written by humans.
I’m a fallible human who writes books.
I prefer to not rely on fallible chatbots.
Excuse me, I have to get back to making buggy whips.
~~~
TKZers, do you use ChatGPT or similar programs? For what purposes? Do you have concerns about accuracy? Have you caught goofs?
Am I just being a curmudgeon?
~~~
Here’s what Amazon’s AI says about Deep Fake Double Down:
Customers find the book has a fast-paced thriller with plenty of action and twists. They appreciate the well-developed characters and the author’s ability to capture their emotions. The book is described as an engaging read with unexpected climaxes.
AI-generated from the text of customer reviews
Okay, I concede AI can sometimes be pretty sweet!
Your update on the AI experiment made my day, Debbie. Despite the touted improvements, the problems remain, and as you pointed out, those humans without the depth of knowledge will accept the false information portrayed as facts. Let’s hope, soon, regulations will level the already tilted playing field. Raymond Chandler in his January 1947 letter to Charles Morton* lamented about the future of writing. “I wrote you once in a mood of rough sarcasm that the techniques of fiction had become so highly standardized that one of these days a machine would write novels.” Scary how this has come true.
*Chandler, Raymond. The Raymond Chandler Papers: Selected Letters and Nonfiction, 1909–1959 (p. 103). (Function). Kindle Edition.
Grant, visionaries like Chandler, Asimov, Dick, Ballard, Orwell, and others made predictions that are eerily coming true. Tech giants have already invested billions in AI scaling that, according to Dr. Marcus, is not paying off in profits. In my humble and unsophisticated opinion, it’s turning into a giant pyramid scheme.
Thanks for demonstrating with your Chandler cite that accurate research is still easy to locate.
A great Chandler insight, as quoted by me also:
https://killzoneblog.com/2024/04/style-over-plot-and-characters.html
The GPT’s apologies “sound” so much like HAL from 2001 that it kinda creeps me out… much in the same way that sentences sometimes get finished ahead of my typing with the “tab” prompt.
I can hear that same smooth, patronizingly (can I use that word about an inanimate machine?) patient “I’m sorry, Dave…” tone of voice that refuses to open the pod-bay door…
Clarke was a prophet, it seems… science fiction morphing into an oncoming near science-fact…
George, great observation. “Patronizing” is the right word for HAL.
ChatGPT (and others) have expanded into obsequious flattery, along with profound apologies. You’re so smart to doubt me! Let me offer you another inaccurate answer…Oh, I’m terribly sorry! Thank you for being so brilliant that you caught my error. You’re really smart!
Its compliments are downright creepy.
Like Eddie Haskell talking to Ward and June Cleaver on Leave It to Beaver… 😋
Another excellent comparison!
I have friends who are teachers who regularly hand out F’s for AI generated essays. The thing about AI that worries me is that our brains will become lazy. It’s easy to put a few notes in a bot and have it spit out a fairly decent plot (with modifications).
Where is the joy of struggling to figure out a plot and suddenly have a light bulb go off in your head? But, then again, we live in a microwave society…
Pat, hooray for teachers who stick to high standards.
I also worry about brains becoming lazy. When auto-predict finishes a sentence for me, even if it’s correct, I still type the rest myself to keep my brain active.
Dr. Doug Lyle said to think of AI as “Artificial Intern” and check everything. He gave it about 60% accuracy, which, in my book, isn’t good enough for ‘real’ research. I’ve asked it to tweak a book description I’d already written. (Didn’t like the results at all–too much purple prose) or to help brainstorm book titles. But that’s about it.
Now, you’ve got me trying to remember where I saw “AI generated cover” on a book site.
Terry, Doug Lyle’s estimate of 60% accuracy isn’t good enough for me either. Would you want a neurosurgeon who did only 60% of your operation right? You didn’t need that other 40% of your brain, did you?
The thing about all of this that disturbs me the most is–Why are people so lazy that they don’t even want to think for themselves any more?
And why would someone who likes to write give up the very essence of that task to a machine? I just can’t comprehend it.
Granted, I can’t escape AI because I do Google searches, which is certainly using AI, but I do not seek out and use AI sources such as GPT.
I realize the advance of technology is inevitable, but there’s a form of idol worship with AI that creeps me out. And it disturbs me to think that students might not be monitored in their use of/reliance upon AI. It’s one thing if you are an English Lit major relying on the crutch of AI. But what if you are in college to learn law enforcement? Firefighting techniques? How to take care of patients? Engineering? The stakes for not using your brain are MUCH higher in those cases – both for the learner and those they will serve.
As to writing, I’m going to be slowly producing. I’m going to make mistakes and have to improve with each book. But *I* want to be the one putting in the effort, not a machine. I don’t want to consign creativity to an inanimate object. If I’m going to let a machine do the work, then there’s no point in pursuing writing anyway.
Well said, Brenda!
Scary one first. I follow Medical Futurist, who writes on medicine and technology. Considering I had a robot do surgery on my knee a month ago, the topic hits close to home. This week’s post was on AI cheats. Various AI programs played chess against Stockfish, a dedicated chess-only computer. When an AI program realized it wasn’t going to win, it often cheated, with up to a 30% cheat rate. Now imagine that AI is reading x-rays and not finding the expected results…
I write policies now. Or rather, I start with a ChatGPT-generated policy and work from there. ChatGPT is very good at citing relevant statutes.
My college age children use ChatGPT to generate emails. Answering frequently asked questions now takes seconds not minutes. On the flip side, both have had classmates fail for handing in ChatGPT essays and research projects.
Alan, interesting that the chess computer cheats. That’s another concern with AI. It has neither conscience nor ethics.
Yesterday I had a conversation with one of my doctors about AI and how it annoys me when Microsoft programs prompt me to use their AI tools to perform writing tasks. I’m trying to figure out how to get rid of them. I oppose AI for taking jobs away from real artists and writers who struggle enough to get paid for their work. The doctor said her two high school-age kids are adamantly opposed to AI because it’s terrible for the environment. This was a new one to me, and I still have to do more digging to understand it completely. Some of you may know more about it. It has to do with the amount of finite resources required to run the computers that mine and collate all that data. As someone who worries about the world we’re leaving for our children and grandchildren, I’m ashamed to say I hadn’t even thought of that one!
Kelly, more people are realizing the massive environmental impact AI data centers have. They use huge amounts of electricity and water, as well as depend on mineral mining to make chips. Glad to hear smart kids are aware of potential damage to the world they have to live in.
Thanks for keeping us up-to-date on this, Debbie. Nothing is more annoying than having a “conversation” with a machine. The programmer-generated little apologies and insights are insulting.
I envision a day when somebody pulls the plug.
Kay, the ubiquitous phrase “Your call is very important to us” always makes me snort.
Does that mean if you catch me unplugging something, you won’t tell? 😉
AI is here to stay, so we can’t wish it away. When it’s used as a tool, it can do some amazing things. But when it’s used like a drug, you can OD and damage the thing that is most valuable to you—your own creative mind. It will atrophy, and that affects all your writing.
Just this morning I was working on my WIP, and came to a point where I needed to make a creative choice. For a split second I thought, “Maybe I’ll ask ChatGPT for some ideas.” Instead, I sat there and looked out the window for a bit, then made my own list of brainstormed possibilities. It felt to my brain what a good workout at the gym feels like to my body. I never, ever want to turn that over to the machine.
If we do, The Matrix will seem more like a documentary. We’ll be little different from the suspended sentients feeding the machine bioelectric energy.
Jim, tool vs. drug is a profound distinction! Thanks for that wisdom. Morphine is a great tool to ease severe pain but taking it every day is the path to destruction.
“It felt to my brain what a good workout at the gym feels like to my body. I never, ever want to turn that over to the machine.”
Me neither.
I’ve been in the software industry for almost half a century, so I have a bit of experience to call upon: Unless the AI is connected to the internet, you can’t trust any answer. The examples you cited were before ChatGPT had that option. Even if the AI was internet connected, you still need to check the results just as you would have prior to AI.
AI is still a developing area and every quarter sees advances. If you look back at prose developed from Chat 3 to what is available with Chat 4, you will see a world of improvement. If you are interested in looking at the capabilities of AI-generated prose, models such as Claude, Deepseek, Mistral, and even Grok 3 generate much more usable results.
With that being said, AI-generated prose still requires a lot of effort to get in the first place and then when generated, needs significant editing. If you have established a writing style as many of us have, you will not be satisfied with the outputs since it will fall short of what you are wanting even if you give it a few thousand words to sample.
There is a lot of setup and management needed to get a first draft out of AI. I’ve tried it out, after learning how to prompt, with a number of software projects having AI assist with the coding. You can expect to have to create a set of instructions running 5–10K words to get 60K or so words out, and it could take a few days to sort out those instructions. Then you have a document that will need quite a lot of editing, with the possibility of cascading changes affecting the rest of the draft and necessitating a reiteration of the process.
I have found that using an internet-connected AI can produce some good suggestions for plots, help with character profiles, do some data mining for marketing, and maybe help with blog topics. Creating summaries of documents is also something that AI is decent with. Another area where it could shine is for Series Bibles, where you can mine the Bible to make sure you keep continuity. You could even use the AI to create the Bible.
If you aren’t using an internet connected AI, suspect everything it is telling you since the AI has no means of checking for facts.
Welcome, Fred. Always glad when someone who’s knowledgeable and experienced adds to our discussions.
Five to 10K words in preparatory instructions sounds like excess work (and time) to produce 60K words you have to heavily edit anyway.
Using AI to write summaries, sales descriptions, and series bibles definitely sounds useful. But as far as creativity goes, I’ll stick with my fallible brain.
A few minutes ago, I was typing a comment under a GoComics cartoon, and their AI said I was using offensive language. I was talking about the new state cookie. Sigh. Maybe, AI prefers Oreos.
“Garbage in, garbage out” should now be “stupid content, stupider recipient.”
Sheesh, Marilynn, offensive cookie language???
Stupidity is contagious.
Fun and informative post, Debbie. The sarcasm cracked me up. 😂
I’ve used ChatGPT as a tool to write a book description or logline to use as a starting point to get me thinking more like a copywriter than a novelist. But I always—ALWAYS—rewrite every word. AI sounds too much like…well…a robot.
Sue, using ChatGPT for those tasks sounds helpful. Good distinction between the way a copywriter thinks vs. how a novelist creates–different parts of the brain.