by Debbie Burke
Almost 20 years ago, a giant communications company decided to outsource its phone customer service to other countries. I learned about this from a friend who worked there. The company announced massive layoffs because overseas labor was cheaper than American workers.
Then, to add insult to injury, those employees whose jobs were being eliminated were required to train their replacements.
Not surprisingly, outsourcing didn’t work out well. There was massive consumer backlash because the customers and the new workers couldn’t understand each other on the phone. But the damage had been done. Thousands of American workers lost their jobs, and the company’s reputation took a hit from which it never recovered.
That kind of parallels today’s situation with writers and AI. Our work is being scraped from illegal pirate sites and used to “train” AI to replace us.
Some people joke that AI (artificial intelligence) is “artificial insemination.” Writers are being screwed without receiving any enjoyment. They didn’t even buy us dinner first.
The Authors Guild (AG) has been at the forefront of efforts to protect writers from unauthorized use of copyrighted works to train AI. In July 2023, they sent an open letter to the CEOs of AI giants including OpenAI, Meta, Microsoft, IBM, and others, along with a petition signed by 15,000 authors. AG also testified before the Senate, decrying pirate sites used by tech companies to “train” AI models.
The genie is out of the bottle. AI is here to stay. The question now is: can the genie be forced to compensate writers for their words?
Here’s an excerpt from the Authors Guild statement on AI:
“The Authors Guild believes that the right to license a work for AI training belongs to the author of the work unless the rights are expressly granted in an agreement.”
A bill called “The Generative AI Copyright Disclosure Act of 2024” is under consideration in the House of Representatives. It only requires disclosure by anyone who uses copyrighted works to train AI; it does not address fair compensation for that use.
Recently Draft2Digital (D2D) did a survey among authors, publishers, and others to determine how they felt about the use of AI and what authors would consider fair compensation for use of their work. D2D CEO Kris Austin kindly gave permission to quote from the survey results (full results at this link).
Here are some highlights:
1. “Why do authors oppose AI training?”
AI companies are unethical/untrustworthy – 25%
Harms creatives & people – 25%
Ethical Objections to AI – 19%
Other Reasons – 14%
I worked hard for my work and it’s mine – 10%
AI has no place in creative work – 8%
2. “Do authors consider current scraping methods fair use?”
It’s not fair use – 49%
Ethically questionable – 42%
Fair use – 5%
No opinion – 3%
3. “Do authors know that AI companies might be willing to pay for training data?”
Unaware – 57%
Aware – 38%
Unsure – 5%
4. “Are authors interested in the opportunity to sell their AI training rights?”
Yes – 31%
No – 25%
Maybe – 45%
5. “Does it matter to authors how the end product LLM (large language model) will be used?”
Yes, it matters. – 76%
Not as long as I am compensated – 22%
No opinion – 2%
The next two questions concern whether authors would consider having their work used in non-competitive markets (uses that would not affect the author’s income) and competitive markets (e.g., an AI-written mystery could sell on Amazon right next to your book, but at a much lower price).
6. “If the use case is non-competitive, will authors consider selling their AI training rights?”
No amount of money will ever be enough – 49.5%
Open to non-competitive opportunities – 50.5%
Would accept less than $100 per book – 11.1%
Only if $100 or more per book – 39.3%
Only if more than $5,000 per book – 14.1%
7. “If the use case is competitive, will authors consider selling their AI training rights?”
No amount of money will ever be enough – 62.8%
Open to competitive opportunities – 37.2%
Would accept less than $100 per book – 6.3%
Only if $100 or more per book – 30.9%
Only if more than $5,000 per book – 15.8%
Here’s a summary of D2D’s position:
D2D’S STANCE
“Until we see significant reforms, especially around greater contractual protections and transparency governing use, intellectual property protections, and rights restrictions, Draft2Digital will not offer AI rights licensing opportunities.
· It’s a positive development that AI developers are seeking to pay for licenses
· Better protections are needed before D2D or its publishers can entertain such licenses
· AI training rights are an exclusive, valuable subsidiary right under the sole control of the author or publisher
· The rights-holder deserves full control over decisions related to if, when, and how their books are used or licensed for AI training purposes.
· Authors and publishers should refuse AI rights licensing contracts that are opaque, or that provide inadequate protections for author concerns
· AI developers must stop training upon books obtained without the rights-holder’s permission; otherwise, they will face continued reputational harm in the eyes of their customers and the creative community
· LLMs previously trained upon unlicensed content, and the applications built upon them, should either negotiate retroactive licensing settlements with rights holders, or scrap their LLMs and rebuild them from scratch by training upon licensed content only”
“At this time, Draft2Digital will not offer AI rights licensing opportunities.”
I believe most authors agree that compensation should be paid and payment should be retroactive to include past unauthorized use.
The devil is in the details.
· How will systems that detect use of copyrighted material be implemented?
· How will fair use be enforced?
· How much will authors be paid?
· What if an author doesn’t want their work used for AI training under any circumstances?
The communications company my friend worked for treated their employees shabbily but at least they told workers in advance that they had to train their replacements.
Authors and publishers were never told in advance. Tech giants simply started using creative works without permission or compensation to the creators. AI-written works now flood an already crowded marketplace. Our incomes suffer.
We study, rewrite, and work hard to create meaningful content and deserve fair compensation.
Those devilish details will be fought out in courts for years to come.
~~~
TKZers, how do you feel about AI’s use of your creative work to train LLMs?
Please share your answers to any or all of the questions.
~~~
Debbie Burke writes her thrillers without AI.
Fruit of the Poisonous Tree is now available for preorder at this link.
I was at a computer security conference where AI was a big deal. Supposedly AI can write code faster than an experienced coder, with no need to learn anything. AI powers phishing attacks. The whole time presenters were decrying the evils of AI, their slide decks were full of AI images and text.
Hmm, that’s quite a disconnect, Alan. They use the “evil” themselves to rail against the “evil.”
Human nature tends to think, “why pay for what you can get for free?” I confess I use free images from Unsplash, Pixabay, and other sites because I can’t afford to pay to illustrate presentations, articles, blog posts, etc. that I don’t get paid for. But I avoid images identified as AI-created. Unfortunately, you can’t always tell.
A good friend is a creative and pretty good at it. He no longer posts personal pictures or art so it can’t be scraped. He is lucky. He has a small army of lawyers to protect his professional work.
If only we could afford an army of lawyers!
I so agree Debbie. And part of the problem is with the consumer—how many of us buy cheaper products made overseas instead of American-made?
I really worry about the future my grandkids will live in.
Your concern is echoed far and wide, Pat. No matter where on the political spectrum people fall, the majority of people I talk with say the same thing.
I wonder about the brain development of children. Why think when AI does that for you?
Thanks for showcasing this today, Debbie! I was one of those who filled out this survey 🙂
I’m a hard no on letting AI be trained on my writing. Full stop. Some proponents of AI-generated fiction will argue that training AI on published writing is no different than the learning process that writers go through, but that’s false.
LLMs have no thought, no emotion, no awareness. The program generates material based on probabilities assigned to an enormous data set. It’s plagiarism, not learning. We writers learn by study, and yes, example, but then we write in our own voices and style. Moreover, we *think* in our own, unique ways, with our own idiosyncratic points of view. Most of all, we imbue our writing with our feelings and experiences. No LLM is capable of that.
I don’t see the dominance of AI-generated fiction as inevitable. I appreciate D2D taking this stance, but if things change, I’ll still opt out of allowing LLMs to train on my writing.
Thanks again for a very important post! Hope you have a wonderful day, filled with words.
Dale, I also took the D2D survey and am right beside you as far as allowing AI to train. No way.
You make excellent points about the difference between studying other authors’ writing and plagiarism. AI has no conscience, ethics, or moral values.
Thanks for your articulate comments!
Aargh! (Pirate-speak for language I never use…)
To be perfectly honest? My first reaction is: this crap really makes me want to quit, go outside, and plant tulips. 🙁
Change is overrated. If I had any tooth and nail left, I’d fight it to the finish line.
I’ll be interested to hear what others are thinking…
I hear ya, Deb. Sometimes I feel like the buggy whip maker.
But we can’t not write so we keep at it.
I used to say at the library that even change will change 🙂
Too sweet, Dale! Thanks for making me LOL this morning…
AI output can’t be copyrighted. Anyone dumb enough to use AI in their work will have to use it so sparingly that their whole work isn’t up for grabs for lack of copyright protection. Meanwhile, Amazon is creating software that can recognize AI so they can prune this garbage from their website. Not because they care about writers, but because no one can find the stuff they want to buy. The only people who will make any money in the grand scheme of things are the lawyers on both sides of the issue.
Marilynn, so Amazon is using AI to detect AI? Hmmm?
Years ago I saw a framed picture hanging on the wall in a lawyer’s office. It was a pen and ink drawing from the 1800s. One farmer was tugging on the halter of a cow, a second farmer was tugging on the cow’s tail. In between them, sitting on a three-legged stool was a smirking lawyer milking the cow.
Many are against generative AI but are OK with assistive AI. The disconnect here is that both were initially trained on the same “unethically” sourced datasets. I noticed the rhetoric change when many neurodiverse writers came out telling how AI was assisting them to write again, or making writing fun again for them.
Some people against AI also have engaged in cyber-bullying to the point that some of them were getting cease and desist letters. Unjust attacks against authors who may have unknowingly had AI elements in their covers are also very prevalent in many Facebook groups.
At this point in time, I believe that all the court cases citing plagiarism with AI have been dismissed because the plaintiffs could not prove their claims. The few lawsuits that continue have been modified to pursue compensation for the use of copyrighted material.
I had many mixed feelings about the AG lawsuit. In order to participate in the suit, you had to join AG (if you weren’t already a member). I saw this as a way for them to pad their bank accounts by jumping on the AI hate. Now there is a new scheme to buy into an additional service AG would like to provide. To top it all off, if you look at the last item in their AI FAQ, you will see they support the use and education of AI tools. So here you now have thousands thinking AG is against AI use while AG is advocating learning about it.
I see a lot of gatekeeping going on around the whole issue of AI use no matter the degree. This is similar to the gatekeeping that occurred when self-publishing came about.
If a person is in the business of providing a service to authors, I am going to research about them before I purchase services from them. If I find out they have been bullying people, I will refuse to buy a service from them. We may have strong feelings about various topics but if as a business person, you can’t be more diplomatic about voicing your feelings and opinions, then your business will suffer.
I also find it disgusting that there are “black lists” going around naming creatives who have either spoken positively about AI or even been moderate in their acceptance of its use. Authors are on that list who may have inadvertently had AI in their purchased covers. The level of hatred from one group of creatives toward another is terrible.
To top it all off, all the folks on Facebook and Instagram railing against AI are feeding the very beast they hate, since Meta created Llama, an AI model that serves as the basis of many other AIs.
All the hate going on is suppressing conversations about using or combating AI (depending on your position). AI poses challenges that we as creatives need to discuss to be better prepared to either exploit it or compete against it. It’s not going away.
Fred, thanks for joining the discussion. You’re so right that AI is not going away. Some uses are helpful, like tools that prompt writers for plot, formatting software, tracking sales, etc. But like any tool, it can be destructive. A hammer can be used to build a house or bludgeon someone.
What a sad state of affairs that authors attack each other. That’s why TKZ is such an oasis of rational discussion. We just want to help each other succeed.
Thanks for stopping by.
Thanks for bringing this important topic to TKZ, Debbie.
“The Authors Guild believes that the right to license a work for AI training belongs to the author of the work unless the rights are expressly granted in an agreement.”
That sounds fair to me.
Congratulations on the new Tawny Lindholm novel! I pre-ordered my copy yesterday.
Aw, thanks, Kay! Much appreciated! Hope you enjoy Fruit of the Poisonous Tree!
Several years ago, I wrote a post about book piracy sites. Piracy pales in comparison with the massive scope of AI’s unauthorized scraping.
Theft is theft, no matter how big or small.
As Kay said, I concur with the Authors Guild’s statement. AI training rights belong to the author. Otherwise, it’s piracy.
Sue, absolutely!