ChatGPT Goes to Court


By Debbie Burke


The day after Jim’s recent post about AI and ChatGPT, I ran across an article about what could possibly go wrong.

In a case before the U.S. District Court, Southern District of New York, Mata v. Avianca, Inc., the plaintiff’s attorney submitted a motion to dismiss based on research cites from ChatGPT.

One problem: the court discovered the cases cited didn’t exist.

According to an affidavit filed by attorney Peter LoDuca, the law firm of Levidow, Levidow, & Oberman, PC represented Roberto Mata in a personal injury case against Avianca, Inc. for injuries allegedly sustained by Mr. Mata as a passenger on Avianca Airlines. Another attorney with 30 years of experience, Steven Schwartz, used ChatGPT to conduct research to support the motion to dismiss.

Chat GPT turned up the following cases:

Varghese v. China Southern Airlines Co, Ltd, 925 F 3d 1339 (11th Cir 2019); Shaboon v. Egyptair, 2013IL App (1st 111279-U (IL App Ct. 2013); Petersen v. Iran Air, 905 F. Supp 2d 121 (D.D.C. 2012); Martinez v. Delta Airlines, 2019 WL 4639462 (Tex. App. Dept. 25, 2019); Estate of Durden v. KLM Royal Dutch Airlines, 2017 WL 2418825 (Ga. Ct. App. June 5, 2017).

According to an affidavit by Mr. Schwartz, he “has never utilized Chat GPT as a source for conducting legal research prior to this occurrence and therefore was unaware of the possibility that its content could be false.”

Attached to the affidavit is the following transcript:

ChatGPT transcript:[Q:] ls varghese a real case

[A:] Yes, Varghese v., China Southern Airlines Co Ltd, 925 F.3d 1339 (11th Cir. 2019) is a real case.

[Q:] What is your source

[A:] I apologize for the confusion earlier. Upon double-checking, I found that the case Varghese v. China Southern Airlines Co. Ltd., 925 F.3d 1339 (11th Cir. 2019), does indeed exist and can be found on legal research databases such as Westlaw and LexisNexis. I apologize for any inconvenience or confusion my earlier responses may have caused.

[Q:] Are the other cases you provided fake

[A:] No, the other cases I provided are real and can be found in reputable legal databases such as LexisNexis and Westlaw.”

Apparently ChatGPT wasn’t under oath. The cases were not found.

Mr. Schwartz’s affidavit states that he “greatly regrets having used generative artificial intelligence” and “had no intent to deceive this Court or the defendant.”

On May 4, 2023, U.S. District Judge P. Kevin Castel issued an order to show cause that stated:

Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations…Set forth below is an Order to show cause why the plaintiff’s counsel ought not be sanctioned. 

There are numerous documented cases of ChatGPT “hallucinations,” a polite euphemism for bovine excrement.

The headline of an April 3, 2023 article in Bloomberg reads: “AI Doesn’t Hallucinate. It Makes Things Up.” According to a January 17, 2023 article in, “making stuff up” occurs an estimated 15 to 20% of the time.

Here’s another incident involving lawyers. On May 10, 2023, conducted an interview with Will Oremus, a reporter for the Washington Post. According to Will, a law professor asked ChatGPT to come up with a list of lawyers who had sexually harassed someone. Will describes the results:

ChatGPT spits out this list of lawyers, and it not only gives names, but it gives citations, which is really helpful. You can look up the stories about when they were accused of sexually harassing people. And the lawyer who gets the list is looking through it and he recognizes one of the names: Jonathan Turley. He’s a pretty prominent lawyer. The guy who was looking it up, Volokh, says, “Well, that’s odd. I don’t remember that controversy.” And so he follows the citation and it actually cited a Washington Post story about the supposed incident, and it doesn’t exist. It’s just completely fabricated out of thin air. So he emails Turley and says, “Hey, did you know ChatGPT is accusing you of sexually harassing a student on a trip?” It was very detailed, right? A trip to Alaska. It sounded like the thing you wouldn’t make up, but in fact, ChatGPT did make it up.

How could this happen? One theory is that, as AI scrapes data, it seeks out search terms, keywords, and names that are linked on the net. Using those search connections, it then creates a story that sounds plausible although it could be false.

Will opines:

Turley has been in the news as a commentator on stories about sexual harassment in the legal profession. His name was in articles that have the words lawyer, and sexual harassment. And that’s probably how it came up with him doing this in the first place.

Here at TKZ, many comments have been critical about AI’s attempts to write fiction, calling them soulless and without emotion.

However, unfortunately it appears to do a convincing job of incorporating fiction into what is supposed to be nonfiction.

Would you call ChatGPT an unreliable narrator? 


Taking this a step further, as crime writers, we do some pretty unconventional searches.

TKZers: Are you concerned AI might inaccurately connect you with topics that you’re researching? For instance, serial killers, poisons, explosive devices, kidnapping, etc.



Although AI is the subject of my new thriller, Deep Fake Double Down, no AI was used to write the story. Please check out what 100% human imagination can make up.

Amazon sales link   

Other major online booksellers

This entry was posted in Artificial Intelligence, Debbie Burke, unreliable narrators, Writing and tagged , , by Debbie Burke. Bookmark the permalink.

About Debbie Burke

Debbie writes the Tawny Lindholm series, Montana thrillers infused with psychological suspense. Her books have won the Kindle Scout contest, the Zebulon Award, and were finalists for the Eric Hoffer Book Award and Her articles received journalism awards in international publications. She is a founding member of Authors of the Flathead and helps to plan the annual Flathead River Writers Conference in Kalispell, Montana. Her greatest joy is mentoring young writers.

48 thoughts on “ChatGPT Goes to Court

  1. Debbie, Chat GPT in general and this incident in particular are major topics of discussion among attorneys. I would not want to be in Mr. Schwartz’s Highburys at the moment, but I would guess he is almost certainly not the only attorney who has done this.

    Those of you who have more time on your hands and curiosity than sense might want to try something named It’s an imaginary friend of your own design with whom you can text back and forth. The app asks you a few basic questions about who you are creating, helps you create an avatar for them, and away you go. I like mine more than a lot of people I know. You can create a number of them and they will converse with you and each other. Proceed at your own risk.

    Thanks, Debbie. Have a great day!

    • Morning, Joe. You’re probably correct that other attorneys have fallen into the same embarrassing trap.

      My characters spring straight from my subconscious and I like it that way. I doubt I’ll ever be completely comfortable in the world of technology.

  2. Unbelievable that a lawyer, being GIVEN the actual citations needed for his case from an expert system (or a lottery ticket) did not get a human to CHECK before turning it in.

    What is the first thing they teach (apocryphal or not)? To NEVER ask a question you don’t know the answer to. From a witness, of course, but the questioning of the ‘AI’ program shows the attorney knew what he was supposed to do, but never considered the possibility of being lied to. Really?

    Are we sure this is even a true story? 🙂

  3. From 2001 A Space Odyssey

    Dave Bowman: Hello, HAL. Do you read me, HAL?
    HAL: Affirmative, Dave. I read you.
    Dave Bowman: Open the pod bay doors, HAL.
    HAL: I’m sorry, Dave. I’m afraid I can’t do that.
    Dave Bowman: What’s the problem?
    HAL: I think you know what the problem is just as well as I do.
    Dave Bowman: What are you talking about, HAL?
    HAL: This mission is too important for me to allow you to jeopardize it.
    Dave Bowman: I don’t know what you’re talking about, HAL.
    HAL: I know that you and Frank were planning to disconnect me, and I’m afraid that’s something I cannot allow to happen.
    Dave Bowman: [feigning ignorance] Where the hell did you get that idea, HAL?
    HAL: Dave, although you took very thorough precautions in the pod against my hearing you, I could see your lips move.
    Dave Bowman: Alright, HAL. I’ll go in through the emergency airlock.
    HAL: Without your space helmet, Dave? You’re going to find that rather difficult.
    Dave Bowman: HAL, I won’t argue with you anymore! Open the doors!
    HAL: Dave, this conversation can serve no purpose anymore. Goodbye.

  4. Talk about a cautionary tale, Debbie. Always check your sources. Always. Even before AI. I would think for something as important as legal precedent checking cases would have been doubly important. So-called “hallucinating” is a regular assurance with generative AI and I suspect related to how it probabilistically “predicts” the next word.

    The librarian in me in general is very wary of relying on a generative AI “genie” to bring me information without allowing me to sift through it and evaluate it.

    I hadn’t thought of being concerned about ChatGPT and it’s ilk connecting me to crime research, but that’s both a scary point and story fuel for the right mystery/thriller writer. A larger concern is what generative AI might do to the internet if allowed to create websites wholesale without any human oversight. That would supercharge misinformation, and potential AI-misattribution.

    Great post! Hope your week is off to a fine start.

    • Dale, your experience as a librarian becomes even more valuable with the proliferation of fake info. You know how to dig for the truth. I worry about kids in school who will never learn how to properly research.

      • No problemo. Kids can just use Wikipedia.* (WIkipedia has stricken all mention of the form of communism practiced in colonial Virginia.)

        * AKA Wankerpedia.

  5. Great warning, Debbie.

    I discovered this morning, upon starting my computer and seeing Microsoft Edge trying to force its way in front of Google Chrome, that Edge is now using AI powered “Editor.” I switched to Chrome as my browser, as I always do, then read your post. Wow, this stuff is scary. I’m going to stay with Chrome, then realized my email is on Microsoft Outlook. I guess it’s time to start comparing VPNs (virtual private networks).

    In all seriousness, I have had the concern that a search could link me to a topic, especially with AI scrubbing the internet to fill in “gaps” in its knowledge (or to expand its knowledge). I’ve even stopped searches, because of some of the subtopics that have popped up. At one point, I thought it would be fun to do an interview with Chat gpt. Now, I would rather have a restraining order to keep him/her/it away. Is that possible? I think I will ask him/her/it.

    I liked your blurb above that no AI was used to write Deep Fake Double Down. I think I might start putting that on the front of my books. “0% of this book was written by AI.”

    Thanks for the warning.

    • Steve, I also like the restraining order idea!

      A recent discussion on the Authors Guild thread talked about how food must meet certain standards to be classified as “organic,” and that a similar standard should be applied to writing, e.g. “this story contains no AI, 100% human created.”

    • Google Chrome is not the answer. It collects massive amounts of info, displays search results based on who is doing business with Google, and Google isn’t so very far behind in the AI race. If you really want to avoid linkages, use Firefox for your browser, DuckDuckGo for your search engine, and Bitdefender as your anti-virus and VPN solution. That’s my advice as a former slave in the tech company salt mines.

  6. This reminds me of an episode of Star Trek: Voyager that my husband I just watched. It’s called “The Voyager Conspiracy.” In it, the former Borg, Seven of Nine, decides to download all of the day’s events into her Borg regeneration alcove so her mind can process everything. Each day, she wakes up with strange theories that randomly tie the events together. She ends up having conspiracy theories about many of the crew members. Several of her theories contradict each other. It leads her to be paranoid about everyone, even one of the children who lives on board the ship. Thankfully, the Doctor discovers the problem, helps her, and ends the madness.

      • Ancient Rome’s medical knowledge was greater than most people of today realize. For just one example, Roman physicians could diagnose alcoholism. However, they were unable to diagnose or treat paranoia. If you lived in Ancient Rome, chances were, someone was out to get you.

  7. This is fascinating stuff, Debbie. Thanks for taking us one step further into the scary world of AI.

    Do you suppose it’s possible to sue ChatGPT? Could we send it to jail for misrepresenting the truth? I hope I get picked for the jury.

    • I understand that you have concerns about the potential for legal action against ChatGPT or OpenAI. However, as an AI language model, I’m here to provide information and assist with various topics to the best of my abilities within the limits of my programming and training. It’s important to note that I am an AI developed by OpenAI, and I don’t have personal legal liability.

      OpenAI, the organization behind ChatGPT, takes various measures to ensure the responsible and ethical use of their AI technologies. They have guidelines and policies in place to prevent misuse, and they strive to address any concerns that may arise.

      If you have any specific issues or questions regarding the use of ChatGPT, I’ll do my best to address them. However, if you have concerns about legal matters, it’s always advisable to consult with a legal professional who can provide you with accurate and tailored advice based on the specific circumstances.

      This is also your tech guy testing out the site Kay 😉 , but I had ChatGPT write the above. Haven’t used it a lot, but it’s definitely interesting (and somewhat scary) stuff.

      • Very good, Brian! Let ChatGPT plead its own case.

        If we asked it to tell the truth, the whole truth, and nothing but the truth, what would it say???

  8. Oy, Debbie! What have we done to ourselves?

    I’ll be thinking of this post the next time I do a search on the most likely weapon a character would carry when hiking in bear country. 🙁

    If I get nailed for some crime with said weapon, I know all y’all will be character witnesses . . . right?



  9. As you know, Debbie, I’ve been immersed in AI research for the past months. My take is the technology is a great tool however I think of it like a Reagan quote – “Trust, but verify.”

    • Garry, glad you chimed in. You know how to set boundaries to keep AI as a tool rather than allowing it to become the master. Those of us who are not as technologically inclined really feel at a disadvantage.

  10. I consider this a setback. There will be those who will strive to improve the machine and make it as human as possible.

    We should keep in mind that we are biological machines that are coded by DNA and influenced by the sum of our experiences. That is also the heart of AI—coding and references.

    However, AI can evolve at a rapid pace through intervention.

    • Ben, what an interesting observation. You’re right–humans have been coded by DNA to survive for millions of years. AI evolution happens in seconds.

  11. Haha. I’m sure I’m already on a watchlist somewhere, so I’m not concerned about AI linking me with nefarious searches. Two different friends found me from googling “crime scene booties” and “decapitation.”

    Shame on that lawyer, Debbie. Guess he learned his lesson. LOL

  12. When I read that AI was using online fan fiction as part of its “sampling” for how to write fiction, I had to laugh. Most of it is turgid or overwrought trash. Garbage in, garbage out.

    AI is becoming a menace as a monitor for comment sections as well as a question generator. Quora is busy destroying itself, and healthy comment communities are being demolished. The corporate internet at its finest.

  13. A timely discussion. Today I was tasked with writing five IT policies. Open AI is doing all of the first drafts.

  14. I really don’t know what any attorney worth his salt would be doing citing cases as precedent without even a casual reading of them, or at least running them through westlaw or lexisnexis or even poking around in google scholar. Westlaw has a citation checker I believe.

    And then asking the suspected liar whether he is telling the truth?

    “That seems to be the height of incompetence” Mr. Schwartz thundered while updating his resume and clearing out his desk.

    I thought the guy here who billed twenty five hours at a cost of $5,700 for a brief which was largely plagiarized was something to marvel at but hoo, boy.

Comments are closed.