By Debbie Burke
Believe none of what you hear and half of what you see.
This saying has been around for centuries, variously attributed to Benjamin Franklin and Edgar Allan Poe.
Today, thanks to Artificial Intelligence (AI) and Machine Learning (ML), you can no longer believe anything you hear or see.
That’s because what your ears hear and what your eyes see could be a DEEPFAKE.
What is a deepfake? Wikipedia says:
…synthetic media in which a person in an existing image or video is replaced with someone else’s likeness. While the act of faking content is not new, deepfakes leverage powerful techniques from machine learning and artificial intelligence to manipulate or generate visual and audio content with a high potential to deceive. The main machine learning methods used to create deepfakes are based on deep learning and involve training generative neural network architectures, such as autoencoders or generative adversarial networks (GANs).
Deepfakes have garnered widespread attention for their uses in creating child sexual abuse material, celebrity pornographic videos, revenge porn, fake news, hoaxes, bullying, and financial fraud. This has elicited responses from both industry and government to detect and limit their use.
I wrote about AI three years ago. Since then, technology has progressed at warp speed.
The first recognized crime that used deepfake technology occurred in 2019 with voice impersonation.
The CEO of an energy business in the UK received an urgent call from his boss, an executive at the firm’s German parent company. The CEO recognized his boss’s voice…or so he thought. He was instructed to immediately transfer $243,000 to pay a Hungarian supplier. He followed orders and transferred the money.
The funds went into a Hungarian account but then disappeared to Mexico. According to the company’s insurer, Euler Hermes, the money was never recovered.
To pull off the heist, cybercriminals used AI voice-spoofing software that perfectly mimicked the boss’s tone, speech inflections, and slight German accent.
Such spoofing extends to video deceptions that are chilling. The accuracy of movement and gesture renders the imposter clone indistinguishable from the real person. Some research shows a fake face can be more believable than the real one.
Security safeguards like voice authentication and facial recognition are no longer reliable.
A November 2020 study by Trend Micro, Europol, and the United Nations Interregional Crime and Justice Research Institute concludes:
The Crime-as-a-Service (CaaS) business model, which allows non-technologically savvy criminals to procure technical tools and services in the digital underground that allow them to extend their attack capacity and sophistication, further increases the potential for new technologies such as AI to be abused by criminals and become a driver of crime.
We believe that on the basis of technological trends and developments, future uses or abuses could become present realities in the not-too-distant future.
The not-too-distant future they mentioned in 2020 is here today. A person no longer needs to be a sophisticated expert to create fake video and audio recordings of real people that defy detection.
In the following YouTube video, a man created a fake image of himself to fool coworkers into believing they were video-chatting with the real person. It’s long—more than 18 minutes—but watching even a few minutes demonstrates how simple the process is.
Consider the implications:
What if you could appear to be in one place but actually be somewhere else? Criminals can create their own convincing alibis.
If corrupt law enforcement, government entities, or political enemies want to frame or discredit someone, they can manufacture video evidence that shows the person engaged in criminal or abhorrent behavior.
Imagine the mischief terrorists could cause by putting words in the mouths of world leaders. Here are some examples: https://www.cnn.com/interactive/2019/01/business/pentagons-race-against-deepfakes/
Deepfakes could change history, creating events that never actually happened. Check out this example made at MIT of a fake Richard Nixon delivering a fake 1969 speech to mourn astronauts who supposedly perished on the moon. Fast forward to 4:18.
How was this software developed?
It arose from Machine Learning (ML). The process pits computer models against one another: one network generates fake expressions, gestures, and voices of real people, while another tries to tell the fakes from the genuine article. The more they compete with each other, the better both learn, and the more authentic the fakes become. (For the technically curious, a toy code sketch of this training loop follows the exchange below.)
A fanciful imagining of a contest might sound like this:
Computer A: “Hey, look at this Jack Nicholson eyebrow quirk I mastered.”
Computer B: “That’s nothing. Samuel L. Jackson’s nostril flare is much harder. Bet yours can’t top mine.”
Computer A: “Oh yeah? Check out how I made Margaret Thatcher cross her legs just like Sharon Stone.”
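Here is that toy sketch, written in Python with the PyTorch library. It is a bare-bones illustration of the adversarial idea only: the networks are tiny, the “real” data is just random numbers standing in for images, and the layer sizes are arbitrary. Real deepfake systems are vastly larger and train on hours of genuine footage.

```python
# Toy generative adversarial network (GAN) training loop in PyTorch.
# Everything here is a stand-in: random noise plays the role of real face
# images, and the networks are far too small for actual deepfakes.

import torch
import torch.nn as nn

LATENT_DIM = 16   # size of the random "seed" the generator starts from
DATA_DIM = 64     # size of each (fake) training sample, e.g. a flattened patch

# Generator: turns random noise into a candidate "fake" sample.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 128), nn.ReLU(),
    nn.Linear(128, DATA_DIM), nn.Tanh(),
)

# Discriminator: scores how "real" a sample looks (1 = real, 0 = fake).
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

for step in range(1000):
    # Stand-in for a batch of real data; a real deepfake pipeline would load
    # actual images or audio of the target person here.
    real = torch.randn(32, DATA_DIM).tanh()
    noise = torch.randn(32, LATENT_DIM)
    fake = generator(noise)

    # 1) Train the discriminator to tell real from fake.
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(32, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(32, 1)))
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_loss.backward()
    g_opt.step()
```

The key point is the tug-of-war: every time the discriminator gets better at spotting fakes, the generator is pushed to produce more convincing ones.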
The Europol study further outlined ways that deepfakes could be used for malicious or criminal purposes:
Destroying the image and credibility of an individual,
Harassing or humiliating individuals online,
Perpetrating extortion and fraud,
Facilitating document fraud,
Falsifying online identities and fooling KYC [Know Your Customer] mechanisms,
Falsifying or manipulating electronic evidence for criminal justice investigations,
Disrupting financial markets,
Distributing disinformation and manipulating public opinion,
Inciting acts of violence toward minority groups,
Supporting the narratives of extremist or even terrorist groups, and
Stoking social unrest and political polarization.
In the era of deepfakes, can video/audio evidence ever be trusted again?
~~~
A big Thank You to TKZ regular K.S. Ferguson who suggested the idea for this post and provided sources.
~~~
TKZers: Can you name books, short stories, or films that incorporate deepfakes in the plot? Feel free to include sci-fi/fantasy from the past where the concept is used before it existed in real life.
Please put on your criminal hat and suggest fictional ways a bad guy could take advantage of deepfakes.
Or put on your detective hat and offer solutions to thwart the evil-doer.
~~~
Debbie Burke’s characters are not created by Artificial Intelligence but rather by her real imagination.
Please check out her latest Tawny Lindholm Thriller.
Until Proven Guilty is for sale at major booksellers here.
Thanks for this, Debbie.
1) I wonder if the products of this software will stand up to facial recognition scrutiny as it currently exists.
2) For all you science fiction kids: What happens when the deepfake version of yourself acquires enough intelligence — whatever that is — to acquire the desire to take over your life? We’ve seen stories like that involving androids. A deepfake model would have a much easier time of it.
3) The Europol list is interesting, but I wonder why it limits the danger of violence being incited against individuals to minority groups. Isn’t everyone in danger? Asking for a friend.
I’ll be thinking about this one all day, Debbie. Thanks for the banquet for thought. Hope you are having a great week!
Hi Joe,
Debbie isn’t available right now so her deepfake clone is filling in comments. 😉
1. There are proposals to embed watermarks to identify deepfakes but I suspect hackers can defeat them.
2. Not familiar enough with the sci-fi genre to answer.
3. I agree we are all in danger.
Always glad to hear from you!
What happens when the deepfake version of yourself acquires enough intelligence — whatever that is — to acquire the desire to take over your life?
Artificial intelligence is a misnomer. What is possible is simulated intelligence. We can’t create life, just the semblance thereof. We barely know where consciousness is located in the brain, and it turns out that region is maybe the size of a pea, somewhere in the brainstem, IIRR. But the scans this is based on are statistically processed to yield results and are not perfectly reproducible. We do have some pretty good generalizations, but there are exceptions, such as “the man with no brain.” I’m sure you’ve seen the civil servant’s x-ray. https://www.sciencealert.com/images/brainscan.jpeg
(No, I don’t know if he works at the DMV.)
Behind any deepfake will be human willpower. We’re in enough trouble as it is, with two wills in our heads: https://www.researchgate.net/profile/Geoffrey-Guenther-2/publication/331047505
J, your comments are always thought-provoking and well-researched. Thanks for adding to the discussion.
The cartoon “Dilbert” has a current storyline about an AI that Dilbert is trying to erase before it kills him. A running story line in “Non Sequitur” is Alexa, Siri, and the other household AIs’ plans to destroy humanity. Fear of what computers can do runs deep. It’s not a surprise that it was a common plot in original STAR TREK.
Remember HAL in 2001: A Space Odyssey?
The old Mission Impossible TV series frequently had plots that relied on one of the team members impersonating someone in order to fool guards watching on closed-circuit TV. And of course, there’s the fairy tale of the prince and the pauper who looked enough alike to trade places.
As the human population continues to grow, we’ll have more pandemics (because that’s how nature works) and use more electronic communication to maintain quarantines. When we’re stuck in our bubbles, how will we know what’s real and what’s fake? The Matrix gets another step closer. Pretty scary stuff.
Kathy, great point about the increase in electronic communication due to pandemics. Scary indeed.
Thanks again for suggesting this topic.
Great post, Debbie. Thanks for updating and informing us on this disturbing (and depressing) criminal use of a new technology.
When I hear of the elderly (or the technologically handicapped) being robbed by electronic scams, it makes my blood boil. I don’t have answers to your first two questions, but regarding question #3, I can think of appropriate punishment that might make criminals think twice. Alas, it would be outlawed by the Eighth Amendment’s ban on cruel and unusual punishment.
But, we can write fiction and get creative in our just rewards, like kicking the terrorist over the edge of the dam. (Instrument of the Devil)
I hope you have a scam-free rest of the week!
Hahaha, Steve! Please don’t mention any more of my crimes until the statute of limitations has run out. 😉
So true that we fiction writers can dispense justice that eludes us in real life.
Scary stuff, Debbie. I can’t watch the news anymore without asking myself if what I’m being shown is real.
There was a case not too long ago of a network news program showing a video clip of a supposed attack here in the US by so-called “right-wing extremists” (can’t remember all the details); it turned out the clip they used was an old one of some kind of community event where a Civil War attack was simulated. The news gurus had to issue an apology.
All kinds of movies come to mind, but one that illustrates this topic for me was the 1997 Dustin Hoffman movie, Wag The Dog. A US President was discovered in a sexual entanglement two weeks before elections, so his minders manufactured a war somewhere in the world to a) distract the citizens; and b) make him a hero. And that was in 1997 . . . oy!
Deepfake is Deeperfake these days. 🙁
Deeperfake! Good one, Deb.
Fascinating, Debbie, and scary!
1. When I think of disguise, I remember Maurice Leblanc’s Arsene Lupin who was a master of the low-tech superficial fake. I much prefer that to the AI version!
2. You gave a pretty good list of all the ways this can be used by criminals.
3. I haven’t had enough coffee yet to think about solutions.
Funny, though. I was looking around last night for a good quote about writing to post on Twitter this morning. Here’s the one I scheduled: “There is only one plot–things are not what they seem.” — Jim Thompson
Very appropriate.
Great quote from Jim Thompson, Kay. Thanks for adding it.
No matter how much caffeine I have, I couldn’t come up with many meaningful solutions.
Good morning, Debbie. Fascinating post on deepfakes, which certainly can cause all sorts of mischief and, worse, potential crimes. One way would be to create a deepfake of a teenaged grandchild, then video call (either via phone or video messaging) and have the deepfake plead for money. Another would be deepfake blackmail, where a blackmailer creates a deepfake and, unless they are paid off, will post it online where it could potentially wreck the victim’s reputation.
One potential way to combat this is to employ intelligent systems (often called A.I.) to scrutinize the videos to prove they were deep fakes.
It’s an old adage in science fiction that it’s really about the present. In this case, the present has caught up with a science fictional future.
Thanks for another great True Crime Thursday post! Here’s to the real versions of all of us.
Good suggestions, Dale. The grandchild scam would be esp. effective. I shudder to think how many grandparents would understandably panic and pay immediately w/o checking that the child is okay.
The real version of me appreciates the real version of you!
We’re already seeing, with virtual interviews, that the people being hired aren’t the people who were interviewed. The more we stay apart from each other, the easier it is to falsify things and get away with it.
Thanks for mentioning the employment scam, Cynthia. With so many people working remotely, that’s fertile ground for falsification.
The pandemic’s most toxic effect is not the disease but disconnection and isolation from others.
This is awesome, Debbie! I can see using this in a plot. It also scares the bejeebers out of me. I watched one of the videos and will catch the others later, but in the one with Prez Obama, the voice wasn’t quite right. But I’m sure they can fix that!
Thanks for new ideas…
You’re welcome, Patricia. I’m already percolating the plot for a new thriller.
Fascinating subject, Debbie. You’ve just sent me down the Wiki-hole. Thanks…
You’re welcome, Garry. Now you don’t have time to mow the lawn, right?
Season 5, episode 10 of the CBS drama S.W.A.T., which aired on April 10th of this year, revolved around deepfaked police bodycam footage appearing to show the show’s protagonist, S.W.A.T. team leader Daniel “Hondo” Harrelson, executing two fellow police officers. The deepfake video was part of a plot by a despicable criminal to take revenge on Hondo, who in episode 2 of the season had killed the criminal’s equally-if-not-more despicable son to stop him from raping and murdering an innocent young woman.
I haven’t seen S.W.A.T., Russ. Thanks for mentioning the episode and for stopping by TKZ.
You’re welcome, but I erred in my initial post, undoubtedly because I was only halfway through my first cuppa joe (and possibly at least partly because I’m so old that my son has accused me of having been overly distraught when the last dinosaur perished). At any rate, the episode in question was episode 16, not 10.
Thanks for the correction, Russ.
I’m still in mourning from when my pet dinosaur went extinct.
Neither Franklin nor Poe authored that first quote as written. The language fits neither their styles nor their time periods. At best, it’s closer to Mark Twain’s period. Fun fact: When I was in graduate school in ancient times, one part of a standardized test an English major finishing their Masters had to take was a list of unattributed texts. I had to figure out the writer of each. To my surprise, it was easier than I expected.
Long before deepfakes and AI, screwed-up attributions and poor scholarship did plenty to mess up history. Almost 20 years ago, I saw a quote from Dickens about his love of chocolate at the bottom of someone’s email. It was a minor thing, but I knew it wasn’t from Dickens because men of that period didn’t admit to liking chocolate. It had been a drink brought to fashionable women and girls by their maids at breakfast since the Regency period and a bit earlier.
I put my literary research and Internet sleuthing skills to work. No, neither Dickens nor any of his female characters said it. Instead, I found the quote in a list of quotations: it came from another author and happened to be followed by a quote from Dickens. Eureka!
What I call the reality sniff test is a great way to judge a novel as well as most deepfakes and false news. If what it is saying goes against what the reader/watcher knows about reality, historical information, and human behavior, it’s most likely failed writing or a bunch of poop.
But, Marilynn, I found the Franklin and Poe sources on the internet so they must be true, right? 😉
What fascinating detective work you did for your Masters. I’d like to learn more how you did it.
Poop is the politest term that can be used for most news these days.
The saying was also in the great Motown hit, I Heard It Through the Grapevine.
I assume you mean my search for the Dickens chocolate quote. Major writers like Dickens have all their public domain works online, as well as concordances. I plugged the word “chocolate” into all these sites. After I couldn’t find “chocolate” or the quote, I did a general search of the quote until I found it from another author with a Dickens quote beneath it. Stubbornness and luck were on my side in this.
The Internet comments I found about the supposed Poe quote say that he expressed the general sentiment but not the direct quote.
Wow, Debbie. This is not only disturbing but terrifying. Just because we can do something, doesn’t mean we should. You’ve got my crime writer mind buzzing!
Scared me, too, Sue. I’m drafting the eighth book in my series with this concept in mind.
Today images are manipulated to be used out of context all the time. It has been happening for at least 25 years. Time magazine darkened OJ Simpson’s face to make him look “more criminal”. Middle East “news reports” are full of old snippets and staged shots. The plane crash from “Survivor” has been used dozens of times for different aircraft accidents.
Russia and to a lesser extent Ukraine are using old footage for new war coverage.
Pelosi did not stumble around drunk. Numerous people are not really in the X rated clips floating around of them.
So far, courtrooms have not been fooled by deepfakes or manipulated images, probably. Fooling people in general? It is going to happen. It is happening. It has happened.
For the fiction writer, the software is getting cheaper and easier to use. Deepfake a speech by Candidate X. Stir the wrath of a Candidate Y supporter. Supporter tries to bomb a Candidate X rally. A few thousand dollars and a bot farm and I am rolling.
Thanks for weighing in with your specialized knowledge, Alan. Please stay out of bot farms. 😉 We need your input here at TKZ.
Selling fake viagra and Ray Bans pays better. 🙂
Most anything pays better than writing!
Fascinating look at deepfaking, Debbie.
The subject of AI driven content is very interesting. In my view AI facilitates frauds on a massive scale.
I read an article last year by Stephen Marche in the New Yorker on the subject of GPT-3, an artificial intelligence text-generation engine that is packaged and available to anyone, for a price, from several vendors. With it you can produce SEO (search engine optimized) spam content for your blog or your website. You don’t even need anyone from Fiverr to help you write content.
I did a free trial of one such writing application called Sudowrite (as Marche did) and entered the first lines of some significant literature to see what it would produce. The results were wordy and unfocused, and, as the Everly Brothers sang, “It didn’t have much of a plot.”
There is another AI text generator called Dreamily AI that one can play with although it is not GPT-3 driven. You enter a couple of lines from something and see what you get.
I entered
Call me Ishmael
and here’s what it produced.
“I’d been waiting all night. And as I’d come to expect, the one thing that would always arrive late had arrived early. He’d finally got here, just as the sun had set.
“You’re here,” he said with a small smile on his face and an unassuming suitcase in hand.
There was no response, but he seemed undeterred, simply continuing to walk up to where I stood, my heart hammering in my chest.
He smiled at me, reaching out and touching my cheek softly before pulling away from our embrace. I missed his warmth already, and it felt like my whole body was missing him”
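For anyone curious what “available to anyone for a price” looks like in practice, here is a rough Python sketch of calling one of these hosted text-generation services. The endpoint URL, model name, and response fields are invented placeholders, not the actual Sudowrite, Dreamily, or GPT-3 interface; each vendor documents its own request format and authentication.

```python
# Hypothetical sketch of calling a hosted text-generation service.
# The URL, model name, and field names below are placeholders, not any real
# vendor's API; each service publishes its own endpoints and auth scheme.

import requests

API_KEY = "YOUR_API_KEY"  # issued by the vendor after you pay for access

def continue_text(prompt: str, max_tokens: int = 200) -> str:
    """Send an opening line and get back a machine-generated continuation."""
    response = requests.post(
        "https://api.example-text-vendor.com/v1/complete",  # placeholder URL
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "example-large-model",   # placeholder model name
            "prompt": prompt,
            "max_tokens": max_tokens,
            "temperature": 0.9,               # higher values = looser, more "creative" output
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["text"]           # placeholder response field

if __name__ == "__main__":
    print(continue_text("Call me Ishmael"))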
I taught online for three colleges for fifteen years and the struggle was always with cut and paste plagiarism. Now, it’s different. I can envision an avalanche of allegedly original material from AI sources submitted by students to overworked professors who do a quick scan of the material.
Instant tonnage.
I can also see reams of this stuff, zombie novels, porn and suchlike packaged and slammed into KDP for reasons that are beyond me. Unless it’s so you can say to your friends “I’m a published novelist.”
It’s a brave new world, isn’t it?
https://www.newyorker.com/culture/cultural-comment/the-computers-are-getting-better-at-writing
Robert, thank you for the link to the New Yorker article. Fascinating. Are writers soon to be lumped in with buggy whip makers?
In this brave new world, I’m a big chicken.
It is passing strange, Debbie. Here I am trying to learn yet another trade that’s becoming obsolete before my very eyes.
My father always said I was studying for my doctorate in obsolete technology.
Robert, don’t despair. After most humans are wiped out by the big EMP, nukes, or biological warfare, we survivors will still have stone tablets to write on. We can also tell stories around the campfire before we crawl into our cozy caves to sleep.