Deepfakes: Opportunities, Risks, and Regulation

“I’ll split 50% royalties on any successful AI-generated song that uses my voice,” Grimes tweeted to her more than one million followers. Deepfakes will be a core part of the emerging Identity Economy, in which each of us will be able to harness our identity, provided we keep control over such applications!

Here is an overview of the current deepfake landscape, from opportunities to risks, as well as risk management options and regulatory adjustments.

Deepfakes: a few opportunities

The first industry that comes to mind when it comes to deepfakes and business opportunities is media production: cutting the cost of using actors, simplifying production planning and logistics, and offering diversity options as you watch a movie. Imagine being able to choose your character and their main characteristics. You want a replay of Titanic with Taylor Swift in place of Kate Winslet… Or what if you want to become the main character of the movie and change its ending to save Leonardo… It would be an immense opportunity for streaming services to extend their features and personalize the viewing experience.

We also see many possibilities in marketing, consumer goods, and retail, with hyper-personalization of campaigns and the possibility to see yourself using products or wearing clothes before purchasing them.

Even if still in its infancy, in the healthcare industry we can consider many use cases: synthetic data generation for research purposes, surgery preparation on a digital twin, and new ways of diagnosing or monitoring disease without risking breaches of real patient privacy.

We also hear about more eccentric use cases, such as bringing people back from the dead, even from ancient times, or keeping a digital clone of a loved one alive. I cannot truly imagine this becoming mainstream; just as not everyone wants to consult a medium or a channeler, or to know the date of their death, I believe it will remain a marginal deepfake application.

However, reviving historical figures for educational purposes or to convey empathy is a use case worth exploring.

In some criminal investigations and reconstructions, deepfakes can be a valuable tool, though once again we need to be very careful about why we would use one, with what objective, and what the consequences could be, such as feeding the morbid attraction that drives the current true crime trend.

Introducing a deepfake policing framework

I want to offer a framework for thinking about deepfake management and regulation; a similar framework can be applied to cyber policing, social media moderation, ethics in the metaverse, or AI ethics in general…

A first step in our thinking consists of identifying clusters or segments:

  • Agents with bad intentions, malevolent actors: this covers hackers, scammers, and cybercriminals who demand ransoms, seek to destroy your reputation, conduct propaganda and misinformation campaigns, or even terrorize populations…
  • Ruthless / uncaring actors: they believe it is fun to play with the technology (there is even a component of attraction, sometimes of addiction) without thinking through the consequences: for example, creating a deepfake of the pope or of Donald Trump out of prison without fully considering how it can serve misinformation, creating a new music video with Drake and The Weeknd without assessing the impact on copyright, or generating a deepfake of global leaders threatening nuclear attacks without thinking about the consequences for international diplomacy.
  • The grey zone: these actors don’t necessarily mean harm, but if something is good for business, they close their eyes to its bad aspects. Competitors develop AI chatbots and filters, so some actors jump on the bandwagon without developing the use case thoroughly and carefully enough: the product might not protect user data well enough, or it might reproduce institutionalized bias. The book Unmasking AI by Dr Joy Buolamwini is exemplary at exploring our more or less conscious biases and dysfunctions (such as the male gaze, or coded bias, in data selection and in algorithm input) as we build LLMs.
  • Unintended consequences: some actors mean well, but bad things happen. For example, artists agree to have their image and performance deepfaked in exchange for remuneration; in that sense they harness their identity, but as a consequence their image loses its rarity. Instead of the exceptional Tom Cruise movie or the exclusive Eminem song of the year, frequent generated content becomes commonplace and loses value through inflation. You can also think of unintended consequences in the porn industry: to avoid exploiting underage actors and actresses, they get deepfaked, but as a consequence the availability of, and addiction to, child sexual abuse material rises, with scenarios we no longer have control over.
  • The insufficient perspective, which comes with unforeseen consequences: we have a hard time figuring out what the biggest risks of the technology could be for child development, for mental health, for cognitive, social, and emotional development, and for behaviors, because we don’t have enough distance, experience, or clinical data. I’m thinking here of how the Proteus effect emerged as we developed the metaverse: the behavior of an individual within virtual worlds is transformed by the characteristics of their avatar, with an impact on their real life. Another connection is the Her phenomenon, from the movie Her: how we might bond with non-existent creatures and start to believe in fake proximity with celebrities because we converse with their Meta AI chatbot every day, and how we might lose track of reality and develop psychosis.

I believe it is especially important to introduce these different levels because, for a regulatory system to be relevant, we need to make those distinctions and adapt how we educate, regulate, monitor, control, and penalize according to those different levels of use cases.

I am an optimist, in the sense that I am not in favor of simply banning a technology or condemning it because there are risks. I’d rather advocate for us all (futurists, ethicists, policymakers, corporate leaders, educators, and global citizens) to take the time and effort, and deploy the resources, to identify those risks and to develop adapted safeguards.

Actions for responsible innovation regarding deepfakes

Let’s take a look at a few options for deepfake management and at what we might expect to see more and more of.

  • The EU AI Act might be considered insufficient or too limiting by some players, but it is a true collective effort to reconcile two major concepts: disruptive innovation and responsibility.
  • Counter-technology and a cat-and-mouse game between countermeasures and counter-countermeasures (with deepfake detectors or scoring of AI-generated content).
  • It also means education: in California and other parts of the world, children receive classes on Media Literacy and Internet Safety; those programs will increasingly include awareness of fake media.
  • Social media platforms now send a warning when content goes viral too fast, is forwarded multiple times, or is not fact-checked. Those verification systems will need to become even more accurate. The same way we learn to identify a non-secure connection, an unprotected transaction, or phishing emails, I hope that in the future we will have ways to clearly verify and track original content (a toy sketch of such a gate follows this list).
  • But I don’t want to be delusional either: on top of risks, regulation, and tech solutions, we are very human, and we develop behaviors that can be toxic for ourselves and for our life in community. In her article on The WhatsApp Auntie phenomenon, Tunu Wamai illustrates how we develop social behaviors and social media tactics that, combined with deepfakes, can exacerbate the harm.
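To make the detector scoring and viral-content warning ideas above concrete, here is a minimal sketch of a moderation gate. It is purely illustrative: the thresholds, field names, and actions are hypothetical, not any platform’s actual system.

```python
# Toy moderation gate combining a deepfake-detector score with virality
# signals. All thresholds, fields, and actions are hypothetical.
from dataclasses import dataclass

@dataclass
class Post:
    deepfake_score: float    # detector output: 0 = likely authentic, 1 = likely synthetic
    forwards_per_hour: int   # how fast the content is spreading
    fact_checked: bool       # has it passed a verification pipeline?

def moderation_action(post: Post) -> str:
    if post.deepfake_score > 0.8:
        return "label as likely AI-generated"
    if post.forwards_per_hour > 1000 and not post.fact_checked:
        return "warn: spreading fast and unverified"
    return "no action"

print(moderation_action(Post(0.9, 50, False)))    # -> label as likely AI-generated
print(moderation_action(Post(0.2, 5000, False)))  # -> warn: spreading fast and unverified
```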

Some signals about deepfake regulation and management

More than 15 years after his death, comedian George Carlin has been revived in an AI-generated comedy special titled “George Carlin: I’m Glad I’m Dead,” produced by the AI podcast Dudesy. The special features an impression of Carlin’s comedic style and voice, addressing current issues such as mass shootings, social media, and AI itself. The comedian’s daughter openly criticized the special, emphasizing that her father’s unique genius cannot be replicated by AI and suggesting support for living comedians instead.

In other news, U.S. Representatives María Elvira Salazar (R-FL) and Madeleine Dean (D-PA) introduced the No Artificial Intelligence Fake Replicas And Unauthorized Duplications (No AI FRAUD) Act. The bill establishes a federal framework to protect Americans’ individual right to their likeness and voice against AI-generated fakes and forgeries. “Artificial intelligence (AI) brings immense innovation and convenience to America’s most critical business sectors and consumers, but it also comes with the unintended consequence of allowing thieves to steal their victim’s identity and intellectual property (IP) through AI. There is a vital need for federal law to address these concerns while establishing clear First Amendment protections. (…) It’s time for bad actors using AI to face the music. This bill plugs a hole in the law and gives artists and U.S. citizens the power to protect their rights, their creative work, and their fundamental individuality online. (…) Not only does our bill protect artists and performers, but it gives all Americans the tools to protect their digital personas.” The No AI FRAUD Act establishes a federal solution with baseline protections for all Americans by:

  • Reaffirming that everyone’s likeness and voice is protected, giving individuals the right to control the use of their identifying characteristics;
  • Empowering individuals to enforce this right against those who facilitate, create, and spread AI frauds without their permission; and
  • Balancing the rights against First Amendment protections to safeguard speech and innovation.

Google recently filed a patent application for an AI-based system that detects “information operations campaigns on social media.” The system relies on neural network language models to track and predict whether text within social media posts contains misinformation. Google is determined to fight deepfakes, especially in this 2024 election year, with ads disclosures (requiring election advertisers to disclose when their ads include realistic synthetic content), content labels (YouTube will require creators to disclose when they’ve created realistic altered or synthetic content, and will display a label when displayed content is synthetic), and digital watermarking via SynthID (a tool from Google DeepMind that embeds a digital watermark directly into AI-generated images and audio).
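To give a feel for the general idea (this is my toy illustration, not Google’s patented system, which relies on neural network language models), here is a minimal sketch of scoring posts for misinformation with a simple text classifier; the example posts and labels are hypothetical.

```python
# Toy misinformation scorer: TF-IDF + logistic regression as a stand-in
# for a large neural language model. Training data is hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled posts: 1 = flagged as misinformation, 0 = benign.
posts = [
    "BREAKING: leaked video PROVES the election was stolen, share now!!!",
    "The city council meets Tuesday to discuss the new bike lanes.",
    "Doctors don't want you to know this one secret cure!!!",
    "New study published today examines coral bleaching patterns.",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

# Score an unseen post: probability that it belongs to the flagged class.
new_post = "SHOCKING footage the media is hiding from you, forward this now!"
print(f"misinformation score: {model.predict_proba([new_post])[0][1]:.2f}")
```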

A patent from Intel aims to combat biases that emerge from improper training by recognizing “social biases” in sets of image training data. The system tackles visual biases in training data by simulating “human visual perception and implicit judgments.” Essentially, it figures out what draws a person’s eye first in an image to determine its most biased aspect, then assigns it a bias score of low, medium, or high.
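To show the mechanics of that kind of scoring, here is a toy sketch of my own (not Intel’s patented method): a crude saliency proxy estimates where the eye is drawn, and hypothetical thresholds bucket each image into a low, medium, or high bias score depending on how concentrated that attention is.

```python
# Toy low/medium/high "bias scoring" of an image via a crude saliency proxy.
# The proxy and all thresholds are hypothetical illustrations.
import numpy as np

def saliency_proxy(image: np.ndarray) -> np.ndarray:
    """Crude saliency map: each pixel's deviation from the mean brightness."""
    gray = image.mean(axis=2) if image.ndim == 3 else image
    return np.abs(gray - gray.mean())

def bias_score(image: np.ndarray) -> str:
    """Bucket how concentrated visual attention is into low / medium / high."""
    sal = saliency_proxy(image)
    top = np.sort(sal.ravel())[::-1]      # most salient pixels first
    k = max(1, int(0.05 * top.size))      # top 5% most salient pixels
    concentration = top[:k].sum() / (top.sum() + 1e-9)
    if concentration < 0.10:              # hypothetical thresholds
        return "low"
    if concentration < 0.25:
        return "medium"
    return "high"

# Example on random pixels standing in for a real training image.
rng = np.random.default_rng(0)
print(bias_score(rng.random((64, 64, 3))))
```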

Cybersecurity company McAfee just announced an AI-powered deepfake audio detection technology, known as Project Mockingbird, at CES 2024. Cheapfakes involve manipulating authentic videos, like newscasts or celebrity interviews, by splicing in fake audio to change the words coming out of someone’s mouth, making it appear that a trusted figure has said something different from what was originally said. Anticipating the ever-growing challenge consumers face in distinguishing real from digitally manipulated content, McAfee Labs, the innovation and threat intelligence arm at McAfee, has developed an advanced AI model trained to detect AI-generated audio. It uses a combination of AI-powered contextual, behavioral, and categorical detection models to identify whether the audio in a video is likely AI-generated, and currently claims a 90% accuracy rate.
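Here is a minimal sketch of what fusing several detector signals into one verdict can look like. This is my illustration, not McAfee’s implementation: the weights, the threshold, and the meaning of each score are hypothetical.

```python
# Toy fusion of contextual, behavioral, and categorical detector scores
# into one verdict. Weights and threshold are hypothetical.
from dataclasses import dataclass

@dataclass
class DetectorScores:
    contextual: float   # does the audio fit the surrounding context?
    behavioral: float   # prosody, breathing, and timing anomalies
    categorical: float  # similarity to known voice-cloning signatures

WEIGHTS = {"contextual": 0.3, "behavioral": 0.3, "categorical": 0.4}
THRESHOLD = 0.5  # hypothetical operating point

def is_likely_ai_generated(s: DetectorScores) -> bool:
    """Fuse the three detector scores into a single weighted verdict."""
    fused = (WEIGHTS["contextual"] * s.contextual
             + WEIGHTS["behavioral"] * s.behavioral
             + WEIGHTS["categorical"] * s.categorical)
    return fused >= THRESHOLD

print(is_likely_ai_generated(DetectorScores(0.2, 0.7, 0.8)))  # True (fused = 0.59)
```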

2024 will also see an explosion of small AI models built for niche use cases, resulting in fewer errors and less bias, which will make AI even more accessible than it already is.

Listen to our latest podcast episode on the Informing Choices mini-pod by Steve Wells, dedicated to “Deepfakes: The Good, the Bad, and the Ugly,” with Tunu Wamai.

Published by Sylvia

Futurist - Futures Thinking & Strategic Foresight
