Written by Brian Lee, Karis Chan, Eugene Hong

In December 2016, American actor and writer Edgar Maddison Welch walked into Comet Ping Pong, a pizza restaurant in Washington, D.C., armed with an AR-15. His goal: to free child sex slaves allegedly being held there by Democrats. He shot the lock off a closet door in the restaurant, forcing it open, and found nothing but a stack of computer parts.

Prior to the incident, Welch had been following online conspiracy theories about the Democrats’ alleged activities. After three days of watching videos and reading posts, he decided to take matters into his own hands. When police arrived at the scene, he surrendered. He was soon charged with assault with a dangerous weapon and sentenced to four years in prison. This scandal, dubbed “Pizzagate,” happened eight years ago, but it carries heavy implications for the spread of artificially generated media, or deepfakes, across the internet today.

Deepfakes are realistic videos, photos, and audio recordings generated with artificial intelligence. The technology can be, and has been, used for many purposes, some of them illegal, ranging from tasks as benign as smoothing out slight accents in audio recordings to crimes as serious as identity theft.

Several recent scandals have centered on deepfake voice generation in particular. One example: Mark Read, the CEO of WPP, was the target of a failed deepfake fraud attempt in which an impersonator joined a meeting with an “agency leader,” using AI to clone Read’s voice. Another: ahead of the 2024 presidential election, thousands of voters in New Hampshire received robocalls from a voice that sounded like Joe Biden, telling Democrats not to vote in the state’s primary. So far, this technology has largely been used to manipulate.

At its core, though, this issue comes down to the continuing development of AI, the technology responsible for generating such media. So we decided to gauge opinion within our community, surveying teenagers in the Bay Area on whether or not society should continue the development of AI. Of the 97 responses gathered, 42.3% said society should not continue developing AI, while 57.7% said it should.

The close split between the two options points to how contentious this matter is among youth. But whatever you believe, the development of AI is likely something that cannot be stopped. With UN Trade and Development projecting the AI market to grow from $189 billion to $4.86 trillion by 2033, it’s safe to say that the hype around AI, and its development, will only increase.

As youth, we can find these problems somewhat intangible. But as the challenges of AI become increasingly prevalent, this is a question we should start considering now: how will the growing spread of deepfakes, which parallels the rapid and ever-developing nature of AI, affect how younger and future generations consume media and news, when distinguishing the real from the AI-generated is already a challenge?

One potential way to limit the negative impacts of deepfakes is through regulation, and some regulations are already in place through laws. Although regulations vary by state in the United States, California has enacted legislation restricting how deepfakes can be used, including:

  1. Use of Likeness: Digital Replica 
  2. Political Advertisements: Artificial Intelligence 
  3. Contracts Against Public Policy 
  4. Defending Democracy from Deepfake Deception Act of 2024
  5. Elections: Deceptive Media in Advertisements 
  6. Crimes: Distribution of Intimate Images 
  7. Sexually Explicit Digital Images 

Read more about each enacted law in the hyperlink.

Although deepfakes can still sometimes be used in a positive manner, they have largely emerged as a serious threat, significantly undermining people’s trust in the media. The “Pizzagate” scandal shows how social media has helped spread conspiracy theories; deepfakes, fueled by the continuing development of AI, will increasingly become the new source of misinformation in the media. To mitigate this in even the smallest way, we should push for legislation and broader discussion of the ethics and limitations that can be placed around deepfakes, just as we have progressively addressed AI as a whole.
