Title: Combating Deepfakes

Description: We're seeing more deepfakes on social media and in the news. These are videos, audio, or images that have been manipulated using artificial intelligence. What's being done to combat these fakes? We find out from GAO's Brian Bothwell.

Related GAO work: GAO-24-107292, Science & Tech Spotlight: Combating Deepfakes

Released: March 2024

{Music}

[Brian Bothwell:] It's a constant cat-and-mouse game, with the need to keep developing more sophisticated detection techniques to counter the increasingly improved deepfake creators.

[Holly Hobbs:] Hi and welcome to GAO's Watchdog Report--your source for fact-based, nonpartisan news and information from the U.S. Government Accountability Office. I'm your host, Holly Hobbs. We're seeing more and more deepfakes on social media and in the news. These are videos, audio, or images that have been manipulated using artificial intelligence. Some are entertaining--for example, an AI-generated Rihanna cover of a Beyoncé hit. But many other examples are troubling. Deepfake technology has been used, for example, to influence elections and to create nonconsensual pornography. So, what's being done to combat these fakes? Our new report, a Science & Tech Spotlight, takes a look at these efforts. Joining us to discuss this report is GAO's Brian Bothwell, an expert on emerging AI technologies. Thanks for joining us.

[Brian Bothwell:] Thanks for having me, Holly.

[Holly Hobbs:] So, Brian, what is currently being done to combat deepfakes?

[Brian Bothwell:] So we looked at two main approaches to combating deepfakes. One is detection. Deepfakes typically have some kind of anomalies--unrealistic boundaries, unnatural colors, mouth movements in a video that don't match the audio. So detection technologies are trained on real and fake media, and then they use machine learning to look for those anomalies to detect the deepfakes. Now, the second approach is authentication.
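(The detection approach Brian describes--train on labeled real and fake media, then flag anomalies--can be sketched in miniature. This is an illustration only, not from the GAO report: real detectors use deep networks over raw pixels and audio, while this toy uses made-up anomaly scores and a nearest-centroid classifier.)

```python
# Toy sketch of anomaly-based deepfake detection (illustrative only).
# Feature vectors are hypothetical anomaly scores in [0, 1], higher = more
# anomalous: [boundary_irregularity, color_unnaturalness, audio_lip_mismatch].

def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def train(real_samples, fake_samples):
    """'Training' here just remembers the centroid of each labeled class."""
    return {"real": centroid(real_samples), "fake": centroid(fake_samples)}

def classify(model, sample):
    """Label a sample by whichever class centroid is closer."""
    return min(model, key=lambda label: distance(model[label], sample))

# Hand-made training data: real media scores low on anomalies, fakes high.
real = [[0.1, 0.2, 0.1], [0.2, 0.1, 0.2], [0.1, 0.1, 0.1]]
fake = [[0.8, 0.7, 0.9], [0.9, 0.8, 0.7], [0.7, 0.9, 0.8]]
model = train(real, fake)

print(classify(model, [0.15, 0.1, 0.2]))  # low anomaly scores -> real
print(classify(model, [0.85, 0.8, 0.9]))  # high anomaly scores -> fake
```

This also hints at the cat-and-mouse dynamic Brian mentions next: once the features a detector keys on are known, a creator can tune output to score low on exactly those features.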
The one you hear most often is digital watermarks. In this method, the creator of the original media embeds pixel or audio patterns. If the original is altered, these patterns are changed or removed, and that enables the creator to prove that the media has been changed.

[Holly Hobbs:] So how effective are these efforts?

[Brian Bothwell:] Well, deepfake detection is getting better, but so is deepfake creation. So it's a constant cat-and-mouse game, with the need to keep developing more sophisticated detection techniques to counter the increasingly improved deepfake creators. Also, once a particular detection method becomes known, deepfake creators can refine their models to evade it. Authentication technology seems promising, but it requires additional work upfront to insert a watermark or embed some other data at the time of creation.

[Holly Hobbs:] Is Congress considering any legislation, or are there any laws about deepfakes?

[Brian Bothwell:] Yeah. New legislation has been proposed in the past, and three bills have been introduced in the current Congress to address deepfakes. That legislation has typically contained requirements like disclosing that something is a deepfake, or bans on things like nonconsensual deepfake pornography, among other things. But I'm not aware of any current federal laws that are really specific to deepfakes. There could be some existing federal laws on privacy or defamation or intellectual property that might apply. But there's also the other consideration of the First Amendment: what limits are there on deepfake creators' freedom of speech? There's also other government involvement in deepfakes. In fact, DARPA, the Defense Advanced Research Projects Agency, is funding programs for developing deepfake detection models.

[Holly Hobbs:] So this is a fast-moving, evolving technology.
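(The watermarking idea above--embed a pattern at creation time, and let any later edit break it--can also be sketched in a few lines. This is an illustration only, not the report's method: the pattern, pixel values, and least-significant-bit scheme here are all hypothetical stand-ins for real watermarking techniques.)

```python
# Toy sketch of a fragile digital watermark (illustrative only).
# The creator overwrites the least-significant bit (LSB) of each pixel with
# a known pattern; any later edit to those pixels disturbs the bits, so a
# failed check shows the media was altered after creation.

PATTERN = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical creator-chosen bit pattern

def embed(pixels, pattern=PATTERN):
    """Return pixels with each LSB replaced by the next pattern bit."""
    return [(p & ~1) | pattern[i % len(pattern)]
            for i, p in enumerate(pixels)]

def verify(pixels, pattern=PATTERN):
    """True iff every pixel's LSB still matches the embedded pattern."""
    return all((p & 1) == pattern[i % len(pattern)]
               for i, p in enumerate(pixels))

original = [200, 34, 120, 77, 18, 255, 90, 61]   # 8-bit grayscale values
watermarked = embed(original)
print(verify(watermarked))   # True: untouched media authenticates

tampered = watermarked.copy()
tampered[3] = 128            # simulate an edit (e.g., a face swap)
print(verify(tampered))      # False: the alteration breaks the mark
```

Note the trade-off Brian raises: this only works if the watermark is embedded upfront, at creation time, and a real-world edit could coincidentally preserve some bits, which is why practical schemes spread redundant patterns across the media.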
What considerations or questions should policymakers and others think about as they try to combat this tech?

[Brian Bothwell:] We raised three in our report. One consideration is laws and regulations, which we were just talking about. Another question is how organizations could coordinate on deepfake detection and authentication technologies--should there be standards developed to evaluate these technologies? And the third, and maybe the most important, is what organizations should make decisions about identifying and blocking deepfakes, or taking action against deepfake creators if that's warranted. Is that government? Is that private companies? That's the question.

{Music}

[Holly Hobbs:] So, Brian told us that as deepfake detection methods improve, so do the efforts to evade them. And while Congress considers actions on deepfakes, our new work includes some key policy questions for them to consider. So, Brian, beyond the lie--beyond that these are fakes--is there a broader impact of deepfakes?

[Brian Bothwell:] Yeah, Holly, there's an old saying that a lie travels around the world while the truth is still pulling its boots on. After a lie has been told, it takes a lot of effort to debunk it. Now, not every deepfake may be damaging. You may remember the Tom Cruise deepfake from 2021. It showed a deepfake Cruise doing various things, including a magic trick. That video went viral quickly, but it didn't seem harmful. But deepfakes can cause serious harms. They've already been used to attempt to undermine public trust in the election process and to create nonconsensual pornography. They've been used to embarrass or blackmail people, to fuel unrest or violence, and they've been used for financial scams.

[Holly Hobbs:] Brian, last question--what's the bottom line of this report?

[Brian Bothwell:] So these technologies hold promise, but they have some limitations. The cat-and-mouse game is going to continue as AI methods become more sophisticated.
But beyond the technologies, there are important questions for policymakers to consider about laws and regulations, First Amendment considerations, enforcement mechanisms, and coordination among government and other organizations on these issues.

[Holly Hobbs:] That was Brian Bothwell talking about our new Science & Tech Spotlight on deepfakes. Thanks for your time, Brian.

[Brian Bothwell:] Thank you, Holly.

[Holly Hobbs:] And thank you for listening to the Watchdog Report. To hear more podcasts, subscribe to us on Apple Podcasts, Spotify, or wherever you listen, and make sure to leave a rating and review to let others know about the work we're doing. For more from the congressional watchdog, the U.S. Government Accountability Office, visit us at GAO.gov.