The explosion of deepfakes has become the subject of legislation in Congress and of countless conversations among celebrities, tech companies and lawyers about how the issue should be addressed. On Wednesday, leaders from two Hollywood unions, an associate general counsel of Meta Platforms and a law professor offered perspectives on what federal regulation is needed, what tools exist to spot a deepfake, and the potential commercial benefits of the technology, such as letting fans meet their favorite celebrities virtually.
Here are four takeaways from Wednesday's discussion, hosted by Loyola Marymount University:
Celebrities Not the Only Deepfake Targets
Duncan Crabtree-Ireland, the national executive director and chief negotiator of SAG-AFTRA, recounted how last year someone created a deepfake of him when he was trying to talk to members about ratifying a new labor agreement after a months-long strike.
"Someone decided to create a deepfake of me and put it out on social media, arguing against the ratification of the very agreement that I had spent the last year and a half of my life, including months on picket lines, trying to get," Crabtree-Ireland said after showing the video to the audience. "Even watching it today annoys me."
Maureen Weston, a law professor at Pepperdine University, said one should not have to be a celebrity to have legal protections from deepfakes.
"You don't need to be famous. As far as I'm concerned, that's a deep injury and a misappropriation and a crime, almost," she said.
Current and Proposed Protections
Currently, a patchwork of state laws governs the use of someone's name, image and likeness: 32 states offer statutory protection, and the remainder rely on common law, Weston said. Some states' laws are more expansive than others, she said.
"Now that everything is nationwide and worldwide, how do you really get control of that?" she said, adding that federal protection would bring more uniformity among states.
Two federal proposals are in play: the No AI Fraud Act, introduced in December by Reps. María Elvira Salazar, R-Fla., and Madeleine Dean, D-Pa., and the No Fakes Act, backed by U.S. Sens. Chris Coons, D-Del., Amy Klobuchar, D-Minn., Marsha Blackburn, R-Tenn., and Thom Tillis, R-N.C., which has not yet been formally introduced.
Both measures would create causes of action for people whose likenesses are used in deepfakes without their consent.
"Frankly, either one of them would be a massive improvement over the status quo, which is nothing at the federal level for protection of image, likeness and voice," Crabtree-Ireland said.
Russell Hollander, the national executive director of the Directors Guild of America, said he and his fellow members want Congress to pass restrictions that will protect them "from third parties who steal and mutilate their films and TV programs without authorization of the copyright holder or the director."
"Without these rights, the resulting mutilation of their work can affect both their reputation and their ability to be hired in the future and to be properly compensated," he said.
Possible Positive Uses of Deepfakes
Hollander noted that there are scenarios where using artificial intelligence to recreate or modify someone's image and likeness can be beneficial. He said he knows of a movie in production for which the technology is being used to age and "de-age" an actor, with the actor's permission.
"Clearly it can be used correctly. The question is how you control it and how you regulate that, and I think that comes back a lot to the issues of consent and compensation," Hollander said.
Tearra Vaughn, an associate general counsel for Meta, said there are beneficial uses for this type of AI technology when individuals consent to their name, image and likeness being used by a third party. She emphasized that she was expressing her own views and not speaking on behalf of Meta during the discussion.
"For example, the authorized use of the [name, image and likeness] of an individual may be used to generate an AI form of them that the individual could use to interact with others virtually," she said. "There can also be financial benefits, such as for product endorsements. And then providing experiences like interacting with different celebrities in their AI-generated forms. In commercial instances, the individual can manage compensation in addition to consent."
Crabtree-Ireland agreed that there could be positive uses for deepfakes if there "is real, meaningful informed consent."
"It wasn't OK before generative AI for people to go out and take people's image, let's say, and slap it on a can of coffee and sell it without their permission, and it's even less OK now," he said. "And the reason why it's less OK now is because it's one thing to have your image put on something; it's another thing to have your voice, your face, your movement, your body used to portray as though you have actually done, said or aligned yourself with something in a way that implies a very personal connection to it."
Tools for Spotting Deepfakes
Vaughn said there are AI-powered tools to detect deepfakes.
"Tools that perform facial analysis, flicker detection and physiological detection in pixels, for example. There are tools that can detect the presence or absence of blood or flow in pixels to determine whether or not it is a deepfake," she said.
Vaughn said, however, that it is not currently possible to identify all AI-generated content.
Crabtree-Ireland said detection tools need to be broadly available and able to work in real time.
"Because the problem is when you have deepfakes or other things that get out there, they take on a life of their own, and if it's not something that can be addressed immediately, it's too late," he said. "If you take the examples in the political world in particular, you can't ever figure out who's seen that. You can't ever get them a corrected message. It just exists."
--Editing by Adam LoBelia.