Tuesday's panel at the 10th annual TechTainment conference, organized by the Los Angeles Intellectual Property Law Association, focused on federal artificial intelligence regulation in general, with much of the conversation centering on the Nurture Originals, Foster Art, and Keep Entertainment Safe Act of 2024, or the No Fakes Act, currently under consideration in Congress.
Many speakers noted that with so many needles to thread and considerations to balance, it's difficult to discern what any federal law on deepfake technology will ultimately look like. But they agreed that any such law should focus on punishing people who misuse AI rather than limiting the technology itself.
"When we are engaging in legislation or policymaking, the point is it's not the technology that's bad," said Linda Quigley, senior copyright adviser for the Office of Policy and International Affairs with the USPTO. "We're at America's innovation agency, so we like to promote innovation, promote the technology, but we have to understand sometimes there are bad actors out there. They use it for bad things."
The panel also included moderator Konrad Trope of Trope Law Group PC; Andrew Foglia, deputy director of policy and international affairs with the U.S. Copyright Office; Jenni Katzman, senior director of government affairs for Microsoft; Jeff Martin of the Office of Policy and International Affairs for the USPTO; and Joshua Simmons of Kirkland & Ellis LLP.
The panel discussion returned many times to the No Fakes Act, a bipartisan bill reintroduced in August to combat the creation and distribution of digital replicas of people without their consent, and it offered insight into which issues matter most to the technology industry, the IP legal world and two key government agencies.
Here's a summary of some of the panelists' main points.
Microsoft Withholding Support for Now
Katzman said her company wants to support the No Fakes Act, but that several topics are still under discussion, and Microsoft is not the only company that has yet to back the bill.
"There are still quite a few debates remaining because it doesn't have much tech company support, and that includes small tech, medium tech and big tech," Katzman said.
She added, "My company, along with some other companies, would still like to be supportive of the bill. So I think there is still room to get to a 'yes' for a number of entities that are still not yet endorsers of the bill."
Katzman also discussed how the bill or any similar one would interact with Section 230 of the Communications Decency Act of 1996, which protects online platforms from liability based on content posted by their users. Section 230 has a carve-out for IP, she noted.
"The question with this new federal law is, is it an IP law?" Katzman said. "And if it is an IP law, would it fit into that carve-out and the safe harbor would not apply? Another suggested method that people have raised is instead of identifying it as an IP law, simply on its face carve it out from Section 230 and say Section 230 does not apply to this law, so there's the two options for getting it out of that arena."
Katzman also stressed that "[y]ou treat the actor, you don't create liability around a tool. And I think that's where the issues of liability and the enforcement issues are a little bit concerning with some of the way we are seeing bills, and that's not just No Fakes, it's the way we are seeing some of the [laws] in the states play out."
"And so those are sort of the key considerations that I know that a lot of the tech companies are weighing in on, you know, ensuring that liability is focused on bad actors rather than tool creators," Katzman added. "Especially when they just have the tools out there and don't have knowledge about how those tools are being used by a bad actor."
Near the end of the discussion, Katzman told the crowd it is paramount for the government to get any deepfake or AI regulation correct.
"If we don't get it right, it means that we've had this small lead in AI that goes and evaporates, and that allows other places to go and lead," Katzman said. "And that's why you hear so much talk about China in this space. And that's why there's all this concern, because if China leads in this space, that's a pretty dangerous thing in a lot of ways. Maybe not for people in this room but from a national security perspective, that's incredibly dangerous if we don't get it right."
USPTO Seeks Balance
Quigley told the crowd the No Fakes Act does have First Amendment protections, but that those protections do not apply to sexually explicit material.
"There's been some debate over that, whether or not that's problematic," she said.
Quigley also said that the "reason these bills are coming up now is currently there's a patchwork of state and some federal law that covers this type of behavior, but it is inconsistent. It depends what state you're in, what sort of rights you have, and so the thought was to have a federal law that would even that space out. And this is where the preemption question comes in, because there are many state laws on this issue. And if the state gives greater rights, and the federal law preempts, you might be taking rights away that people have in a particular state."
She added that it is possible preemption "could [take] place after this date, so you're not actually taking away that which already exists, you're only discouraging future legislation in the states."
When Katzman commented that having state laws preempt a federal law would be "peculiar," Quigley responded, "I wouldn't look at it as state law preempting federal law. I think that the plaintiff would have the rights under both; they could pursue both causes of action."
Martin said the office has put significant effort into outreach with roundtable events, and that "[w]e have gleaned from our stakeholders that any legislation needs to be very narrowly tailored to fit the situation to allow for these beneficial innovations that can come up."
Copyright Office Says States Should Be OK to Go Further
Foglia reminded the crowd about his office's release in July of a report on AI deepfakes, which found "an urgent need" for new federal legislation on the issue. Among the report's recommendations is that federal law set a basic standard but that states be allowed to go beyond it, which would likely matter in a state like California, where protecting name, image and likeness is a priority in the entertainment industry.
"If California wants to go above the standard, in the office's view, that's OK," he said.
Foglia also said his office thinks that First Amendment considerations are very important when it comes to regulating deepfakes, such as for someone creating a parody video.
"Different stakeholders have different views on that," he said.
Foglia also said that while generative AI is new, harmful impersonations that raise legal questions are not.
"There's always been harms from different kinds of impersonation and replicas," he said. "You could previously Photoshop an image or hire someone to mimic or imitate someone's voice. It just took a special skill and maybe took a lot of time. Which generative AI, increasingly, it does not."
After Trope mentioned the 1988 Midler v. Ford Motor Co. Ninth Circuit ruling, which held that singer Bette Midler had a right of publicity claim when the carmaker used a soundalike performer in an advertisement, Foglia said that some states protect a voice impersonation while others don't, and that some protect name, image and likeness rights while others do not.
"That's the gap that No Fakes is seeking to fill," he said.
Legal Perspective: 'Please Preempt More'
Simmons said repeatedly that he is in favor of a federal law on deepfakes that preempts all state law because "having one law is really useful. So, in copyright, we have a preemption statute, and it says there are no state laws that can do what copyright does. If it's a copyrightable work, and you're talking about copies or derivative works or distribution, that's federal law."
He also said, "I, as a practitioner, I'm like, please preempt more even though there are rights that would potentially be taken away or not exist anymore, just for the ability to litigate them more straightforwardly and have the law be more uniform."
Simmons also commented that legislating AI can be a slippery slope with unclear lines.
"There was a lot of interest a few years ago on generative AI, which got people into a conversation we've been having for decades about artificial intelligence," he said. "But by lumping it all together you end up in a situation where you know, 'Oh, we should ban AI?' Or, 'You shouldn't be able to use AI.' And it's like, 'Well, which kind of AI?'"
"We see this a lot with the courts now, where some of them say you're not allowed to use AI, or you have to disclose the use of AI in court cases," he continued. "Well, Google's an AI. Westlaw's an AI. Am I not supposed to use those? Am I supposed to disclose, 'I will be using Westlaw when I write my briefs in court?'"
--Editing by Linda Voorhis.