THE LEGISLATURE is taking an interest in guarding against the deceptive use of artificial intelligence in electioneering, and a major technology industry group pitched lawmakers Tuesday on the potential for AI to help in the fight against misinformation.

AI-created images or videos that depict situations, actions, or speech that never really happened, commonly known as deepfakes, have proliferated as AI technology has grown more sophisticated in recent years, and policymakers have taken notice. The issue has taken on a sense of urgency as the 2024 presidential election campaign ramps up and the potential for AI to be used to deceive voters has become clearer.

The Joint Committee on Election Laws took testimony Tuesday on a bill filed by Sen. Barry Finegold that would ban the distribution of any “synthetic media message that the person knows or reasonably should have known is a deceptive or fraudulent deepfake depicting a candidate or political party” within 90 days of an election involving that candidate or party unless it is clearly labeled as having been “manipulated or generated by artificial intelligence.”

Chris Gilrein, northeast executive director for TechNet, said the 90-plus companies his organization represents (including Apple, Google, Meta and OpenAI) “strongly support clear disclosure when AI is used in election communications.” But he also wanted lawmakers to give thought to including an “explicit allowance for the sharing of, potentially, media that might be considered deceptive for the purposes of fraud detection and identity verification.”

“We have a member of TechNet that was able to identify when that President [Joe] Biden voice deepfake call went out in New Hampshire, their technology was deployed and was able to immediately identify that yes, it was in fact a deepfake, and identify with a level of certainty what tool was likely used to produce it,” Gilrein said, referring to a robocall from January. He added, “We want to make certain allowances for using AI to combat misinformation.”

Finegold’s bill would also allow candidates targeted by a deceptive or fraudulent deepfake to pursue civil action against those who create or post it, with damages of up to $10,000 per incident. Gilrein said TechNet wants to make sure that any legislation Massachusetts adopts makes clear that the responsibility for disclosing the use of AI “is on the sponsor or the creator of that communication, and not on an intermediary like the platform that it is posted on or the tool that might have been used to create it.”

Craig Holman, a lobbyist for Public Citizen, the Washington, D.C.-based consumer advocacy organization founded by Ralph Nader, told the committee that 13 other states have already adopted legislation similar to Finegold’s. Holman said that 2024 “is shaping up to be the first very serious deepfake election we’ve ever seen” and pointed to an incident from Chicago’s 2023 mayoral election as an example of what could be coming.

“Artificial intelligence has been around for a while. But only this year, this election cycle, we’ve seen startling new advances where artificial intelligence can depict a candidate saying or doing something that they never did. And it’s almost impossible to tell the difference between what’s real and what is just entirely computer fabricated,” Holman said.

He called Finegold’s bill “transparency legislation” and “very respectful of First Amendment rights.”

“It’s not a ban. It exempts news media, it exempts broadcasters and even social media platforms that make a reasonable effort to discern whether a communication is a deepfake or not. And it provides the targeted candidate with injunctive relief to try to stop further dissemination of that type of deepfake ad,” Holman said. “These types of transparency legislation are widely supported by the public. The Data for Progress shows about 80 percent of Americans support this type of transparency — Republicans, Democrats, independents alike.”

The House has already embraced a similar idea. Representatives adopted a mega-amendment to their fiscal 2025 budget last month that would require certain audio and visual political advertisements to disclose whether they were produced using AI.

The House language would require all audio or video communications intended to influence voters and paid for by a political candidate, campaign, party, or political action committee to include the words “contains content generated by AI” at the beginning and end, with similar disclosures required during the portions of ads containing synthetic media. It calls for a fine of $1,000 for each violation.

Last month, Attorney General Andrea Campbell issued an advisory to AI developers, suppliers and users to highlight their respective obligations under Massachusetts’s consumer protection laws, calling attention to the technology’s “tremendous potential benefits to society” but also its dangers.

“However, AI systems have already been shown to pose serious risks to consumers, including bias, lack of transparency or explainability, implications for data privacy, and more. Despite these risks, businesses and consumers are rapidly adopting and using AI systems which now impact virtually all aspects of life,” Campbell’s office said.