Consumer Watchdog

Robocalls using voices generated by artificial intelligence are illegal, FCC rules

AI-generated voices on robocalls are banned effective immediately


Robocalls that contain voices generated by artificial intelligence are illegal effective immediately, the Federal Communications Commission announced Thursday.

The decision gives state attorneys general another way to go after scammers. The new rule targets voice-cloning technology, which con artists increasingly use to trick people into believing they’re talking with a relative, a celebrity or someone else whose voice they recognize, and then into taking some kind of action that leads to financial losses. Many regulators and lawmakers are particularly concerned about “deepfake” calls aimed at deceiving voters in a big election year.

The announcement follows a letter sent last month by a bipartisan group of 26 state attorneys general that supported categorizing calls with AI-created voices as illegal. “It is apparent that AI technologies will continue to both rapidly develop and permeate an already complex telecommunications ecosystem,” the letter said.

Also last month, Reps. Eric Sorensen (D-IL) and Juan Ciscomani (R-AZ) introduced a bill in Congress that would require robocallers to disclose when calls use voices created with artificial intelligence. The bill would also increase penalties for violators.

Robocalls have plagued consumers for more than 15 years, but the stakes are higher today because technology now allows people to be tricked into making instant payments from their bank accounts. U.S. consumers currently get about 2 billion scam and telemarketing calls per month. The problem has not significantly improved despite a 2009 federal rule that prohibits prerecorded robocalls without consent and a 2019 federal law, passed nearly unanimously (514-4), that requires phone providers to combat robocalls with software or other means.
