The Federal Communications Commission (FCC) is pushing for a crackdown on robocalls that use AI-generated voices, particularly those impersonating individuals like President Biden.
The proposal aims to categorize the use of voice cloning technology in robocalls as fundamentally illegal, providing authorities with enhanced tools to prosecute the operators behind these fraudulent schemes.
An illustrative example is the recent surge of fake Biden calls in New Hampshire, where residents were falsely instructed not to vote. In response, the state's attorney general labeled these messages an "unlawful attempt to disrupt the New Hampshire Presidential Primary Election and to suppress New Hampshire voters."
Under existing laws, voter suppression is illegal, and the perpetrators, once identified, are likely to face charges related to this crime.
The crucial shift proposed by the FCC is to deem the use of voice cloning technology in automated calls illegal in itself. If implemented, this would streamline the process of charging robocallers: they could be prosecuted for the illegal use of AI-generated voices without authorities having to wait for an additional crime to be committed.
FCC Robocall Stance
FCC Chairwoman Jessica Rosenworcel emphasized the significance of recognizing the use of this emerging technology in robocalls as illegal under existing law. The move is designed to empower State Attorneys General offices across the country, giving them new tools to combat these scams and safeguard consumers.
The FCC had previously signaled its intent to address this issue when it first emerged, and now, steps are being taken to formalize the recognition of AI-generated voice cloning in robocalls as a violation of existing laws.
The law in this area is evolving rapidly as telephone, messaging, and generative voice technologies all advance. So don't be surprised if it isn't entirely clear what is and isn't illegal, or why some calls or scams that seem obviously illegal appear to operate with impunity. It's a work in progress.