The United States Federal Trade Commission (FTC) is set to strengthen its rules against the growing threat of deepfake technology, focusing on AI-driven impersonation of businesses and government agencies. The proposed rule, still subject to final language and public feedback, would make it illegal for generative AI (GenAI) platforms to provide products or services that they know or have reason to know are being used to harm consumers through impersonation.
FTC Chair Lina Khan emphasized the urgency of protecting consumers from AI-driven scams involving voice cloning and other impersonation tactics. The updated government and business impersonation rule grants the FTC the authority to initiate federal court cases directly, compelling scammers to return funds acquired through deceptive impersonation.
The final rule will take effect 30 days after publication in the Federal Register. The public is invited to provide feedback during the 60-day comment period that follows publication; the Federal Register notice will include details on how to submit comments.
> 2. Scams where fraudsters pose as the government are highly common. Last year Americans lost $2.7 billion to impersonator scams.
>
> The rule @FTC just finalized will let us levy penalties on these scammers and get back money for those defrauded. https://t.co/8ON0G63ZjL
>
> — Lina Khan (@linakhanFTC), February 15, 2024
For example, the rule would enable the FTC to directly seek monetary relief in federal court from scammers that:
- Use government seals or business logos when communicating with consumers by mail or online.
- Spoof government and business emails and web addresses, including spoofing “.gov” email addresses or using lookalike email addresses or websites that rely on misspellings of a company’s name.
- Falsely imply government or business affiliation by using terms that are known to be affiliated with a government agency or business (e.g., stating “I’m calling from the Clerk’s Office” to falsely imply affiliation with a court of law).
Notably, the move follows the Federal Communications Commission's recent ban on AI-generated robocalls, which addresses the use of deepfake voices in spam calls. That decision, prompted by a New Hampshire robocall campaign that used a deepfake of President Joe Biden's voice, interprets existing rules to cover AI-generated voices and underscores the need for further legislative action.
While victims of deepfakes can theoretically pursue legal recourse using existing options like copyright laws, likeness rights, or tort claims, the process is often cumbersome. In the absence of comprehensive federal legislation, various states have passed laws criminalizing deepfakes, highlighting the broader efforts to curb the misuse of AI technologies.