Over 600 individuals within and adjacent to the artificial intelligence (AI) field have signed an open letter urging governments to implement strict regulations on AI-generated impersonations, commonly known as deepfakes.
While the letter may not immediately lead to legislation, it signals a growing concern among experts regarding the societal threats posed by deepfake technology.
The letter emphasizes that deepfakes represent an escalating danger to society and advocates for governmental obligations throughout the supply chain to curb their proliferation. Notable signatories include
- Jaron Lanier,
- Frances Haugen,
- Stuart Russell,
- Andrew Yang,
- Marietje Schaake,
- Steven Pinker,
- Gary Marcus,
- Oren Etzioni,
- Yoshua Bengio,
along with hundreds of other academics from various disciplines.
Key statement from the letter:
Deepfakes are a growing threat to society, and governments must impose obligations throughout the supply chain to stop the proliferation of deepfakes. New laws should:
- Fully criminalize deepfake child pornography, even when only fictional children are depicted;
- Establish criminal penalties for anyone who knowingly creates or knowingly facilitates the spread of harmful deepfakes; and
- Require software developers and distributors to prevent their audio and visual products from creating harmful deepfakes, and to be held liable if their preventive measures are too easily circumvented.
If designed wisely, such laws could nurture socially responsible businesses, and would not need to be excessively burdensome.
This call for regulation aligns with ongoing debates in the European Union, where similar measures, after years of deliberation, were recently formally proposed.
The letter’s timing coincides with the EU’s deliberations and may also reflect frustration with the slow progress of the Kids Online Safety Act (KOSA), which lacks specific protections against deepfake abuses.
The potential misuse of deepfake technology, including its role in AI-generated scam calls that could sway elections or defraud individuals, has prompted the AI community to push for regulatory measures. The recent announcement of a government task force lacking a clear agenda may have further motivated these experts to voice their concerns and propose practical solutions.
While the impact of the letter remains uncertain, it provides legislators with insights into the global sentiment within the AI academic and development community. Whether this call for action prompts legislative responses in an election year with a divided Congress remains to be seen, but it underscores the urgency perceived by those immersed in the AI field.