A recent paper titled “Computing Power and the Governance of Artificial Intelligence” suggests that combating the misuse of AI may require continuously developing more powerful artificial intelligence and placing it under government control. Researchers from OpenAI, Cambridge, Oxford, and several other institutions explore the challenges of governing artificial intelligence use and development.
The paper’s central argument revolves around controlling access to the hardware needed to train and run advanced artificial intelligence models. The researchers propose that policymakers use “compute” – the foundational hardware, such as GPUs and CPUs, on which AI systems are trained and run – as a lever to regulate artificial intelligence effectively.
This approach aims to improve regulatory visibility into AI development, steer compute toward beneficial outcomes, and enforce restrictions against irresponsible or malicious artificial intelligence development and usage.
Governments worldwide already exercise some form of “compute governance,” with restrictions on the sale of certain GPU models used for artificial intelligence training. The paper suggests that limiting the potential harm from artificial intelligence would require manufacturers to incorporate “kill switches” into hardware, enabling remote enforcement actions such as shutting down illegal artificial intelligence training centers.
However, the researchers acknowledge the risks of naïve or poorly scoped compute governance, including concerns about privacy, economic impacts, and the centralization of power. They also highlight challenges posed by recent advances in communication-efficient training, which could lead to more decentralized compute usage, making it harder for governments to locate and monitor hardware associated with illegal artificial intelligence training efforts.
In conclusion, the researchers suggest that an arms race against the illicit use of artificial intelligence may be inevitable, and they emphasize the need for society to deploy powerful, governable compute wisely in order to develop defenses against the emerging risks posed by ungovernable compute.