Google DeepMind, the AI research and development division behind GenAI models like Gemini, has launched a new organization called AI Safety and Alignment. This initiative is aimed at addressing concerns related to the potential misuse of GenAI models, particularly in generating deceptive content and spreading misinformation.
The move comes in response to growing scrutiny from policymakers and the public over how easily GenAI tools can be exploited to produce disinformation and misleading content. Google, emphasizing its commitment to AI safety, is investing in this initiative and redirecting efforts toward ensuring responsible and transparent use of its advanced AI models.
The newly formed AI Safety and Alignment organization will consist of existing teams working on AI safety, with the addition of specialized cohorts of GenAI researchers and engineers. Anca Dragan, formerly a Waymo staff research scientist and a UC Berkeley professor of computer science, will lead a new team within the organization, focusing on safety around artificial general intelligence (AGI).
![Google DeepMind establishes AI Safety and Alignment organization image 143](https://i0.wp.com/nosisnews.com/wp-content/uploads/2024/02/image-143.png?resize=1024%2C462&ssl=1)
Google’s move parallels OpenAI’s Superalignment division, formed in July 2023, underscoring the industry’s collective effort to address the ethical and safety dimensions of advanced AI technologies. The AI Safety and Alignment organization aims to develop concrete safeguards for Google’s existing and future GenAI models, ensuring responsible usage across a range of applications.
Despite the company’s commitment to transparency and safety, skepticism around GenAI tools remains high, especially concerning deepfakes and misinformation. Recent polls indicate public concerns about the potential spread of misleading information through AI-generated content. Google’s efforts to invest in AI safety initiatives reflect the industry’s recognition of the need for responsible AI development and deployment.
As GenAI tools continue to evolve and take on a significant role in a growing range of applications, addressing safety concerns becomes imperative for both industry players and regulators. The success of initiatives like AI Safety and Alignment will be closely watched as a measure of how effectively the risks associated with advanced AI technologies can be mitigated.