The “godfather of AI” left Google and joined the growing chorus of experts cautioning against rushing artificial intelligence into the world.
Why it matters: It’s important to pay attention when some of the brightest minds working on a technology warn that it could turn against us and destroy our institutions.
Leading the news: Geoffrey Hinton, a pioneering figure in machine learning, says he left Google so he could speak freely about the risks of releasing generative AI products too quickly.
“It is hard to see how you can prevent the bad actors from using it for bad things,” Hinton, 75, told The New York Times.
Axios asked AI experts — developers, researchers and regulators — to sketch their most plausible disaster fears. Their top 5:
1. Cyberattacks explode. The right prompts can now generate working malicious code, opening the door to more, bigger and increasingly varied cyberattacks.
Dario Amodei, CEO of Anthropic, which makes a ChatGPT rival, told Axios CEO Jim VandeHei that a massive expansion of such attacks is his biggest near-term worry.