Artificial Intelligence Experts Warn That AI Puts Humans at Risk
A group of artificial intelligence (AI) industry leaders, including OpenAI CEO Sam Altman and Google DeepMind CEO Demis Hassabis, has signed a new statement spelling out the danger that the development of this technology can pose to society. It took them just 22 words in English: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
The sentence was published by the San Francisco-based nonprofit Center for AI Safety. The roughly 350 signatories, a mix of researchers, executives, and other public figures, include Geoffrey Hinton and Yoshua Bengio, two of the great pioneers of artificial intelligence.
This is not the first time the scientific community has warned about the dangers of developing artificial intelligence, a field currently booming thanks to the enormous success of the chatbot ChatGPT.
A couple of months ago, prominent executives, scientists, and humanists signed a longer open letter calling for a pause in the development of new AI systems until safety measures could be devised to prevent their harmful effects, which range from the destruction of jobs to the spread of misinformation.
Sam Altman, CEO of the company behind ChatGPT, has been warning for days that artificial intelligence should be treated with the same caution as nuclear technology. He made this clear last week during a brief visit to Madrid, where he advocated the creation of an international body, similar to the IAEA (the International Atomic Energy Agency), to oversee the development of AI.
Nor is this the first time the scientific community has pointed out that artificial intelligence could put humans at risk of extinction. Researchers at the University of Oxford and Google published a study late last year stating that machines, once they reach a level of sophistication they are still nowhere near, could end up competing with humans to meet their energy needs.
“In a world with infinite resources, what would happen would be extremely uncertain. In a world with finite resources (like ours), there is inevitable competition for these resources,” said Michael Cohen, a researcher at the University of Oxford, in an interview with Vice about the study. “If you compete with something capable of outsmarting you at every turn, you should not expect to win,” the expert concluded.