Artificial intelligence’s existential threat to humanity is put under the spotlight

AI may not be the dire existential threat that many claim it to be. According to a new study, Large Language Models (LLMs) can only follow instructions, cannot develop new skills on their own, and are inherently “controllable, predictable, and safe,” which is good news for meatbags like us.

The President of the United States announces to the public that the nation's defenses have been transferred to a new artificial intelligence system that controls the entire nuclear arsenal. With the push of a button, war is eliminated by a super-intelligent machine that makes no mistakes, can learn any new skill it needs, and grows stronger by the minute. It is effective to the point of infallibility.

As the President thanks the team of scientists who designed the AI and toasts a gathering of dignitaries, the AI suddenly begins issuing messages without warning. It makes drastic demands and threatens to destroy a major city if they are not obeyed immediately.

This sounds a lot like the nightmare scenarios we've heard about AI in recent years. If we don't do something (if it isn't already too late), AI will evolve on its own, become conscious, and make it clear that Homo sapiens has been reduced to the level of a pet – unless, of course, it decides to destroy humanity.

The strange thing is that this scenario is not from 2024, but from 1970. It is the plot of the sci-fi thriller Colossus: The Forbin Project, about a supercomputer that conquers the world with disheartening ease. It's a story idea that has been around since the first real computers were built in the 1940s, and it has been told over and over again in books, movies, television, and video games.

It has also been a very real fear among some of the most advanced thinkers in computer science since almost the same period, not to mention the magazines warning of computers taking over as early as 1961. For the past sixty years, experts have repeatedly predicted that computers would demonstrate human-level intelligence within five years and far exceed it within ten.

The thing to keep in mind is that these are not fears about some hypothetical future technology: AI has been around since at least the 1960s and has been used in many fields for decades. We tend to think of the technology as "new" because AI systems that process language and images have only recently become widely available. They are also more relatable examples of AI to most people than chess engines, autonomous flight systems, or diagnostic algorithms.

They have also instilled a fear of unemployment in many people (including journalists) who had previously been spared the threat of automation.

But the legitimate question remains: Does AI pose an existential threat? After more than half a century of false alarms, will we finally be under the thumb of a modern-day Colossus or HAL 9000? Will we be plugged into the Matrix?

According to researchers from the University of Bath and the Technical University of Darmstadt, the answer is no.

A study published as part of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024) states that artificial intelligence, and Large Language Models (LLMs) in particular, is inherently controllable, predictable, and safe.

“The widespread narrative that these types of AI are a threat to humanity is preventing the widespread adoption and development of these technologies and is also a distraction from the real problems we need to focus on,” said Dr Harish Tayyar Madabushi, a computer scientist at the University of Bath.

“As models get bigger, there has been a fear that they could solve new problems that we can’t currently predict, which raises the threat that these larger models could acquire dangerous capabilities such as reasoning and planning,” added Dr Tayyar Madabushi. “This has triggered a lot of discussion – for example, at the AI Safety Summit held at Bletchley Park last year, where we were asked to comment – but our work shows that the fear that a model will go off and do something completely unexpected, innovative and potentially dangerous is not valid.

“Concerns that LLMs pose an existential threat are not limited to non-experts and have been voiced by some leading AI researchers around the world.”

When these models are examined closely, by testing their ability to complete tasks they have not encountered before, it becomes clear that LLMs are very good at following instructions and show real proficiency in language, even when given only a few examples, such as when answering questions about social situations.

What they can’t do is go beyond those instructions or master new skills without explicit instruction. LLMs can exhibit some surprising behaviors, but these can always be traced back to their programming or instructions. In other words, they cannot develop into anything beyond what they were built for, so no god-like machines are on the way.
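To make the idea of "learning from a few examples" concrete, here is a minimal sketch, not taken from the study itself, of how a few-shot prompt is typically assembled: the model is shown a handful of worked examples and then asked to complete a new case in the same pattern. The example questions and the `build_few_shot_prompt` helper are illustrative assumptions, and the final call to an actual LLM API is only indicated in a comment.

```python
# Minimal illustration of few-shot "in-context learning":
# the model is not taught a new skill; it simply completes
# a pattern laid out in the prompt itself.

FEW_SHOT_EXAMPLES = [
    ("Anna waved at Ben, but he walked past without looking. How might Anna feel?",
     "Anna probably feels ignored or a little hurt."),
    ("Sam shared his umbrella with a stranger in the rain. How might the stranger feel?",
     "The stranger likely feels grateful and pleasantly surprised."),
]

NEW_QUESTION = "Mia forgot her friend's birthday. How might her friend feel?"


def build_few_shot_prompt(examples, question):
    """Assemble a few worked examples plus a new question into one prompt string."""
    parts = [f"Q: {q}\nA: {a}" for q, a in examples]
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)


if __name__ == "__main__":
    prompt = build_few_shot_prompt(FEW_SHOT_EXAMPLES, NEW_QUESTION)
    print(prompt)
    # In practice this prompt would be sent to an LLM through whatever API
    # you use (a hypothetical generate(prompt) call); the model answers by
    # continuing the pattern, without being retrained or gaining new skills.
```

The point of the sketch is simply that everything the model does here is driven by the examples placed in the prompt, which is the kind of instruction-following behavior the researchers describe.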

But the team stresses that this does not mean AI poses no threat at all. These systems already have remarkable capabilities, and they will become even more sophisticated in the very near future. They have the frightening potential to manipulate information, create fake news, commit outright fraud, spread falsehoods even unintentionally, be misused as a cheap substitute for human judgment, and suppress the truth.

As always, the danger lies not in the machines but in the people who program and control them. Whether malicious or incompetent, it is not the computers we need to worry about. It is the people behind them.

Dr. Tayyar Madabushi explains the team's work in the video below.


Source: University of Bath
