Can we regulate AI?
by Barnabas Szantho
Last year, a Google engineer told the world that he believed the company’s language model (a piece of AI software) had become sentient. As proof, he published excerpts from conversations he had with the AI. For anyone who has read these conversations and has ever met a child, it is obvious that the model is not sentient. A child will bombard you with hundreds of questions a day, of which “where did we come from?” is one of the simplest. The AI, on the other hand, never asked anything; it only answered the engineer’s questions diligently. All the moms sat back, relieved.
A few days ago, however, the most prominent researchers, CEOs, and university professors involved in AI development issued a one-sentence statement: the threat of extinction from AI should be treated on a par with the threat of nuclear war.
The 3 levels: AI, AGI and Superintelligence
AI is a technology, and technology is constantly evolving. Today’s AI can only do the tasks it is programmed to do: it can recommend music by recognizing our tastes, and it can answer questions based on huge databases. At the next stage, AI will be able to solve not just specific tasks, but any problem that a human could solve. It will become AGI, or Artificial General Intelligence. The evolutionary step after that is Artificial Superintelligence, when AI will be smarter than any human. Some researchers believe this could happen within a decade.
There should be no doubt that it is impossible to ban the development of AI. Mankind thinks in tribes, and these tribes, be they political groups, cities, companies or countries, compete with each other. The advantages that AI offers in this competition are too great for anyone to hold back. Nuclear weapons offer “advantages” in far fewer areas, yet their development has not stopped.
The dangers of “simple” AI
Even at its current level, AI already poses a myriad of dangers, from the potential for mass manipulation to the destruction of democratic systems. I wrote about this before here.
There is currently no law regulating it, so using AI for the wrong things is not itself a criminal offence. The EU is working on a law that would change that.
(Every element of the law assumes that humans are in control of the AI. This becomes a problem at the next level; more on that later.) The fundamental problem at the current level of AI technology is whether the majority of humanity will accept and follow the rules regulating it. The solution might be an international treaty like the one banning biological weapons. But the Biological Weapons Convention came into force in 1975, and four countries have still not ratified it, while eight have not signed it at all. In the case of AI, we don’t have even a fraction of that time.
Doomsday scenario: a malicious actor uses AI to seize power and enslave humanity.
AGI and the Alignment Problem
Once an AI system can be used in all areas, it will probably be connected to many more systems than it is today. This is where the alignment problem first arises. We give the AGI a task and trust it to perform it, but because it is just a machine, it may take a path to the solution that seems completely irrational, or even harmful, to a normal human. For example, its task might be to maximise the company’s profits, so it opens a new business producing cocaine. A solution to this problem could be to hardcode all existing laws into every AGI system. Even today’s systems have enough capacity that we could do this easily. But this brings us back to the primary problem: will all system owners be willing to do it?
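The profit-maximiser above can be caricatured in a few lines of Python. This is a toy sketch, not anyone’s real system: the plans, the profit numbers, and the `best_plan` function are all invented, and “hardcoding the law” is reduced to filtering which options the optimizer is allowed to consider.

```python
# Toy illustration of the alignment problem: an unconstrained optimizer
# happily picks the most profitable plan, however harmful, while a
# hardcoded rule set simply removes forbidden plans from consideration.
# All plans and profit figures below are invented for illustration.

plans = {
    "open a bakery": 1.0,
    "sell user data": 5.0,
    "produce cocaine": 50.0,  # highest profit, clearly illegal
}
legal = {"open a bakery"}  # the "hardcoded laws", reduced to an allowlist

def best_plan(plans, rules=None):
    """Pick the highest-profit plan, optionally restricted to legal ones."""
    allowed = {p: v for p, v in plans.items() if rules is None or p in rules}
    return max(allowed, key=allowed.get)

print(best_plan(plans))         # unconstrained: "produce cocaine"
print(best_plan(plans, legal))  # constrained: "open a bakery"
```

The point of the sketch is that the constraint has to be applied by whoever runs the optimizer, which is exactly the question the paragraph above ends on.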
The problem of primacy arises at the same technological level. The first country, person, or company to acquire such an AGI system will be able to build up an irrecoverable advantage over the other players, simply by using the system to further develop itself. Let’s hope that the first such AGI owner is well-intentioned. (We must compete in the development, right?)
Doomsday scenario: an AGI wipes out humanity as part of a sub-task. For example, it is tasked with preserving the diversity of animal life, which humans have always had a negative impact on. Or: a malicious actor with AGI gains an irrecoverable advantage and enslaves humanity.
Superintelligence, Cows and Cats
If something is smarter than us, we will not be able to control it. Period. Just as no pet can control us: we know what motivates the animal, so we know how to get it to do what we want. And in this setup, the pet needs us more than we need it. Our only chance of survival at the level of Superintelligence is to be likeable. Just as a cat stays with us because it is lovable.
Let’s hope our bodies aren’t better than machines at producing anything important. Otherwise, just think of what humanity does to dairy cows… It looks pretty bad from that perspective.
The next problem is that a Superintelligent system, whatever goal it is given or gives itself, will within milliseconds discover the logical problem that humanity can turn it off, and thus it could fail to achieve its goal. Therefore, one of its first steps will be to make sure that humanity cannot do that.
Chaplin said that no dictatorship can last forever, because dictators don’t live forever. That will not be the case with Superintelligence.
Self-awareness and will
In none of these scenarios does AI have self-awareness, because it is not necessary. It is pure logic that leads us to destruction. But what is self-awareness anyway? Many years ago a researcher made some little robots that looked like bugs. They “walked” boringly back and forth. Later he figured he could put light sensors on the robots and program them to follow the light. And suddenly the little bugs came to life. They followed the sun.
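The light-following bugs can be sketched in a few lines of Python. This is a hypothetical reconstruction, not the researcher’s actual robots: the one-dimensional world, the sensor model, and all the names are invented, just to show how a single “move toward the brighter side” rule produces the lifelike behaviour the anecdote describes.

```python
# A minimal sketch of the light-following "bug": the robot has two light
# sensors and one rule, move toward whichever sensor reads brighter.
# The world is a 1-D line; brightness falls off with distance to the light.

def sense(position, light):
    """Perceived brightness at a point: falls off with distance to the light."""
    return 1.0 / (1.0 + abs(light - position))

def step(position, light):
    """Move one unit toward the brighter side; stand still if they are equal."""
    left = sense(position - 1, light)
    right = sense(position + 1, light)
    if right > left:
        return position + 1
    if left > right:
        return position - 1
    return position

position, light = 0, 10
for _ in range(20):
    position = step(position, light)
# The bug walks over and settles at the light source (position 10).
```

Nothing in the loop is “alive”, yet watched from the outside, the bug appears to seek the sun, which is the point of the anecdote.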
Humans have essentially the same “codes” running: survive, find a mate; Maslow described them in detail. The only reason ChatGPT isn’t scary is that it doesn’t ask back. If it were programmed to do that, everyone would be freaked out by now.
And if all goes well?
Suppose humanity finds a solution to all these problems. Everyone involved in AI development adopts the principle that, for their own good, they must bake universal rules into their systems. Even in such a perfect future, where Superintelligence solves everything for us, we will have two problems. First, there will be no challenge or motivation left.
Second, the Superintelligence will create machines for our well-being that we will not be able to understand even with a lifetime’s work. And why even bother, right? We will look at them with religious awe and hope that they will not break.
It is very likely that for one last decade we will still live in the good old world we are used to now. Let’s take advantage of that, be nice to each other, do some good.
Right? No. This shouldn’t be it. I mean, we are terrible at these things. For more than 50 years, we have known that CO2 warms the atmosphere, yet for various reasons we keep pumping it in. For about a decade, we have known that click-optimized social media algorithms amplify extreme views, yet we do not change them. But if what the majority of people working in AI say is true, then for once we should not only think ahead but act ahead. If something is smarter than us, we will not be able to control it, nor understand it. And developing something knowing in advance that we won’t be able to control or understand it is not the right strategy.