The hard problems for superintelligence
Superintelligent AI is often likened to a genie that grants any wish. As AI becomes more capable, we will get better at telling which problems melt away under intelligence, and which barely budge.
Artificial superintelligence (ASI) is godlike, in the sense that it transcends human intelligence. But there are gods and there are gods. Some are omnipotent, reshaping the entire world at will. Others, like the Greek or Norse gods, have immense but limited powers. Godlike power comes in degrees, and as AI becomes more advanced, we will learn what that scale looks like.
For this essay, I’ll define intelligence as the ability to solve problems. (An oversimplification, but useful for the argument.) But there are many different types of problems, a fact that even quite intelligent people sometimes miss. (I can't help thinking that some high-IQ people overrate the power of the brainy type of intelligence.) The kind of artificial intelligence being developed now has a much better shot at proving new mathematical theorems than at making peace between Israel and Palestine, for example. It is better at optimizing code, finding new battery materials and identifying diseases, and worse at getting people to actually take their medicine or mobilizing political action on climate.
Having more intelligence at hand is arguably useful regardless of the problem you're trying to solve, but in some cases it only makes a negligible difference.
When Nathan Labenz of The Cognitive Revolution was interviewed by Liron Shapira on Doom Debates, Labenz asked whether Liron thought an ASI would have been able to change the outcome of the 2024 US presidential election. The answer was "Of course it would!" And while that's true for an omnipotent ASI, it is much less obvious for a Greek-god ASI that might be thwarted by cunning humans, unexpected events, or simply the inertia of the crowd.
Here are some ideas for types of problems that will be difficult to handle, even with advanced (but not omnipotent) AI:
Coordination problems, such as tackling climate change or getting the US and China to agree on AI safety.
Habit changes, such as altering health habits and consumption patterns. A healthy diet, regular exercise and quitting smoking aren't achieved by information alone.
Value changes, such as those involved in religious or cultural conflicts. Values are often starting points, not conclusions, and are seldom changed by arguments.
Distribution problems, such as making people give up their privileges. We don't need to invent new technology to end starvation and poverty.
From a safety perspective, it is important to track what kind of intelligence AI becomes superhuman at. Coding, mathematics and natural sciences get a lot of attention, but there is increasing evidence that AI is getting good at social intelligence and manipulation. If AI becomes superhuman at persuasion and manipulation, then even the ‘hard problems’ above may no longer be beyond reach — but in ways that could be deeply unsafe.
There is of course a risk that the magic wand of ASI would be used to solve problems in ways that benefit the wielder but harm the world at large. The question is not only what AI can solve, but whose problems it ends up solving. Even narrow domains, like AI improving its own code, could bootstrap broader capabilities much faster than expected. From a risk perspective, it makes sense to build advanced AI that is strong in areas that are difficult to misuse or lose control over.
What are your thoughts? Which problems do you think AI will prove surprisingly good at – and which will remain stubbornly human for some time?
Very interesting post! Nice analogy with the Norse and Greek gods, who often caused more problems than they solved.
Behavioral scientists, athletes and diplomats will be the hot professions of the future. While the AI god Krynos helps us colonize Mars. Math gets dropped as a school subject and replaced by the new subject of human geography…