Three critical dimensions for understanding AI impact
Development speed, ultimate capability, and adoption rate interact to shape AI's societal impact. This is the first blog post in a series about how AI affects knowledge, knowledge work, and education.
Yesterday I missed what must have been a really interesting workshop at the Royal Swedish Academy of Engineering Sciences due to prior engagements. The topic was how AI affects knowledge work and the role of knowledge. Luckily, I had the opportunity to talk to two of the organizers for an hour before the workshop instead.
I’ve spent a lot of time trying to understand how AI affects knowledge, knowledge work and education. The discussions yesterday inspired me to summarise (parts of) my thoughts in a blog post – which turned into multiple posts.
This first blog post contains thoughts on three aspects that I have found crucial in forecasting the impacts of AI.
Delimitation: focus on LLMs
The term artificial intelligence can mean many things. In this blog post, as in most of the work I do, I focus on language-based AI – which in practice means large language models (LLMs), including their close relatives multi-modal language models and reasoning language models. I also include scaffolding put around LLMs, such as terminal agents (like Claude Code).
This delimitation is based on LLMs being general-purpose AI. I prefer this narrower scope over generative AI in general – while creating images, videos and music is fun and sometimes useful, only LLMs can control a computer. That’s a huge difference.
There are other types of general-purpose AI emerging, of which humanoid robots are the most obvious, but they are far behind LLMs in current capacity and maturity (though they could become really useful in just five years).
I consider machine learning (ML) in general to be a general-purpose technology too, but LLMs are by far the most impactful application. Most ML applications are single-purpose and take a lot of time and expertise to develop. LLM use – as well as LLM capacity – is growing at a staggering pace.
Another thing that makes LLMs particularly interesting is their capability of mimicking thinking – one of the most defining traits of us as humans. Language is our primary way of expressing and organising thoughts, and as machines become better at using language, they increasingly give the appearance of thinking.
Three aspects that frame forecasts of AI impact
Three closely related aspects play a big role in forecasting the impact of AI. (Again: by AI I mean LLMs unless I say otherwise.)
The speed of progress: High speed makes adaptation difficult and increases the likelihood of disruptive effects. It also increases uncertainty in predictions.
How far AI capacity will advance: Sooner or later, AI capacity will level out. It matters a lot whether this happens at a level where AI “merely” makes meaningful contributions to the work of human experts, or can cognitively outperform basically all groups of human experts – or somewhere in between.
Diffusion: Fast adoption means better chances of augmenting ourselves and adapting to new technological capabilities. Low adoption rates increase the risk of extreme power concentration, AI rendering (groups of) humans economically worthless, and disruptions to society.
The development speed
The most important thing to understand about AI is how quickly the technology is advancing.
GPT-2, the leading LLM in 2019, could barely count to ten. In summer 2025, two different LLMs achieved gold medal results in the International Mathematical Olympiad. In September 2025, AI outperformed all humans in competitive coding. We’ve seen LLMs make meaningful contributions to scientific research, and in January 2026 (possibly December 2025) we started seeing AI systems solve previously unsolved Erdős problems – difficult mathematical problems.
Arguably the best quantitative measure of AI progress is METR’s tracking of frontier models’ ability to complete long software tasks. It shows that the length of tasks AIs can complete with a 50 percent success rate is doubling every seven months (possibly every four months since reasoning models were introduced). Claude Opus 4.5, released in late November 2025, reaches close to five hours on this scale, which means that this benchmark, like many others, is close to saturated. METR has few tasks that take human experts more than five hours to compare with, which makes results at this end of the scale unreliable.
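To make the compounding concrete, here is a minimal back-of-the-envelope sketch in Python of what such a doubling trend implies. The roughly five-hour starting point and the two doubling periods come from the figures above; the clean exponential form and all names in the code are my own simplification for illustration, not METR’s actual methodology.

```python
# Back-of-the-envelope extrapolation of the task-horizon trend described
# above. The ~5-hour starting point and the doubling periods (7 months
# historically, possibly 4 months since reasoning models) come from the
# post; the pure exponential form is a simplifying assumption.

def task_horizon(months_ahead: float,
                 start_hours: float = 5.0,
                 doubling_months: float = 7.0) -> float:
    """Projected 50%-success task horizon, assuming clean exponential growth."""
    return start_hours * 2 ** (months_ahead / doubling_months)

if __name__ == "__main__":
    for months in (0, 7, 14, 21, 24):
        slow = task_horizon(months, doubling_months=7.0)
        fast = task_horizon(months, doubling_months=4.0)
        print(f"+{months:2d} months: {slow:6.1f} h (7-month doubling) | "
              f"{fast:7.1f} h (4-month doubling)")
```

Under these assumptions, the horizon passes a full work week (about 40 hours, three doublings from five hours) in roughly 21 months with seven-month doubling, or in about a year with four-month doubling.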
All of this has happened in the space of less than seven years. From not being able to count to ten, to solving Erdős problems and contributing to scientific research.
It is far from certain that advancements will continue at the same pace for another seven years, or even two years. But considering the potentially huge consequences, it should not be ruled out unless we have very solid reasons.
We don’t.
This brings a lot of uncertainty to forecasting the impacts of AI.
How far will AI advance?
Tightly coupled to the question of development speed is how far AI will advance. That even Nobel laureates claim to have meaningful conversations with chatbots – within their own fields of expertise – means that we have passed the point where LLMs can be dismissed as mere stochastic parrots.
While LLMs are impressive in many cognitive respects, they are also surprisingly stupid in others. With each new generation of frontier models these stupidities decrease, and any particular type of mistake is likely to be eradicated as soon as the AI labs turn their attention to it. But the ocean of stupidities is vast, and it is unclear whether the stupidities will become so rare that LLMs perform any cognitive task as well as an average human.
At the other end of the scale, we have the top achievements of AI. LLMs are already better than any human in their breadth of knowledge – a frontier model knows more languages and is a decent expert in more areas than any single human. But there are very few areas where LLMs outperform the top human experts.1
One exception to this is competitive coding, where AI now outperforms all human beings on this planet. This is not a coincidence: coding is an area where results can be verified, which makes training AI on coding easier. It is also an area with high economic returns – including for the AI labs themselves – which makes it easier to invest in increasing AI coding capabilities. Finally, there is a very tight improvement loop, since the people developing AI largely write code themselves and are early adopters of AI-assisted development.
It is an open question whether we will see AI outperform top humans in many different fields. The answer matters.
If AI performs at or above human expert level in a few fields, these will be heavily affected – but the system containing these fields will accommodate the changes.
If AI performs at or above human expert level in all or many fields, the encompassing system will cease to exist, either by being deliberately replaced or by breaking.
While pacing plays a big role in these effects (hence the previous section), this particular aspect of analysing AI impacts focuses on the hypothetical end state, or a state that persists for a long time.
The question that this aspect asks is how large a share of human cognitive work we should expect AI to outperform us in. One part of that question is whether AI (eventually) will simply outperform humans on cognitive tasks, just as humans outperform chimps.2
Diffusion
The pace of AI progress is staggering, but it matters less in isolation than in relation to how quickly the technology spreads through society. If society adapts quickly to new AI capabilities, rapid technological advancement becomes less disruptive – society changes more quickly as measured in months and years, but the differences between different parts of the system are smaller.
In many ways, AI is being adopted much faster than previous powerful general-purpose technologies like electricity, personal computers, or the internet. In many parts of the world, half of the population describe themselves as AI users three years after the launch of ChatGPT. The rapid adoption can be explained by several factors:
The infrastructure is already in place – most people already have computers and internet access.
The up-front cost of the technology is paid by the AI companies creating the LLMs.
LLMs have a much lower technical threshold than personal computers or the internet. Starting to use an AI-powered chatbot is as easy as sending a text message.
There are immediate visible gains from AI-powered chatbots, or at the very least perceived gains.
However, “adoption” is not an on-off phenomenon. There’s a vast difference between using ChatGPT as a search engine and restructuring the workflow when stages that previously took weeks can be completed in minutes. Effective use of AI requires good tools and often considerable skill development. There is also a temporal aspect: as long as AI advances rapidly, adoption must be an ongoing process, measured relative to frontier AI capabilities.
On top of this, diffusion happens at multiple levels: individuals learn to use AI tools, organizations integrate AI into their workflows, societies adapt their institutions and regulations. That many employees use AI does not necessarily mean that the workplace has adopted AI in its processes. One or two levels up, adapting to AI may mean that the business or sector changes radically, and that companies change form or disappear altogether.
Fast diffusion helps society keep pace with technological change and reduces certain risks. Slow diffusion of AI can create extreme power concentration – where those who have access to and competence with frontier AI gain enormous advantages over those who don’t.
When advancement significantly outpaces adoption, we can also get a “capability overhang” – a gap between what AI can do and what most people actually use it for, or are even aware of. There is currently a significant AI capability overhang, which risks causing disruptive avalanches that could harm society. In the extreme case, we risk something akin to a “technology sonic boom”, where institutions are not forced into painfully rapid changes, but are simply rendered irrelevant.
Diffusion can also be deliberately accelerated or decelerated through various means: culture, attitudes, legislation, education, economic incentives, or infrastructure investments. In extreme cases, AI labs or leading countries could halt diffusion by keeping new AI models to themselves. Diffusion is thus, in contrast to development speed and end state, something that individual countries and organizations can affect.
High uncertainty calls for scenario planning
These three aspects – development speed, ultimate capability, and diffusion rate – interact to shape AI’s impact on society. They are not the only factors, but I find that they overshadow the others.
The uncertainty in these dimensions, and especially in their combinations, makes scenario-based thinking essential for planning education, policy, and institutional adaptation. This will be the topic of the next post.
Have any thoughts on these three aspects? Please share in a comment!
1. In contrast, there are many examples of “narrow AI” outperforming top humans at things like playing chess or identifying cancer cells. This is not LLM-based AI and not general-purpose AI, and it is of less interest in this context.
2. I write this knowing that there are some cognitive tasks where chimps outperform humans. There may be some niches where humans outperform a generally superior AI, and it will matter about as much as chimps being better than humans at photographic memory.