What drives me
This post explains why I left my job to work full-time with AI. It is a translation of a post in Swedish from June 2024, still relevant in September 2025.
The original post, in Swedish, can be found here.
I’ve been passionate about educational issues since middle school, and I consider education to be one of the most important things in a modern society. I have been interested in AI since 2018 and have followed its development closely since December 2022. In other words, I think both education and AI are very important issues.
The best opportunity to influence AI in Swedish schools
From spring 2023 to summer 2024, I led the Swedish National Agency for Education’s (Skolverket) work on issues related to AI and education. This may sound like bragging, but I would argue that the group I led started from more or less nothing and took Skolverket to a place where the agency was considered proactive in the fast-moving field of AI.
In little over a year, we reached an estimated 20,000 educators through webinars, lectures, conferences, and videos. We met with around 30 agencies and national organizations that also work with AI and education, helping all of us find collaborations and get an overview of what work is being done – and not being done. And we also collaborated with organizations outside of Sweden.
When we started, Skolverket was at most being mentioned in a subordinate clause when AI and schools were discussed in the media. After a year, we were instead regularly featured in newspapers, radio, and TV to explain what AI means for education. We started work on strategic foresight about AI at the agency, to gain better insight into what the development might mean for schools and Skolverket in the long term. And we succeeded in engaging managers at all levels in the question of what AI development means for schools and education.
In short, the group I led did really well. It’s reasonable to say that the job I had offered the best opportunity to influence how AI lands in Swedish schools. This is an enormously important issue. Not just for schools but for society, since education will play a major role in how society can handle the AI transformation.
Yet I chose to resign, without even having another job to go to. Why?
AI poses great risks
My assessment is that AI development will likely bring enormous changes to society, in every country. The information landscape will change, and all content that is not verified could have an AI as its sender – from blog posts and Instagram images to phone calls and online friends. The job market will be disrupted, quite possibly so frequently that it barely has time to get back on its feet before falling over again. Education, healthcare, communication, warfare, and economics will change. Power structures will shift.
All of this means great stress on society, with the risk that parts of it break down. And this could happen even if AI development slows down, if the technology that exists today is used offensively. If development continues at the same pace or accelerates, which there are good reasons to believe, the risks of cracks in society become greater.
On top of that, there are scenarios where AI is actively used to cause harm. AI development is on its way to making intelligence more accessible, for everyone. This is a good thing in many ways, but it is also a tool for, for example, organized crime. We must expect more and better automated cyberattacks, and more effective fraud, threats, and extortion. We must expect terrorist organizations to gain better opportunities to plan and carry out attacks, authoritarian states to find it easier to monitor and control their populations, and extremists and doomsday cults to try to use AI to cause as much damage as possible – even if it means dying themselves.
It’s important to reduce the risks
There are more examples to give, but my point is that the AI transformation that is underway contains risks at a level that could be dangerous for society, world order, or even our civilization as a whole.
Experts make different assessments of how large these risks are, from negligible risk of societal-level damage to more than 99 percent probability that all humans die. There are several compilations and weightings of experts’ assessments, and there are various attempts to make systematic calculations of the actual magnitude of these risks. My position is that none of these can give exact or reliable answers – but also that we don’t need exact percentages. The probability of AI development causing major damage at the societal level within ten years is high enough to take seriously.
I chose to resign from my job with AI and education to try to contribute to reducing the large-scale risks of AI. You don’t do that by influencing how AI is used in schools in Sweden (or the EU). To do that, you have to influence the direction of AI development itself – globally.
Safe AI provides enormous opportunities
Today’s AI has some fundamental flaws. We cannot get a clear view of how AI models work. We cannot explain why they make certain decisions or deliver specific results. We cannot predict how they will behave in new situations, and we fail to prevent them from behaving inappropriately.
To be able to take full advantage of AI, we must be able to trust AI systems much more than is currently the case. With reliable AI, we can improve decision-making in society, accelerate research in everything from cancer to renewable energy, and provide high-quality education and healthcare to everyone.
Research on creating reliable AI is moving forward, but far more resources are being put into creating larger, more capable, and more profitable AI models – models with the fundamental flaws mentioned above. The difference between the two efforts is like a race between the hare and the tortoise, but without a friendly narrator ensuring that the tortoise has a chance to catch up.
To give reliable AI a chance to win, a number of things are needed. More resources are needed for AI safety research, but we also need rules that ensure the largest AI models aren’t the most profitable unless they are sufficiently reliable. Diplomacy and global cooperation are also needed, since many AI risks have global effects. And more work is needed to even know what work is needed.
My goal is to contribute to reducing societal-scale AI risks and to improving the conditions for reliable AI. It’s a bold and ambitious goal, but also one important enough to bet on.
Falk AI is a think tank and consulting company that I use to build the competence and experience needed to get hired at a place where I have a chance to reduce global AI risks.


