Episode 273: America’s AI Strategy: Balancing Innovation, Governance, and Strategic Advantage in the Global Technology Race with Professor Adam Chalmers
Today Dominic Bowen hosts Professor Adam Chalmers on The International Risk Podcast to explore the global race for leadership in artificial intelligence. They discuss the United States’ AI Action Plan and China’s AI Plus Plan, how these competing strategies reveal different models of governance, regulation, and ideology, and what this competition means for innovation, global influence, and risk. Together they examine how the U.S. approach emphasizes open innovation and technological dominance, how China’s plan embeds ideology and state control, and how the European Union’s AI Act represents a third path prioritizing human-centric regulation.
Dominic and Adam also dive into the economic and geopolitical stakes of the AI race, from workforce disruption and re-skilling to public trust, data sovereignty, and the challenge of building safe and transparent AI systems. They explore how governments can manage risk while fostering innovation, how universities and industries must adapt to rapid change, and what it means for democracy and international stability as artificial intelligence becomes a driver of both progress and power.
Professor Adam Chalmers is an Associate Professor of Politics at the University of Edinburgh, the CEO and Founder of Resonate AI, and a leading voice on the intersection of political economy, technology, and governance. He has advised governments and organizations on AI strategy and risk, and his work bridges academic research with practical solutions for emerging technologies.
Drawing on his research and field experience, Adam explains how AI is reshaping global politics, why public trust and ethical frameworks will define its future, and how democracies can respond to the rapidly evolving risks of the digital age.
The International Risk Podcast brings you conversations with global experts, frontline practitioners, and senior decision-makers who are shaping how we understand and respond to international risk. From geopolitical volatility and organised crime, to cybersecurity threats and hybrid warfare, each episode explores the forces transforming our world and what smart leaders must do to navigate them. Whether you’re a board member, policymaker, or risk professional, The International Risk Podcast delivers actionable insights, sharp analysis, and real-world stories that matter.
Dominic Bowen is the host of The International Risk Podcast and Europe’s leading expert on international risk and crisis management. As Head of Strategic Advisory and Partner at one of Europe’s leading risk management consulting firms, Dominic advises CEOs, boards, and senior executives across the continent on how to prepare for uncertainty and act with intent. He has spent decades working in war zones, advising multinational companies, and supporting Europe’s business leaders. Dominic is the go-to business advisor for leaders navigating risk, crisis, and strategy; trusted for his clarity, calmness under pressure, and ability to turn volatility into competitive advantage. Dominic equips today’s business leaders with the insight and confidence to lead through disruption and deliver sustained strategic advantage.
The International Risk Podcast – Reducing risk by increasing knowledge.
Follow us on LinkedIn and Subscribe for all our updates!
Transcript:
[00:00:01] Adam: The action plan is a risk mitigation plan, but in a way that’s a little bit backwards, whereas the EU is risk first and saying, we’re gonna actually rate risks and we’re gonna [00:00:10] tell you what kind of risks are acceptable and what level of risk is unacceptable.
[00:00:22] Dominic:
Hi, I’m Dominic Bowen and welcome to The International Risk Podcast, where we unpack the risks that are shaping our world.
Today we’re going to be [00:00:30] exploring the global race to lead artificial intelligence, the strategic choices being made by the United States as it seeks to maintain its technological edge and what the impacts are on [00:00:40] businesses and governments in Europe and around the world.
As AI systems are becoming much more powerful and much more pervasive, governments are grappling with how to balance innovation with responsibility [00:00:50] and how to balance competitiveness with collaboration. Now, with the release of America’s AI Action Plan, the United States has laid out a strategy that touches on everything from national [00:01:00] security and infrastructure to workforce development and even governance.
But beneath these bold declarations, there are really complex questions that remain unanswered. To help us navigate these issues, [00:01:10] I’m joined by one of the leading voices on the intersection of technology, governance, and strategic foresight. Our guest today is Adam Chalmers. Adam is the CEO and founder of [00:01:20] Resonate AI and he brings together deep expertise in political science and advanced technologies to help bring innovative solutions to what are really complex societal challenges.
He has a [00:01:30] PhD from McGill University and he is also an Associate Professor in Politics at the University of Edinburgh.
Adam, welcome to The International Risk Podcast.
[00:01:37] Adam:
Dominic, thank you very much. It’s a very generous introduction. It’s a pleasure to be here.
[00:01:41] Dominic:
Fantastic. The release of America’s AI Action Plan in mid-2025 really marked a significant moment.
It’s framed as a strategy to [00:01:50] cement US leadership in artificial intelligence and has this three-pillars approach across innovation, infrastructure, as well as international diplomacy. But beyond this blueprint, [00:02:00] there is just this unprecedented level of investment, the rapid adoption, but at the same time, there’s growing concern and discussion around regulation, around trust, and even fragmentation.
So if we begin with that, [00:02:10] how do you see the US positioning itself vis-à-vis China and the European Union, which, interestingly, wasn't mentioned even once in the AI Action Plan? What does this mean for the world? Is it even worth continuing to talk about the AI Action Plan? Is it something that's important?
[00:02:25] Adam:
Great question — absolutely something that’s important. What’s a good place to start [00:02:30] is with the idea here that this is an action plan to win the race. This is about America winning the race against competitors and in particular rivals.
So I think there is a really important [00:02:40] comparison and contrast we can make between the AI Action Plan and China’s AI Plus Plan. And it’s no coincidence that these are things that were issued around the same time, in the [00:02:50] same month, in the summer. The comparisons are really interesting to dig into.
Definitely a far-reaching, future-oriented action plan that is [00:03:00] meant to pave the path essentially for American AI dominance across a broad range of different fields — technologically and in terms of regulatory standards. It’s interesting; we can talk [00:03:10] about the EU as well. I think it also makes for an interesting comparison.
[00:03:13] Dominic:
Fantastic. We’ll dive in and explore China’s AI Plus Plan — which, it’s always great just to put a plus or AAA rating on the name [00:03:20] of a plan and give it that extra muscle — and Europe as well.
If we keep looking at the US, there's lots of plumbing in this plan: blueprints, directives. Is there a single policy lever that really [00:03:30] stood out to you as something that will measurably move US productivity and adoption within the next 12 to 18 months?
What's the KPI we should be judging success by when we look at July 2026?
[00:03:42] Adam:
If we had to lay out KPIs for it, the first would probably be success in rolling out this America-oriented [00:03:50] Action Plan, which means developing AI, in particular early-stage Gen AI, to American standards. And there's a specific flavor to that.
You could ask: is that something we're seeing? It means [00:04:00] creating these LLMs without ideological bias, without any kind of ideology baked into them. Objectively driven LLMs. Do we see that now in these American LLMs?
And then, internationally, [00:04:15] the pillars of the Action Plan are innovation, infrastructure, and international engagement. The United States has a priority on exporting the full AI stack to its allies — that’s everything [00:04:30] from apps to chips. If we’re seeing the adoption of that technologically driven spreading of norms and standards, that’s another KPI indicating success. Both of these things are super interesting.
[00:04:41] Dominic:
How does that differ? The US AI Action Plan is quite orientated toward maintaining and expanding US dominance through rapid [00:04:50] innovation, deregulation, investments, streamlining permits, and regulatory improvements or removing bottlenecks.
It seems that innovation and competition are really [00:05:00] prioritized, perhaps over risk management. Comparatively, the Chinese AI Plus Plan focuses on deep integration of AI across all Chinese sectors and industries. It’s about empowering the entire economy, upgrading value [00:05:10] chains, transforming governance models.
Is there a better one? Which do you think is the better plan? Can you score them? Grade them? Which part would you want to be a part of?
[00:05:21] Adam:
We should score them at the end; it might be fun to give them a score. I don't think they're entirely distinct.
I think at [00:05:30] the core there’s a lot in terms of investment. These are innovation-centric policies. It’s about speed, but also mitigating risk. But I think it’s speed first, and in both [00:05:40] instances the chief concern is: what ideology pervades these models?
In the US case, it’s supposed to be ideologically [00:05:50] agnostic — saying, we’re going to have a light touch in terms of the guardrails that are used by AI companies when they train their models so that what you’re getting back is not skewed in any way.
There’s specific language around removing, for example, [00:06:00] things related to climate science or DEI initiatives, but it’s meant to prioritize free speech, democratic values, transparency, and public trust.
Then the Chinese side is saying, we’re going to bake ideology into this — a specific Chinese socialist ideology — and it’s about [00:06:20] cultural confidence and coherence and these types of things. But ultimately, I think we could debate — and I’d love to hear what you think — whether the American version is really ideologically [00:06:30] agnostic, or if it’s just a different ideology.
[00:06:32] Dominic:
I think that’s a really interesting point, and I think we all have biases. One of the things we speak about a lot on the podcast is recognising that self-reflection: even if we’re trying to do objective analysis — collect the data, quantify everything, [00:06:50] then provide an objective analysis for our clients — we have to accept that how we collect the data brings biases.
Take one example: I go to the Australian Government's DFAT Smartraveller site. I think it's a great website when looking at risks in countries, but I'm biased: I'm Australian. It's no [00:07:00] coincidence that's often the first site I go to. I go to the Brookings Institution, I go to Carnegie, I go to the International Crisis Group, because they're the sources my universities recommended decades ago.
So we bring these biases to everything we do. It’s hard to imagine that US scientists based in the US, who are building, managing, refining, and improving these models, are not bringing biases when they come to the table.
[00:07:22] Adam:
It’s very interesting if you look deep into the consultation that was conducted prior to the Action Plan. There was a large [00:07:30] stakeholder consultation, so you get a lot of voices from the public and from industry actors and leaders — and Elon Musk in particular. It’s supposed to be open, objective, democratic, free-speech driven. These are [00:07:40] democratic principles, thinking about public trust and transparency — free of ideology. It’s just a different form.
The international part of the Action Plan is about spreading those [00:07:50] values through technology. It’s really interesting — soft power. You’re spreading these norms and ideals of a specific state through the world via technological innovation.
It’s [00:08:00] a type of techno-diplomacy.
[00:08:01] Dominic:
I agree with you completely. This is a classic form of soft power, and, I'm sure there's someone doing their PhD on this right now, it might even turn out to be a third way, a middle way between hard power and soft power. Then you've got AI influencing how people think and research.
You just mentioned Elon Musk and, yes, he’s been criticised for some of his ideologies, but he’s widely recognised for his innovation. That’s generally seen as his secret sauce — the gold dust that he brings to companies. And the US AI Action Plan really positions innovation as a central pillar.
Are you seeing in practice that there are specific areas of innovation being prioritised in the US and even across Europe? And how do they relate to the broader geopolitical goals of European states and the current US administration?
[00:08:45] Adam:
I mean, the first thing in terms of the Action Plan is that it’s supposed to be kind of horizontal when it comes to the sector.
So it’s meant to not be picking out champions. It’s meant to be something that is more market-driven and picked up in all potential sectors. So it’s an interesting comparison because the Chinese AI Plus is meant to permeate all sectors — but that’s state-led. In the United States, it’s meant to be sector-agnostic.
The market drives it into all relevant sectors, so it’s not picking champions at all. So, you know, we’re talking about pharmaceuticals, healthcare, agriculture, technology, all kinds of stuff. And then of course it is — you know, chips to apps, hardware to software — it’s super far-reaching.
I think it would actually be against the spirit of the plan to pick a champion, and I don't have any anecdotes of my own about where I see a lot of investment going. But I do think there's something to be said about chip development and innovation.
And then you think about the work of TSMC or Nvidia and their role in creating these massive supercomputer factories and this type of thing. It’s amazing to watch, actually. Have you heard anything about industries being championed or what’s developing more quickly in the States?
[00:09:54] Dominic:
Not across the States. And when we look at Europe, I actually had a debate with someone yesterday afternoon. They were quite concerned about this race forward that we’re seeing in many regions — primarily between the US and China — but also about making sure that Europe is regulating enough to make sure we’re safe.
And, you know, I was taking the opposite argument in that debate. I think Europe runs the risk of falling behind. If you just look at the numbers, if you took away the labels (US, Europe, Russia, East Asia), Europe should be performing better.
It’s got the universities, the population size, the demographics — it’s got so much going for it. Yet when you look at the outcomes, whether it’s the number of European companies performing or the market cap of the top 10 tech companies in the US versus Europe’s top 10 — there’s just no comparison.
You just have to scratch your head. Why is that? Part of it is cultural. There’s this American attitude of “race first, break things, and as long as you succeed on the sixth try it doesn’t matter if you fail the first five times.” Whereas in Europe, the thought of failure is an all-encompassing fear.
I think, combined with that, the heavy amounts of regulation do make risk-taking and racing forward quite difficult.
[00:10:59] Adam:
I think that’s right, yeah. We’re talking about the EU AI Act in particular as one example, because the UK has its own agile, innovation-first strategy that looks a little bit like the United States.
I’ll give you some insight from my own field in a second about where I see a lot of things being prioritised or racing forward. But there’s a big misconception here that I think is worth clearing up.
You hear a lot about the EU being a rule-maker and the US being an innovation leader — as if they’re playing opposite roles. I think they’re playing different ones. It’s a bit of an apples-and-oranges situation.
If you look at the US AI Action Plan, it’s not a set of prescriptive regulations. It’s not about saying “let’s regulate these things.” It’s more about making sure we cut regulations to promote innovation.
We know this. But it’s not meant to say, “What are we going to do when we get to a level of artificial general intelligence?” It’s saying, “We’re at early-stage generative AI, and we want to win the race.” And this is a roadmap for that.
The EU AI Act is different. It’s not talking about innovation and LLMs and early-stage Gen AI — it’s saying, “We’re going to set up some rules that protect users against things like social scoring or facial recognition.”
So the EU is focused on use cases, and the American approach is focused on innovation — new technologies.
I think the better comparison is between China and the US. The EU sits above that, doing something different: focusing on use rather than innovation. That’s my take.
But I’ll add one anecdote from my own experience, since I run a company that sits at the intersection of tech and AI. We haven’t trained a large language model. We’re not a billion-dollar company. But we are one of the many companies racing to figure out good use cases for AI, for all sorts of purposes.
What I’m seeing a lot of talk about right now is agentic AI. I don’t know if that term is everywhere, but this seems to be something that’s gained traction over the last six months to a year. You don’t see a huge number of amazing agents running around doing crazy things yet, but there’s a lot of innovative work happening in FinTech, crypto, and Web3.
How can we use AI in the space of cryptocurrencies, or even NFTs? There’s a lot happening there. That’s not my area, but I know people working in those spaces. I think agentic AI is one thing, but the bigger question is how companies that aren’t creating their own language models can use them for specific ends.
And then it’s like, how do we get creative in terms of sitting on top of, or leveraging, a large language model for all kinds of amazing use cases? That’s what I’m trying to do. You nodded your head — you’ve heard of agentic AI, and it’s kind of a buzzword right now, right?
[00:13:26] Dominic:
Definitely. I think AI was something that five or six years ago not many of my clients were speaking about, but clearly they were working on it. As soon as it became mainstream, they’d say, “Oh yeah, we’ve had this program for years.”
Not surprising, clients in the financial trading sectors have been at the forefront of this. It actually draws me back to high school — I remember one of my physics professors leaving to work in a trading firm. When I think about the work he said he’d be doing, I realise that it sounds a lot like the models we’re using today.
It makes sense — autonomously managing complex trading strategies, identifying real risks in real time. Those are logical use cases.
We’ve also spoken on this podcast to some fantastic guests. One, in particular, is working with the US Navy. Their company uses AI to improve maintenance processes for the Navy. They told us that about one-third of ships are in maintenance, one-third in training, and one-third deployed at any given time.
Imagine if you could speed that up — if you could get ahead of maintenance issues using things like agentic AI. I think there are some great use cases out there.
But for many people, that’s still a bit abstract. Of course the financial and defence sectors are using it, but what does it mean at an individual or small-business level? What can they be doing with it? Do they just need to wait and buy something off the shelf?
[00:14:42] Adam:
Find some novel use cases for it, because I think it’s possible. It’s more about asking: what discrete functions do people not want to do, and how can we improve their lives by creating an agent for it?
I think there’s a bit of an arms race in agentic AI — creating these agents — and also something called retrieval-augmented generation, or RAG systems.
RAG systems make large language models more accurate, reliable, and transparent because they bolt on additional information. It’s a way to make AI point to specific, verifiable data, which reduces hallucination.
Maybe your company has hundreds of thousands of internal documents. You can build a RAG system powered by an LLM that retrieves from those documents, so your organisation can respond faster and know its own information more reliably.
I think people are racing to find really innovative applications for these technologies. It could be in FinTech, EdTech, RegTech — all sorts of fields. Really interesting.
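To make that concrete, here's a minimal sketch of the RAG pattern Adam describes: retrieve the most relevant internal documents for a query, then hand them to the model as grounding context. This is illustrative only, not any particular product. It uses a toy word-count similarity where a real system would use embeddings, the final prompt string stands in for the actual LLM call, and all the names are our own.

```python
# Minimal RAG sketch (illustrative): retrieve relevant documents,
# then ground the model's answer in them. Pure Python, no dependencies.
from collections import Counter
import math

documents = {
    "policy.txt": "Employees may claim travel expenses within 30 days.",
    "security.txt": "All laptops must use full-disk encryption.",
    "onboarding.txt": "New hires complete compliance training in week one.",
}

def tokens(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity over raw word counts, a stand-in for the
    # embedding-based similarity a production RAG system would use.
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    q = tokens(query)
    return sorted(documents, key=lambda d: cosine(q, tokens(documents[d])), reverse=True)[:k]

def build_prompt(query: str) -> str:
    # Citing the retrieved sources is what makes the answer verifiable
    # and curbs hallucination; a real system would send this to an LLM.
    context = "\n".join(f"[{s}] {documents[s]}" for s in retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How do I claim travel expenses?"))
```

The key point of the pattern is that the model is never asked to answer from memory alone: the documents are indexed and retrieved at query time, not trained into the model, which is why sources can be shown alongside every answer.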
[00:15:00] Dominic:
You talk about racing ahead, and earlier you talked about these speed-first approaches in both the US and in China.
I think that often brings risk when we’re focused on speed first. Looking at safety, governance, and coordinated risk management can sound a little bit boring, but it really does have to keep at least close to the pace of development and competition.
You mentioned your company, Resonate AI. When you’re advising clients and helping them to race, to be competitive and ahead of their competition, how do you encourage and support them to do that in a way that’s still safe — still identifying and mitigating the potential risks of racing ahead?
[00:15:40] Adam:
Good question. So, our company helps other companies be compliant with sustainability regulations. We’re actually kind of a reg-tech company.
But there’s risk there, because what we’re saying is, “Hey, we’ve got answers for you that you’re going to turn around and use with confidence.”
We provide that confidence because what we’re doing here is not just making guesses. We reduce the guesswork to zero and provide full transparency with every piece of information we give.
That comes from my own background as an academic, applying a level of research rigour that supports high standards of confidence. There’s no guesswork. So far, our clients really respond to that.
But I think risk is a big thing. Maybe we narrow it down, because the podcast here is about risk in these broad terms. What’s the risk dimension in the AI Action Plan, for instance? How do you mitigate risk by taking away the guardrails through deregulation?
I think that’s a fair point. If we go back to that, the Action Plan is a risk mitigation plan — but in a way that’s a little bit backwards. Whereas the EU is risk-first, saying, “We’re going to actually rate risks and decide what kind are acceptable and what aren’t,” the US approach is to mitigate risk by moving fast and encouraging innovation.
This is almost like a risk-rating system in itself.
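For readers who want the contrast made concrete, here's a hedged, toy illustration of the EU's risk-first structure in code. The four tiers are the Act's published ones; the example use cases and the function are our own simplification, not a legal classification tool.

```python
# Toy illustration of the EU AI Act's four risk tiers (an illustrative
# simplification, not legal advice or the Act's actual text).
RISK_TIERS = {
    "unacceptable": {"social scoring", "real-time biometric surveillance"},
    "high": {"cv screening for hiring", "credit scoring"},
    "limited": {"customer service chatbot"},  # transparency duties apply
    "minimal": {"spam filtering", "ai in video games"},
}

def classify(use_case: str) -> str:
    # Look the use case up in each tier; unknown cases default to minimal.
    for tier, cases in RISK_TIERS.items():
        if use_case.lower() in cases:
            return tier
    return "minimal"

for case in ["social scoring", "credit scoring", "customer service chatbot"]:
    print(f"{case}: {classify(case)}")
# Unacceptable-tier systems are banned outright; high-risk systems face
# conformity assessments before they can be deployed.
```

The design point is the one Adam makes: the EU regulates by use case rather than by model, so the same underlying LLM can sit in different tiers depending on what it is deployed to do.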
[00:17:58] Dominic:
I think that’s really interesting, because the US AI Action Plan highlights workforce development and re-skilling.
I think it was Fortune magazine that calculated about 10,000 job cuts in the US linked to AI-driven automation. These are primarily entry-level roles — predictable, knowledge-intensive, and junior tasks.
So how is the US approaching these challenges? We’re hearing about the risks for new graduates with excellent degrees and MBAs from prestigious colleges. What needs to be considered when we think about the risk emerging from outdated education and training systems, and whether they’re keeping up with the current world?
[00:18:38] Adam:
It’s really interesting, because this is a fast-moving space.
I’m a professor by training, an academic, and I’m seeing students come through. You and I had a quick chat about the use of ChatGPT by students — that’s barely the tip of the iceberg. The real question is: what’s the value of education in a world with AI?
We’re talking about a world where a lot of entry-level white-collar jobs could be decimated in the next five to ten years.
You’re already hearing about new strategies prioritising degrees in trades — what we used to call college-level qualifications. I’m from Canada, and we used to distinguish between college, which focused on trades and practical skills, and university, which focused on higher education. That’s being flipped around now.
Education is being radically challenged and will have to change. Unfortunately, universities aren’t well-equipped to be agile, and they’re going to face serious challenges.
And then, in terms of labour issues and workforce retraining, if you go back to the American example, it’s about balance. The goal is to make sure we’re training the cream of the crop when it comes to AI — promoting AI training — but at the same time, shoring up defences against mass displacement.
It’s not about AI replacing workers outright, but about productivity rising so fast that fewer people are needed. These are contentious issues, right? We just don’t know yet. We’re trying to see the future as it unfolds.
[00:20:03] Adam (continues):
I’ll throw in a little fun fact for you there. It’s the idea of the Jevons Paradox: that efficiencies in labour and productivity should, in theory, increase demand for labour rather than reduce it.
Again, we’re looking into the future and trying to predict what will happen. The doomsayers say we’ll lose tons of entry-level white-collar jobs — lawyers, doctors, and so on. Others say it’ll make us all more productive and increase demand for these jobs.
[00:20:40] Dominic:
Yeah, it’s a really interesting paradox. I think it was Goldman Sachs that released analysis showing AI-driven innovation could raise labour productivity by about 15%, but at the same time displace about 7% of the US workforce.
They didn’t say “make unemployed” — they said “displace.” That’s the complexity: balancing displacement with efficiency.
At a time when public trust in AI is still fragile, and many people are unsure what the future looks like, what role can the AI Action Plan and policy play in addressing people’s fears — about bias, job loss, surveillance — while still encouraging innovation?
[00:21:26] Adam:
I think at the core of the Action Plan is public trust. It’s about building trust from the ground up.
It starts with open source and open weights. It’s about how models are trained and reassuring the public that their chatbot isn’t curating information in a specific ideological direction.
If you think about it positively, the Americans are prioritising LLMs that are radically objective — and that’s good, because people will see that in how they interact with them.
Then trust is nurtured through the sense that users aren’t being indoctrinated or talked down to. If people want information, they get it — not propaganda.
But it’s not just about objectivity. You also need trust around the economic side — can the public trust the government to maintain guardrails against job loss and disruption?
I think that’s it, Dominic. Public trust is probably at the centre of everything, or at least it could be. It’s meant to be ideologically agnostic and radically objective — saying, “Maybe you’ve had bad experiences, maybe you’re unsure, but this is what we’re doing.”
This is the American approach, and it stands in contrast to the Chinese one. Follow us — that’s the international part. We’re going to set the international standard by exporting this technology globally.
[00:22:00] Dominic:
I’d love to pick up on that part about exporting it. Obviously the US Commerce Department and State Department are really pushing and promoting American AI exports: hardware, software, models, and even standards. Part of this is to counter Chinese influence and foster AI alliances around the world.
At the same time, we know that the US AI Action Plan has national security as a core component. It’s this whole-of-government approach with the US Department of Defense, the Department of Energy, and Homeland Security collaborating with academia to assess risks.
You’ve got what I see as a push-pull dynamic within the US administration, imposing export controls while pushing out US-led hardware, software, models, and standards. There’s so much back and forth it’s hard to track: which Nvidia chips, for instance, are allowed to be exported to China and which aren’t.
Then there are companies like Huawei, which has been pushed out of many government contracts and facilities; Chinese networking gear like TP-Link routers, widely flagged as a security risk; and rumours about BYD cars, and whether driving one could expose personal data.
So how do you see this landing? What will the US administration open the doors to, and what will they close them on? How is this likely to impact relationships, given that China is pushing and sharing its technology? They even have the equivalent of a Digital Silk Road.
[00:24:50] Adam:
I’m not sure exactly if that’s the case, but yes, they have this — they’re also looking to have an international perspective for sure.
It would be through their established Belt and Road connections. I think there are competing systems. When you think about the exporting of the American AI full stack, it’s not to the world — it’s to their allies.
It’s very specific. It’s still a compartmentalised world we’re seeing — not a fully global one. The Chinese are doing something similar — promoting ecosystems within their own sphere of interest.
So you’re seeing two competing systems. It’s a question for the future to see what this looks like in terms of trade deals and how it actually works — how much is open source, how much is shared, what parts of the stack — from apps to chips — are included in these deals.
That’s probably where the real challenge lies. So far we have a nice roadmap, but how it plays out remains to be seen.
At the very least, it’s two competing systems and two major global actors, each with its own ambitions and international dimension. China is more inward-looking for all kinds of reasons — not least because it’s struggling with its own economic systems and maintaining progress.
This is a way for China to pivot or reframe those challenges. But at the same time, it’s definitely international in outlook. Both systems are quite similar — it’s just different worldviews.
Which worldview do you have baked into your LLM? Is it democracy or socialism? Those are the two major paths.
[00:26:22] Dominic:
It is quite interesting. We know there are government agencies like DARPA — the US Defense Advanced Research Projects Agency — and from what we can see externally, they’re involved in cutting-edge AI research and development.
You’d expect that, given how many innovations originated with DARPA over the decades. But most of the visible momentum seems to be coming from the private sector.
I’d love to hear from you about that relationship — between governments, defence, intelligence, and industry — and how it’s evolving under this new strategy.
Are there particular developments or projects that you’re especially excited about? Because we talk a lot about risk on this podcast, but risk is only half the equation — the other half is opportunity and what we should be excited about.
[00:27:02] Adam:
I think in the American context it’s going to be private-sector-led innovation for sure — but supercharged through the Action Plan and investment.
There will be a lot of interesting dynamics where you see private-sector AI companies being employed in defence and security. We’re already seeing some of that.
But what really excites me is the large number — the plethora — of AI companies popping up to serve every possible need and fill every possible gap that exists out there.
In fact, just the other day I was doing something for my own business — a sales and marketing project. I was going through the paces old-school — email lists, lead generation, the usual.
There are companies that sell you bespoke email lists. I thought, there must be an AI that does this for a fraction of the price. Within two seconds I found five AI companies that do it at a really high level.
That’s what gets me excited — all of these niche applications. Not the “do-everything” AI, but bespoke, specialised systems that fill specific gaps. Watching these emerge and evolve so quickly is amazing.
I’m also involved with a company in Brussels that focuses on lobbying. It’s an AI company providing a one-stop shop for information for lobbyists. It helps government affairs professionals know what’s happening in the European Commission and Parliament so they can act when needed.
[00:28:46] Dominic:
Isn’t that interesting? Yeah. It’s touching all parts of our lives.
[00:28:50] Adam:
You can bet they already have one in Washington. It’s been around for a while — there are a few competing examples — but it’s fascinating to see how AI is touching every aspect of our lives.
[00:28:58] Dominic:
Very, very interesting. And when you look around the world, Adam, there’s a lot of international risk. We’ve spoken about some of the opportunities, but when we look at artificial intelligence and its future — policy, government projects, and emerging ideas — are there international risks that concern you more than others?
[00:29:15] Adam:
In terms of specific risks, you start to think about existential ones — security, defence, and our freedoms. That’s why it’s useful to look at the EU’s AI Act, which focuses on human-centric risk regulation.
It’s about setting up a framework because there are genuine issues around personal liberty and sovereignty over one’s own data. That’s a big thing.
When you move into security and defence, the issue that really captures my imagination — and people’s imaginations for decades — is the existential risk of AI killing us all. That might sound dramatic, but there are serious debates about it.
A lot of leaders in AI, including Elon Musk and some of the godfathers of the field, have weighed in. They rate the likelihood of AI wiping us out at anywhere between 10 and 90 percent. Some put it as high as 90.
If you want, you can check out my LinkedIn post about this — I put together a diagram showing different “P-doom” predictions — the “probability of doom.” Elon Musk puts it at about 10 to 20 percent.
That’s the kind of thing that keeps people up at night. When we start thinking about artificial general intelligence and superintelligence, we realise the biggest risk is us. That’s the real implication.
[00:30:37] Dominic:
Yeah, when we’re talking about the complete destruction of humankind, I think any probability above 0.1 percent is worth paying attention to.
We’ll link to your post on that — it’ll be a fun, if slightly terrifying, read.
But Adam, thanks very much for coming on The International Risk Podcast. It’s been a fascinating conversation.
[00:30:55] Adam:
It’s been great fun, thanks a lot — I really appreciate it.
[00:30:57] Dominic:
That was a really great conversation with Professor Adam Chalmers, whose work sits at the intersection of political science, artificial intelligence, and strategic governance.
I really appreciated Adam helping us walk through and navigate these complex trade-offs between innovation, legitimacy, governance, and global coordination.
Today’s episode was produced and coordinated by Katerina. I’m Dominic Bowen, your host. Thanks very much for listening to The International Risk Podcast. We’ll speak again in the next few days.