Episode 251: The Future of National Security and AI From a Military Perspective with Zac Staples

Coordinated and Produced by Elisa Garbil

This week Dominic hosts Zac Staples and they dive into how AI-native defence systems are reshaping the battlefield—and why getting this wrong isn’t just bad for business, it’s catastrophic for global stability. 

  • They discuss the challenges and opportunities AI brings
  • What AI’s role could be
  • How AI can enhance security
  • Predictive maintenance and its strategic importance
  • Global security
  • Tempest OS
  • Nuclear deterrence
  • and more!

Zac Staples is a former U.S. Navy officer and the founder and CEO of Fathom5, an industrial defence-technology company headquartered in Austin, Texas. Since 2018 he has led Fathom5 to uninterrupted growth and profitability. Zac and the Fathom5 team are building Brilliant Machines with new actuators, cyber resilient designs, and AI optimization. Among the company’s many accomplishments are the first program of record deployment of AI aboard a Navy warship, 17 fully awarded patents for actuator and cybersecurity technologies, and several examples of accelerating defense innovation.

Before launching Fathom5, Zac served in the U.S. Navy for over two decades. His career of shipboard service culminated in Zac’s selection as the Secretary of the Navy’s 2017 “Innovation Catalyst” award winner for pioneering efforts to develop new digital and cyber capabilities. His background steers Fathom5’s vision toward tactical edge use cases for defence innovation.

The International Risk Podcast is a weekly podcast for senior executives, board members, and risk advisors. In these podcasts, we speak with experts in a variety of fields to explore international relations. Our host is Dominic Bowen, Head of Strategic Advisory at one of Europe’s leading risk consulting firms. Dominic is a regular public and corporate event speaker, and visiting lecturer at several universities. Having spent the last 20 years successfully establishing large and complex operations in the world’s highest-risk areas and conflict zones, Dominic now joins you to speak with exciting guests around the world to discuss international risk.

The International Risk Podcast – Reducing risk by increasing knowledge.

Follow us on LinkedIn and Subscribe for all our great updates!

Tell us what you liked!

Transcript:

Dominic Bowen: Welcome back to the International Risk Podcast. I’m Dominic Bowen, your host, and today we’re diving into a topic at the intersection of strategy, sovereignty, and Silicon Valley. Our guest today is Zac Staples. He’s a former US Navy officer and the founder of Fathom5. They’re a defense tech company that just rewrote the rules of what’s possible.

They deployed the first AI system aboard a US Navy warship, and Zac isn’t just playing with toys in the lab. He’s really building what he calls brilliant machines, ones that operate in the most unforgiving environments. And I think his work, from what I see and what I’m really looking forward to exploring with him, confronts some of the most important and critical questions in defense today.

How do we adopt AI technology at the tactical edge, and how do we do so without surrendering control, without giving up integrity, and while maintaining democratic oversight? So I think today’s conversation is about much more than just automation. It’s about risk ownership, strategic alignment, and designing systems that work when the cloud goes down, when the satellites go dark, and when rounds start flying.

Zac, welcome to the International Risk Podcast.

Zac Staples: Dominic, man, it’s great to be here.

Dominic Bowen: I’m really looking forward to unpacking with you today how AI-native defense systems can reshape, and maybe already are reshaping, the battlefield, and why this isn’t just a story for businesses; it’s about global security as well.

Zac Staples: That’s great. I think we’ll have a great conversation today.

I’m really looking forward to talking about a little bit of the work that we’re doing, and some of the work that we’re going to do in the future. We’re looking for collaborators and teammates, so I’m always willing to put the word out about that. And then even some work that we think is important that’s outside of our wheelhouse, which is really around how nations and organizations can ask for the right thing, so they get a governable artificial intelligence system that ends up solving the problem for which they went to the marketplace.

Dominic Bowen: Yeah. Amazing. No, that’ll be great to hear about. And right now, Zac, anyone that’s watching the news is seeing this real-time and continuing conflict in the Red Sea that’s disrupting shipping.

We’re seeing gray-zone tactics in the South China Sea. We’re seeing hybrid warfare in the Baltic Sea nearly every week. So I’d love to hear from you: how is AI reshaping maritime conflicts, and what are the risks that we’re underestimating when it comes to the current conflicts we’re seeing in the maritime environment?

Zac Staples: Yeah, I think let’s just step back from that a minute and look at the larger question, right? When you’re seeing gray-zone conflicts, what you’re probably referring to is things short of war: cyber attacks, GPS spoofing, all of these sorts of information-system attacks.

And then certainly you have missile exchanges going on in the Red Sea. I want you to think of those as having a great deal of information loaded into them. What we’re really seeing is the dawn of what information-age conflict will look like. There is not a single system fighting today, whether it’s drones in Ukraine, precision missiles updated in flight, the missile warfare going on in the Red Sea, or the gray-zone activities you’re talking about, where computing and the programs it runs are not intimately involved in the find, fix, track, target, and engage process.

And you can go back even just to the Gulf War, 20 or 30 years ago. You certainly had precision-strike technologies that were highly informationized, but you still had a lot of other technologies that were just mechanical engineering inventions and other physics inventions, like night-vision goggles. Those were inventions new to warfare, and so they were important, but they weren’t necessarily information inventions.

What we’re seeing now is that almost every single thing that’s reshaping how nation-states compete against each other, in the ideas domain as well as in the military-force domain, starts with a program somewhere.

And I think that’s the strategic, tectonic shift: what’s different in what we’re seeing now versus what we’ve seen in the past.

Dominic Bowen: Yeah, it’s really interesting. And in that landscape, the landscape that really is dominated by what’s hybrid warfare today: cyber attacks, economic coercion, disinformation, attacks on critical infrastructure.

Can you tell us about the role that AI-native defense systems are actually playing in strengthening our resilience and improving our security? And how are we making sure that these assets are contributing positively in what’s really a shifting and escalating geopolitical environment?

Zac Staples: Yeah, I think that’s a great question. Let’s talk about a framing idea for what an AI capability that’s useful now looks like, right? So I end up on planes, traveling a lot for various things. People ask me what I do, and I’m always trying to find analogies that really punch this home for the doctor who’s sitting next to me, or for the mom and her kid on the other side, right?

And I think the analogy that’s most compelling for me now is a vision of artificial intelligence and decision-support systems as the equivalent of eyeglasses in the 1800s. I think this is interesting because for 10,000 years of human history, and who knows how long before that, you turned 30 or 40 and your eyes started going bad; they started wearing out.

And some people have that from the time they’re born. Then in the 1800s we had this invention that was affordable and scalable, and everybody could see a little sharper. So in the space of 50 years, when eyeglasses became prevalent in most of society, that was a tectonic shift from the way human beings had operated for the previous 10,000 years in terms of the clarity of their vision.

And so I like to think about AI over this period in the same way: the best artificial intelligence allows us to have clarity of vision in data sets that are too large for us to find the interesting points in without that clarity. So, early on, I think the first way we can hope that artificial intelligence shapes the modern battlefield is by providing clarity of vision in the enormous sensor loads we have coming in and the data pools we have that need to be examined.

Dominic Bowen: And I think when we look around the world, it’s safe to say, and I don’t think too many people will find this offensive, that the West’s, and we could call that multiple different things, sure, but say NATO, the West, the security architecture and the security systems are really under pressure.

Whether we look at Ukraine, whether we look at Taiwan, whether we’re looking at digital espionage or sabotage of critical infrastructure. These are real-world things. I’ve got clients that I speak to at least once a week suffering from espionage or from sabotage of critical infrastructure. These are real-world threats that are happening across Europe, and I think also in North America.

Zac Staples: Oh, absolutely. Yeah. All over the world.

Dominic Bowen: And AI is being introduced sometimes without a clear doctrine, and we know that, especially in the military environment, doctrine is critical.

We live and we breathe and we learn from our doctrine; we adapt and we improve our doctrine, but we need that as a baseline. However, in Silicon Valley, you know, the tech sector really works on that ‘move fast and break things’ ethos. But I wonder about working in the defense sector, where lives are at risk and governance and democracy are at stake.

How do we make sure that we’re not moving too fast? How are we making sure that we’re staying in line with our democratic norms and that the military is catching up with the doctrine of AI?

Zac Staples: That’s a great question. Yeah, I love answering this one, right? Because if the answer is we should go slower, that probably means that centrally driven decision-making systems with a different ethics and value system than ours may end up further down this technology roadmap, and we could lose an existential conflict. So ‘go slower’ is probably a really bad idea. And then the question is how we might go as fast as possible, which almost inevitably entails some bad ideas.

And do it in a way that doesn’t pose unbearable risk to the things we hold most dear. So I’ll share with you, because I’ve wrestled with this question quite a bit, where we’re at. If you pick the domains in which you want to solve your policy and governance challenge so that the domain for application of AI is in general low risk, high reward, then you can figure that all out. ’Cause as you know, if we asked governments to write the policy on AI before we even knew what the systems were capable of, we’d write a bad policy, it wouldn’t apply, and we’d have hamstrung ourselves. So in fact, usually the best way to build policy is to go do it and then figure out what elements of governance and policy allow us to scale success while minimizing risk.

Which means you’ve gotta go out and do it, which means you need to find some place you can do it where there’s a high-reward opportunity and a low-risk opportunity. So let me give you a couple of examples that I think are actually useful. The first place humanity, mostly, experienced artificial intelligence was exceptionally high risk.

And I’ll talk through some of the ways it was high risk as the counterpoint to what a low-risk domain would look like. Almost everybody in the world, particularly in the West as you described it, experienced AI, whether they knew it or not, through social media and through advertising.

So, since 2012, when we realized the power of deep neural nets and convolutional nets to actually do categorization and examine large data sets, the first application of that was ad targeting and content targeting. And it just blows me away, ’cause if you abstract that a little bit, we basically said, and this is the only technology domain we’ve ever done this in, ‘Hey, here’s what could be one of the most powerful technologies humanity has invented.’

‘Let’s start with psychological experiments on people.’ That’s just stupid. I mean, it doesn’t take much to say, oh wow. And then we’re like, ‘And you know what? We should make sure that the people we’re experimenting on are young people, particularly young girls, and see how that goes.’ That was just ridiculous.

And so now we have all of these findings coming out, particularly about Facebook and Meta, about what they knew all along. It’s as bad as the Big Tobacco data was. And my opinion there is that was a bad domain in which to allow this very advanced, powerful technology to be experimented with.

And so where we’re at now in terms of domain selection is that we think machines and machine optimization is the ideal domain for this. For example, we’re using AI to predict when to change a bearing, or to predict how many hours of life are left on a particular machine.

Then the worst-case scenario is that a machine breaks. The best-case scenario is that we increase the availability and readiness of military assets, so we’ve got more players to put on the field, and that leads to all of the mass-related arguments of having more options for the commander. One of the worst things about that first area of social experimentation with AI was the time it takes from when you’re doing something that can cause harm until you see the harm realized. What we saw with the early ad targeting and content targeting was that 10 years of exposure to those sorts of algorithms had deep and lasting societal impacts on young people.

That’s pretty well established. Well, 10 years is too long to find out whether you’re doing the right thing. And so one of the other things we love about predictive maintenance as a military domain for AI is that if you’re wrong about that pump, you’re gonna find out in a month or two.

If you’re saying, ‘Hey, it’ll last a year,’ and it breaks in 90 days, well, you’ve basically just learned the quality of your AI 90% faster than we learned in the social space that it was a bad idea. So we love machinery-health predictions as a safe space to build governance and AI/ML models out to the tactical edge and back, because machines are at risk, but human psychology and human lives are not.

And then you can say, okay, what are the governance, data sets, access, and everything else that make this possible, and scale it out to other areas. And then on the flip side of that, let’s look at the macro grand-strategy level, like the US problem of countering China.

And, you know, the now-CNO has given this talk a couple of times, but I think it’s worth repeating. Admiral Caudle has said many times that there are about 300 ships and submarines in the US Navy. At any given time, about a hundred of them are at sea or available for tasking. There are about a hundred that are taken apart, with long-lead-time items needed to get back to sea, and those are probably unavailable for a conflict until about a year after it starts.

But it’s that other hundred, that third chunk, that’s not available right now because it’s in some form of maintenance or some form of crew training or whatever. If we could accurately predict exactly what it would take to get those ships ready, that doubles the size of the ready, available force.

And so predictive maintenance is not only a low-risk opportunity for the deployment of AI, it has an exceptionally high reward curve if you get it right, ’cause it effectively doubles the US battle fleet to deter conflict with China. So that’s just an extended example: there are ways to deploy military AI with very high reward and low risk, in domains that allow us to go as fast as we want, get it wrong a couple of times, and establish good precedent that can be expanded into other, probably higher-risk domains like command and control, targeting, et cetera.
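The fleet arithmetic described above can be sketched in a few lines. The figures are the rough, round numbers quoted in the conversation, not official fleet data:

```python
# Illustrative numbers as quoted in the conversation (approximate):
# ~300 ships and submarines, ~100 at sea or available for tasking,
# ~100 in deep maintenance with long-lead-time items.
total = 300
ready_now = 100
deep_maintenance = 100  # unavailable until roughly a year into a conflict

# The remaining third: routine maintenance or crew training that good
# prediction could make recoverable in time.
recoverable = total - ready_now - deep_maintenance

# If predictive maintenance lets us surge that recoverable third:
surged_ready = ready_now + recoverable
print(surged_ready)  # 200: the ready force roughly doubles
```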

Dominic Bowen: Yeah, that’s a great example. I mean, predictive maintenance is not necessarily the most glamorous topic, but as you said, it really is a low-risk, high-reward way for us to start implementing many of these technologies.

And I loved your comment about giving commanders options and the importance of that. I think any young officer in the Navy, Air Force, or Army learns very quickly that you need to limit your adversary’s options whilst maintaining your own information, options, and awareness as much as you possibly can.

I think it’s interesting how you talk about going slower being a really bad idea with defense technology, and that we need to go as fast as possible to develop the advanced technologies we need without posing unbearable risks. And I think that’s a really important underlying question: what’s our risk appetite?

What can we accept, what can’t we accept, and what’s most important to us? So the obvious question coming out of that, Zac, is: what are the risks that we’re trying to mitigate? What are our adversaries doing that means we can’t go slow, that we have to be running so fast with this ball?

Zac Staples: Yeah. I think it’s no secret that unmanned autonomous systems, which rely heavily on AI and ML for navigation, mission planning, and execution, are going to be an important part of future warfare. The conflict in Ukraine proved it, and now we’re seeing significant resources in the US pour into robotic and autonomous systems.

The second thing that I think everybody is aware of is command and control systems that help you sort and target, that provide that clarity of vision to decision makers, and that in many cases, at some point in the future, may be making engagement decisions autonomously, with the human on the loop instead of in the loop.

Those sorts of command and control and command and decision systems also have the opportunity to be accelerated a great deal. And it doesn’t matter whose framework you use. You can use John Boyd and the OODA loop: observe, orient, decide, and act. For him, it was a time-based drill.

The unit of measure was seconds. Or you can go back to one of the famous admirals, well, he was a captain at the time, Arleigh Burke, who said the difference between a good officer and a great one is about 10 seconds. There has been a time-based measure of performance for the quality of a military officer’s decision-making for centuries.

And so if AI enables our adversaries to use any one of those frameworks, pick one, they’re all time-based, to make good decisions faster, then we could lose a major war because of that. And so there is no strategy that prudently says we’re going to slow the advancement of AI when we know that could decrease our time on the clock.

That just doesn’t make any sense. And there are things that need to be solved just from a systems- and operational-integration perspective. For example, some algorithms need to run in hyperscale-cloud-style environments; they just need to operate across such large data.

Maybe that’s where the training happens, but you’re going to do inferencing at the edge. How those systems move data to the place where the algorithm is going to be trained, and how the inferencing model then gets back to the point of need, is different for every system.

I keep going back to predictive maintenance. It’s just a great opportunity to solve the systems architecture and say, listen, I’ve got this diesel engine, it’s got 50 sensors on it. I’ve got this reverse-osmosis unit that’s making fresh water.

It’s got 200 sensors on it, whatever. Right now I have all of this unclassified sensor data that I can use to do something really militarily relevant, like tell you what wrench you’ve gotta turn to keep it going. And I can sort out all those architectural problems in an unclassified domain at low risk, and then apply them to the systems we know need them.

Like command and control, target pairing, and robotic and autonomous capabilities.

Dominic Bowen: And we all know that America is clearly a superpower in many regards, whether it’s trade and economics or military power. There is no competitor to the US, and you use that as an example.

Are you able to talk about the other side? Because of course America’s defense spend, I think, encompasses that of the next seven, eight, nine countries. Is anyone getting close? Do we know if anyone is getting close when it comes to the use of AI, and in particular your example quoting that the difference between a good and a great officer is about 10 seconds?

One of the benefits of the American military, and I’ve had the huge blessing of seeing the American military in quite a few environments, including in northern Syria, in Iraq, and in Afghanistan, is that it is a very, very impressive machine. Say what you like about American politics; it is an amazing machine.

Very, very impressive and very, very professional. But we know that some of our adversaries are not necessarily as well practiced. Russia clearly is very well practiced in conflict, Iran to some degree, China much less so. Do we know what their ability is when it comes to command and control systems, predictive maintenance, et cetera, and the employment of unmanned vehicles?

Zac Staples: You know, that’s a great question. And what I’ll share on this point is true for every nation, so it’s equally true for our adversaries. There are things that every nation holds as its national capabilities, the real secret sauce, right? The things they’re working on and decide not to share with even their best allies, and that’s every nation’s sovereign prerogative.

What we don’t know is what China has, what Iran has, in those capabilities. And what we observe, we have to view in an environment so tailored by information operations that what we observe as their capability, and what gets written into Jane’s Defence as what they can do, may or may not be what they can do.

Particularly in a case like China, which is targeting a particular operation. The president of China has told his military to be ready to retake Taiwan by 2027. And so ‘ready’ in that context could include a lot of things they’re willing to demonstrate and a lot of things that only 10 people in a cave know about.

And everybody has that. So for me, I want to go back to this digitization of conflict. This is a podcast about risk, so let me share what I think is the greatest risk to the combined efforts of, particularly, Australia and the United States.

Both have very vested interests in the South China Sea and a long-time collaboration to deter conflict there. It’s that whatever we develop in the next three to five years in AI capabilities, if it doesn’t integrate and collaborate and cooperate with everything we’ve been building for the last 40 years, then we have a big problem.

And so one of the things I just firmly believe is that it’s not as much about inventing artificial intelligence as it is about integrating it. Artificial intelligence can bring so many capabilities, but the greatest risk lies with the countries that kind of invented the modern digital age.

Let’s take the American company that built this iPhone. They’ve built a platform that is interoperable with the apps built by a global community of app developers, and it all works together really seamlessly. That’s because of the digital-integration schema, the platform-as-a-service schema: there’s a platform, the iPhone, that exposes SDKs, and those SDKs enable creativity by a global developer community. That enables reliable modernization at an affordable price and at a pace of innovation that’s unmatched by anything else in the world.

But you can’t write an app for a Collins-class submarine or an Arleigh Burke destroyer, because we haven’t designed those architectures as platforms with SDKs that we share with our allies. And meanwhile, China is building a much, much more modern fleet.

Let’s be honest, Australia: we’re gonna build the AUKUS submarine together, which is fantastic. I had an opportunity to interact with lots of people in that community last month at a big technology symposium in the DC area, and that’s gonna be great.

But the most important thing we need to build together is a platform-as-a-service capability that allows small and creative innovators to offer capabilities, one of which will fit exactly the need a particular commander has. So let me give you a really practical example.

Australia’s building a bunch of small robotic and unmanned systems. The United States is doing that. Every country in the world is doing that. What we’re gonna find with all this new stuff is that we’re gonna say, you know what? That USV has the range and speed I need for this mission.

That modular missile, the Australian modular missile, actually has the warhead I need for this mission. And this sensor from this other small business, just a genius passive sensor from two guys who used to do cell-phone RF but decided to help their country.

I need to be able to put all that together in a composable package and then deploy it with a manned command and control system that can do the mission oversight. Nothing about the way your country or mine or anybody else in the West specs out defense hardware leads to that outcome.

And so our greatest risk, to put a finer point on it, is that the people who invented the transistor and the internet don’t build a defense modernization program that takes advantage of the way commercial industry modernizes and integrates, and we get outpaced in the digital domain across sensors, AI, adaptability, and other parameters of military goodness.

Dominic Bowen: Yeah, I’m glad you mentioned the AUKUS initiative. I mean, this is central to UK, US, and Australian defense, and I think really to all three nations’ long-term defense strategies. I know the Trump administration is doing a review in line with the America First strategy, which is fine and reasonable, but I think everyone in the UK and Australia is really hoping there are no about-faces on that, ’cause I think it’s a really critical part of everyone’s defense.

Zac Staples: Let me tell you where I come down on that, just ’cause it is so fundamental, right? There are brilliant, creative people and companies in all three of those countries. And if we can build AUKUS as the platform that allows unmanned systems from the UK, communication systems from US companies, and the submarines operated by Australian mariners to all come together at the point of need, then we’ve succeeded. America First only makes sense when it’s a platform that benefits America.

And it is enhanced by the incredible capabilities that our colleagues bring. So I’m with you: anything that sets AUKUS back is a bad decision. All three pillars need a little bit more investment, but we’ve gotta keep it going.

Dominic Bowen: Hopefully so.

Zac Staples: Yeah.

Dominic Bowen: And the hype around artificial intelligence is really deafening and it’s been so for the last few years. And at the same time, the stakes around defense, around security are really existential as you’ve been alluding to already. I’d love to hear from you, what’s the most dangerous myth about artificial intelligence that you are hearing in defense circles today? And what’s the truth we really need to face about AI before the next crisis happens?

Zac Staples: Yeah, yeah, yeah. This is a no-brainer for me. The sexy part of AI is algorithms built on synthetic training data. Almost every demo ever is a really sexy algorithm doing a really nifty thing, built mostly on MATLAB-generated training data. And then you get into, okay, we wanna make that real; let’s go collect all the actual data we would need to run that algorithm. That’s where all the work is. So I think the biggest myth is that we collect tons of data, the data’s there, and we’ve just gotta go build the algorithms. In fact, we do generate tons of data, but we don’t have it normalized, tagged, examined, whatever, and the funding streams for data-engineering projects just don’t compete well against really sexy demos of algorithms.

So the biggest myth about AI is that it’s about the algorithms. It’s really about the data.

Dominic Bowen: Yeah, I think that’s a really great point, and I hope we’re learning. I was in Ukraine last week with some executives from some very large defense companies, and some non-defense companies. And the data and the speed of learning that the Ukrainians have honestly put some fantastic European companies to shame.

And of course, the European companies aren’t living and dying by their effectiveness, but the Ukrainians are; they’re forced to. Even things like their research and development: it’s no longer based in places like Lviv and Kyiv, where some of the best universities are; they’ve had to move everything literally to the front line.

So when a drone operator or a radio operator or someone operating some sort of machinery realizes that the Russians have learned faster and changed their modus operandi, you talked about the OODA loop, observe, orient, decide, act, before, there’s no time to go back to Kyiv or Lviv and spend six months, which is the normal cycle.

Yeah, sometimes longer. You know, these things have to be changed in days, or people die, quite simply. So I think it really is such an important aspect: learning and moving fast. And the vast majority of emerging defense technology is often coming with commercial-first applications.

But national security, especially when not at war, can’t afford just to be breaking things. And I think this drive to develop advanced technology has to have safeguards around it: technical, operational, ethical, legal. And it’s great that we can have these conversations, you know, on a Tuesday afternoon.

What was that last one? Domain selection. Yeah, that’s a great one.

Zac Staples: Pick the problems that are high-reward, low-risk problems for the application of AI.

Dominic Bowen: Yep. That’s a really great point. Thanks for raising that. And so I’m wondering: your company’s doing a lot of really important and quite exciting work, and we’ll link to some of it in the show notes below.

But how are you integrating some fundamental concepts around governance, risk, and compliance into the core of your engineering processes and the work and partnerships that you’re doing?

Zac Staples: This is something that I’m actually hopeful for. So I’ve talked about this on a couple other podcasts, and I, love when it comes up. The Department of Defense in general has a really robust culture of v and v. You have to do verification, validation prior deployment. Now, there are some really negative side effects of that right now. It takes forever and it’s too expensive to do. But the good thing about for most defense systems, but the good thing about AI and digital, let’s just talk about kind of digital modernization in general, is that the test and reliability practices that AWS or anybody else does for site reliability engineering, where you really test the new piece of code functionally as well as for cybersecurity before you deploy it.

Almost all of that is automated. So the culture shift is for the Department of Defense to maintain its very rigorous, high standard for V&V but change the tools: automate the test suite, so that instead of V&V officials grading the output of each test, they’re involved in setting the thresholds the automated tests must meet, and then they’re not a roadblock to speed.

So then you can have your culture of very rigorous verification and validation in place. The DoD is one of the few places that has that culture, because of the risk to life in so much of what it does. You can keep that culture, change the tool set, and end up with systems that are rigorously and provably tested in a way that we are not seeing in any civilian deployments of this tech. So I think culturally, defense innovation in AI actually has a lot of promise because of that.
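The pattern Zac describes, certifiers setting thresholds once and an automated pipeline grading every build against them, can be sketched in a few lines. This is a minimal illustrative sketch, not Fathom5’s or the DoD’s actual tooling; the metric names and numbers are hypothetical.

```python
# Hypothetical sketch of an automated V&V gate: the certifying authority
# agrees on thresholds up front, and the pipeline grades each candidate
# build automatically instead of a human grading every test run.

from dataclasses import dataclass

@dataclass(frozen=True)
class Threshold:
    metric: str
    minimum: float  # the build is blocked if the measured value falls below this

# Thresholds agreed with the V&V authority in advance (illustrative numbers).
THRESHOLDS = [
    Threshold("functional_test_pass_rate", 1.00),
    Threshold("cyber_scan_score", 0.95),
    Threshold("code_coverage", 0.80),
]

def gate(results: dict[str, float]) -> tuple[bool, list[str]]:
    """Return (approved, failures) for one candidate build."""
    failures = [
        f"{t.metric}: {results.get(t.metric, 0.0):.2f} < {t.minimum:.2f}"
        for t in THRESHOLDS
        if results.get(t.metric, 0.0) < t.minimum
    ]
    return (not failures, failures)

# One automated run: coverage misses its threshold, so the build is blocked
# without any human having to grade the individual test outputs.
approved, failures = gate({
    "functional_test_pass_rate": 1.00,
    "cyber_scan_score": 0.97,
    "code_coverage": 0.74,
})
print(approved, failures)
```

The human role moves from grading outputs to owning the `THRESHOLDS` list, which is exactly where the rigor belongs.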

Dominic Bowen: Oh, that’s great. And you’re talking about promise. We’ve talked a lot about the risks today, but the other side of risk, of course, is opportunity. I’ve heard you talk about brilliant machines, ones that are autonomous, adaptive and connected.

When you think about these brilliant machines, what are you most excited about? What are the opportunities that get you out of bed every morning?

Zac Staples: I’m such a geek on this. The thing that gets me excited every day is our newest project, which is called Tempest OS.

And, at the end of the day, we believe that a new operating system is required for defense use cases. Right now, almost all military information systems run on either a general-purpose enterprise Linux distribution or maybe Windows, but mostly some enterprise Linux distribution, right?

Those are general-purpose operating systems, built for everybody in the world to make everything from a Ring doorbell to a scaled server in an on-prem environment. They have every driver in them, from Xbox controllers to 300 printer drivers, so those general-purpose operating systems are designed to connect almost everything that’s been invented in the digital space to the chip, so that it just works when you plug it in. Well, that’s just a bad design philosophy. I’ve lately been saying, like, Linus Torvalds’ after-school project that became the world’s after-school project should not be the foundation of freedom.

You know what I mean? And so we need to think about that. When you think about cybersecurity vulnerabilities and the lack of interoperability, it is time for a defense-purposed operating system that is built to the task.

We’re building that. I’m so excited: on June 30th we committed version 0.00 to our repo, and we’ll be rolling out demos and examples of this over the next few years. We believe that, just like iOS enabled an ecosystem of devices to collaborate and communicate with each other to support individual use cases, there’s a ton of bespoke hardware out there, some of which is gonna be built by AUKUS, some of which is unmanned systems by others. So you have a lot of bespoke hardware that lacks a common operating system to ensure the secure exchange of data and platform services. I get excited about this geeky infrastructure thing while everybody else is out building really cool, super-fast unmanned systems.

Dominic Bowen: I’ve always said, whether it’s in the military, the corporate sector, or other government agencies, I’m always grateful for people who geek out on policy, on tech, on things that I’m not good at, so that I don’t have to look at them. Someone else just goes, “I’ve looked at that, I’ve considered it,” and I can go, “Great, awesome,” and pick up where my interests are. So I think that’s great, really good, that you’re interested in that.

Zac Staples: Lemme give you a couple more examples that I think bring this home. Say you get an enterprise Linux distribution, and that’s gonna be the core of this thing you’re gonna build.

And it doesn’t matter if this thing is a vision-processing algorithm looking through a thousand security cameras or the Linux module you’re gonna run in your new drone. You get that operating system, and everywhere in the West there’s a whole list of security technical implementation guides that say, well, you gotta take that and turn this off, turn that down, get rid of this.

And so it basically takes five IT security professionals just to go turn all those knobs and flip all those switches. It is ridiculous, when you think about how important both reliable data exchange and cybersecurity are to the defense mission set, that there isn’t an operating system that already has all of that done out of the box. So then you take that and you add onto it

what I would say are very, super important platform services. So just to geek out a little on this: very few people hire a database administrator, buy a license to a database, and have them build it on the computer at their office. In the commercial space, you go to Azure, AWS, Google Cloud, or any one of the hyperscale cloud providers, and you just pay a small usage fee to log into a database that just works.

Then they manage all of the infrastructure behind that; that’s a platform service available on those clouds. Now you move over to the tactical side, and there’s all sorts of need for platform services, like the track file: everything has a track number for everything it tracks, but none of the track numbers match up.

Anybody who’s been in the combat systems business knows it’s a nightmare. There should be a track-number database that everybody on the platform just uses. And then other things: we’re going to invent resilient PNT, precision navigation and timing, solutions that aren’t reliant on GPS.
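The track-file idea above, one authoritative numbering service shared by every sensor on the platform instead of each system keeping its own scheme, can be sketched as below. This is a hypothetical illustration of the platform-service pattern, not Tempest OS code; the class and field names are assumptions.

```python
# Hypothetical sketch of a shared track-file platform service: every
# sensor registers contacts against one authoritative track-number
# store, so all consumers refer to the same track by the same number.

import itertools

class TrackStore:
    """Single source of truth for track numbers on a platform."""

    def __init__(self) -> None:
        self._next_id = itertools.count(1)   # monotonically increasing track numbers
        self._tracks: dict[int, dict] = {}

    def register(self, sensor: str, kind: str, position: tuple[float, float]) -> int:
        """Create a new track and return its platform-wide number."""
        track_id = next(self._next_id)
        self._tracks[track_id] = {"sensor": sensor, "kind": kind, "position": position}
        return track_id

    def get(self, track_id: int) -> dict:
        """Look up a track by its platform-wide number."""
        return self._tracks[track_id]

# Two different sensors report contacts; both draw numbers from the same
# store, so a downstream system can reference either track unambiguously.
store = TrackStore()
radar_track = store.register("radar", "air", (59.33, 18.07))
esm_track = store.register("esm", "surface", (59.41, 18.22))
print(radar_track, esm_track)  # distinct numbers from one shared sequence
```

The same pattern generalizes to the other platform services mentioned, such as a time-and-position service that every consumer queries instead of talking to GPS directly.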

You know, there’s some out there and a lot more coming. Everything on the platform should just be able to ask, what time is it? Where am I? And get a solid answer that’s independent of GPS. And so what we end up with is a modernization pathway. Right now, if you add something new to, let’s say, the Joint Strike Fighter that so many of our nations share, man, it’s two or three years before all of the point-to-point integrations of that new capability are certified to work with all the stuff that’s already there and you get to go fly it.

With a platform-based approach, it’s like adding an app to your phone. We’ve already managed all the integrations, so when something new and creative comes out, it doesn’t matter who makes it; as long as it’s vetted and available on the platform, the team that needs it can download it and run it. And so we’re super excited about building Tempest OS as the infrastructure that enables that. We’ve got several large partners that have expressed interest, and as our projects come together over the next few years, I’d love to come back and tell you about some of the successes.

Dominic Bowen: Yeah. Fantastic. No, that’d be a great conversation.

And if we can take a slight pivot, Zac. All nine nuclear-armed states are actively advancing and improving their nuclear weapon systems. If we look at China as an example, they’re undergoing the fastest expansion of any nuclear power, adding about a hundred warheads every year since 2023.

By this year, I believe they’ve reached over 600 warheads, and they’re building new intercontinental ballistic missile silos in multiple regions of China. They’re aiming for parity in missile numbers with both the US and Russia by the end of this decade. But similarly, Russia’s not sitting still. They updated their nuclear doctrine last year and lowered their threshold for nuclear use in response to what they call a critical threat to their sovereignty, which includes mass drone attacks. Now, during the Cold War, nuclear deterrence depended on predictability, on second-strike survivability, on rigid command and control chains.

And I’m wondering: AI-enabled autonomy, by contrast, introduces systems that are adaptive, decentralized, and capable of potentially acting without human intervention. This could be a strategic paradigm shift when it comes to nuclear doctrine.

What are some of the lessons from the Cold War and from nuclear deterrence that we should be considering as we develop and deploy AI, so that it serves us well in maintaining peace and deterrence in 2025 and beyond?

Zac Staples: Yeah, I mean, you and I probably have this in common, Dominic: anybody who’s been to war a lot knows it sucks for both sides.

Even if you win, war is just bad, and you know that as much as I do. I like deterrence. I think there may have been hawkish NATO generals who would have wanted to start a war with Russia if they thought it was winnable, and the vice versa is probably true.

If Russia during the Cold War had thought nuclear war was winnable, they may have started it. And so mutually assured destruction as a strategy was effective. So I think AI decision-making about a first strike is a horrible idea. There is no decision in the world that should be more accountable to human beings than the decision to use that destructive force, the tragedy of which AI will never understand.

At the same time, there’s the ability to ensure second-strike survivability, to mutually assure destruction. AI is still too new today to use in any military command and control system, but that doesn’t mean there isn’t a role AI could play, at some distant point in the future, in ensuring and reinforcing the same idea that prevented nuclear war in the Cold War, which is mutually assured destruction, so that humanity never has to endure that absolute tragedy.

Dominic Bowen: Yeah. Thanks for unpacking that, Zac. And I can certainly attest, and listeners to the podcast will have heard me say this before: during the first Gulf War, and I giggle when I say this, I was a little boy, and I remember watching it on TV, seeing CNN, and just drawing the airplanes and drawing the missiles and being so excited by the war.

And I remember at one point during that conflict, I thought I realized, and I got really sad, that there weren’t gonna be any wars for me to fight when I got older, and my dreams of becoming an Air Force pilot or a Special Forces captain were completely destroyed. I was really as sad as a primary-school-age boy can be.

But I was really sad that there wasn’t going to be any war for me to fight. And then, little did I realize, I’d have a career bouncing through nearly every major conflict zone over the last 25 years.

Zac Staples: Right, after the peace dividend. After the fall of the Berlin Wall, I thought, man, 20 years, this is gonna be a long time just sailing around doing port visits. Yeah, that didn’t pan out very well.

Dominic Bowen: Yeah, exactly. And I completely agree: war sucks. I hope my son never has to fight, but I do fear that, with the current world as it is, he may have to fight in some conflict.

Zac Staples: Maybe we can end on this, because it’s the most important point. I really like to think about what demonstrated capability we need in order to deter peer conflict. Peer conflict is a bad idea. Nobody wins. Somebody may be the one forcing the other to sign a treaty, right?

But nobody wins in that scenario. And so my goal in everything we do at Fathom5, and with the people we collaborate with, is always the question we start every session with: what must we do to create a credible deterrent that supports diplomatic efforts to solve things without conflict? Because you and I know better than anybody, soldiers know better than anybody, that nobody actually wins a war.

Dominic Bowen: And Zac, a question I ask all guests is: when you look around the world and see what’s happening, what are the international risks that concern you the most?

Zac Staples: The international risk that concerns me the most is that the countries that invented modern digital technologies, adapted them for commercial use cases, and created a new generation of the largest companies in the world will not adapt them for national security.

And we run the risk of fighting a force that is more adaptive, creative, and enabled, that puts a shorter time on the clock to get to a better decision, because we couldn’t break the pattern of how we build things. So the greatest risk is that we fail to capture digital innovation in a way that guarantees deterrence.

Dominic Bowen: Very interesting. Very interesting. Zach, thanks very much for explaining that and thanks very much for coming on the International Risk Podcast.

Zac Staples: Absolutely, Dominic, my pleasure, man. I can’t wait to listen to this and see if we said anything interesting.

Dominic Bowen: I think we did. I think we did, Zac. Well, that was a great conversation with Zac Staples.

He’s a former US Navy officer and founder of Fathom5. I really appreciated hearing his thoughts on the strategic, ethical, and even institutional challenges of integrating advanced technologies into modern military operations. Please go to our website and subscribe to our newsletter for the latest news in your inbox.

Today’s podcast was produced and coordinated by Elisa Garbil. I’m Dominic Bowen, your host. Thanks very much for listening. We will speak again next week.
