The AI Bet: Huge Investment, Job Cuts, and Uncertain Returns
AI is rapidly becoming a central axis of economic transformation, corporate strategy, and geopolitical risk. What began as experimentation with generative tools has evolved into a full-scale reconfiguration of how firms invest, operate, and compete.
In a recent episode of the International Risk Podcast, host Dominic Bowen spoke with Craig Unsworth, a portfolio Chief Product Officer and advisor working across private equity-backed technology companies. Their discussion explored how AI adoption is reshaping enterprise behaviour.
Investment Without Immediate Return
At the frontier of enterprise technology, the scale of investment is striking. Large firms are committing vast sums to AI infrastructure, particularly data centres and compute capacity, often without clear short-term returns.
As Unsworth describes, these companies are “making a bet on tomorrow.” Rather than optimising existing operations, they are building for a future defined by exponential, not incremental, growth. This forward-looking posture reflects a belief that AI will fundamentally reshape markets, justifying aggressive capital deployment in the present.
Yet this dynamic is not evenly distributed. While large firms can absorb speculative investment, mid-market companies face a more constrained reality. For these firms, AI adoption is producing a different kind of pressure: they are actively experimenting, but often without clear visibility over costs.
However, Unsworth identifies an approaching inflection point. CFOs will increasingly demand predictability and cost discipline. What is currently accepted as innovation spend will soon be subject to the same scrutiny as any other operational expense.
From Experimentation to Operationalisation
A key shift over the past two years has been the transition from experimentation to structured deployment.
In the early phase of generative AI, firms explored use cases opportunistically, testing whether tools could replace discrete tasks or roles. Today, that approach is being replaced by systematic integration. Companies are designing “agent workflows,” where multiple AI systems operate in coordinated sequences, each with defined objectives and measurable returns.
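The idea of a coordinated sequence of agents, each with a defined objective, can be sketched in a few lines. This is an illustrative assumption, not a description of any specific firm's system: the agents here are stand-in functions, where a real deployment would call model APIs, and all names are hypothetical.

```python
from typing import Callable

# An "agent" is anything that takes text in and produces text out.
Agent = Callable[[str], str]

def draft(brief: str) -> str:        # objective: produce a first draft
    return f"draft based on: {brief}"

def review(text: str) -> str:        # objective: flag issues in the draft
    return f"reviewed({text})"

def summarise(text: str) -> str:     # objective: produce the deliverable
    return f"summary of {text}"

def run_workflow(brief: str, agents: list[Agent]) -> str:
    """Pass each agent's output to the next in a fixed sequence,
    so each stage has a defined objective and an inspectable result."""
    result = brief
    for step in agents:
        result = step(result)
    return result

print(run_workflow("Q3 cost report", [draft, review, summarise]))
```

The point of the structure is that each stage is a discrete, measurable unit, which is what allows the cost-discipline described below to be applied per step rather than to an undifferentiated "AI spend."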
One of the more revealing concepts introduced by Unsworth is the idea of “FTEE”—full-time equivalent equivalents. This metric captures the output generated by a combination of human workers and AI systems, compared to what would previously have required human labour alone.
The logic is straightforward: if AI-enabled workflows can deliver the output of multiple employees at a lower cost, the investment is justified. As long as the productivity gains exceed the increase in spend, firms remain in positive territory.
This framework is already influencing hiring decisions. In many cases, companies are not conducting large-scale layoffs, but are instead freezing headcount or avoiding new hires. Growth is being absorbed by systems rather than people.

Investor Discipline and the End of “Vague Spend”
Another important shift is occurring at the level of capital markets. According to Unsworth, investor attitudes toward AI spending are changing more rapidly than company behaviour.
During the early phase of the AI boom, capital was deployed with relatively little scrutiny. Firms were encouraged to experiment, often without clear accountability for returns. That period is ending.
Investors are now demanding specificity: defined use cases, measurable ROI, and clear pathways to value creation. The distinction between innovation and operational expenditure is becoming sharper, forcing companies to justify AI investment in concrete terms.
While Unsworth stops short of describing the current environment as a full-scale bubble, he identifies clear “bubble-like” characteristics. These include extreme compensation packages for AI talent and highly speculative infrastructure investment.
Such signals are particularly visible at the frontier of the market, but their effects are felt downstream. Smaller firms, lacking the financial resilience of large tech companies, are often the first to adjust behaviour in response to perceived excess.
This creates a two-speed ecosystem: one defined by abundance and experimentation, the other by constraint and discipline.

A Fragmented Global Landscape
AI development is also increasingly shaped by geopolitical divergence. Regulatory approaches across the United States, Europe, and Asia are evolving in different directions, particularly in areas such as data privacy, model governance, and national security.
At the same time, the competitive landscape remains heavily concentrated. The dominant firms are overwhelmingly American, with China pursuing a more distinct, state-influenced strategy. Europe, by contrast, has yet to produce a comparable leader.
This asymmetry raises strategic questions about technological sovereignty, market access, and long-term competitiveness.
The Risk Landscape Expands
Perhaps the most striking insight from Unsworth is the scale at which AI is expanding corporate risk registers. What were once manageable lists of operational risks have multiplied dramatically, reflecting the complexity introduced by AI systems. Four categories stand out.
First, the erosion of competitive “moats.” In an AI-enabled environment, products and services can be replicated far more quickly, reducing the durability of competitive advantage.
Second, talent risk. Highly AI-literate employees are both critical and mobile, creating retention challenges for firms that depend on them.
Third, infrastructure risk. The rapid expansion of data centres raises questions about energy consumption, water usage, and geographic vulnerability, particularly in regions exposed to climate-related disruptions.
Finally, there are emerging societal risks, including dependency on AI systems and the potential for behavioural or cognitive effects analogous to social media addiction.
Beyond the Firm
The implications of AI extend well beyond firm-level efficiency. As Unsworth notes, the technology has the potential to reshape labour markets, tax systems, and even the structure of the state.
If large segments of high-income professions, such as law or finance, are partially automated, the impact on tax revenues could be significant. This raises difficult questions about how governments fund public services in an AI-driven economy.
Proposals such as universal basic income are likely to re-enter policy debates, but implementation remains politically and economically complex. Moreover, the global nature of AI development complicates national-level responses, as firms can relocate to more favourable regulatory environments.
The current moment is best understood as transitional. The direction of travel is clear: toward greater automation, deeper integration, and more complex risk. The endpoint, however, remains uncertain.
