The largest American technology companies have collectively committed up to $725 billion in capital expenditures this year, nearly all of it aimed at AI infrastructure, and the sheer scale of that number is beginning to change how investors read an earnings beat.
There was a period, not long ago, when AI spending was framed as an investment in future optionality. You built the capability, the revenue would follow, and the market would reward the vision. That framing is becoming harder to sustain. According to Bloomberg’s analysis of the capital expenditure commitments now on record from Amazon, Alphabet, Microsoft, Meta, Oracle, and their peers, the combined AI infrastructure spend for 2025 approaches three quarters of a trillion dollars. This is not a side bet on a promising technology. It is the primary capital allocation decision for some of the most profitable companies ever built, and the financial logic that makes it work is getting more complicated by the quarter.
The pressure is structural. Each of these companies is simultaneously a customer of AI infrastructure and a provider of it, and the competitive dynamics require them to build before demand is fully visible. Amazon cannot afford to let Microsoft Azure pull ahead on GPU availability for enterprise customers. Alphabet cannot let Amazon Web Services own the AI inference market while Google’s own models are the underlying product. Microsoft is locked into a capacity commitment to OpenAI that requires physical data center expansion regardless of near-term utilization rates. The result is a race in which the penalty for slowing down feels more immediate than the penalty for overspending, a dynamic that historically has not ended in efficient capital allocation.
What has changed in recent quarters is investor patience. Through 2023 and into 2024, markets largely accepted the premise that AI infrastructure spend would translate into durable revenue at margins worth waiting for. That tolerance is showing signs of strain. Meta’s first quarter 2025 earnings illustrated the tension clearly. Revenue growth was strong, advertising demand remained robust, and Llama’s open-source momentum continued to build developer mindshare. The stock still dropped after the call because the company raised its full-year capital expenditure guidance, and analysts spent much of the Q&A asking variations of the same question: at what point does the spend curve intersect with the revenue curve in a way that justifies the multiple?
Alphabet faced a version of the same dynamic. Google’s core search business continues to generate cash at a scale few businesses in history have matched, and its cloud division is growing. But the company is spending at a rate that compresses the free cash flow story investors have relied on for years, and the returns on that spending are, by the company’s own admission, still several years from being fully realized. Microsoft, which has perhaps the clearest near-term AI revenue story through Copilot enterprise licensing, is nonetheless running data center construction programs that require capital commitments extending well into the next decade.
Why this looks less like software and more like steel
The original case for AI as an investment category rested heavily on software economics. Write the model once, distribute it infinitely, watch margins expand as scale increases. The infrastructure reality of 2025 looks considerably more like an industrial buildout. GPUs depreciate. Data centers require land, power, cooling, and constant hardware refresh cycles. The energy demands of large-scale AI inference are material enough that several of these companies are now signing long-term power purchase agreements and, in some cases, making direct investments in energy generation. Oracle has been particularly explicit about its data center expansion, announcing campus-scale builds that require dedicated power infrastructure. This is not software leverage. It is capital intensity of a kind the tech sector has not seen since the fiber optic overbuild of the early 2000s, a comparison that makes some analysts visibly uncomfortable.
That comparison deserves some nuance. The fiber overbuild produced genuine infrastructure that eventually supported enormous value creation, even after many of the companies that built it went bankrupt. The AI infrastructure being constructed now is being built by companies with balance sheets that are, in most cases, strong enough to survive a demand shortfall that would have destroyed any previous generation of tech infrastructure builder. Amazon’s AWS cash flows, Google’s advertising engine, and Microsoft’s enterprise software annuity are each capable of subsidizing years of infrastructure investment without threatening the parent company’s viability. The question is not survival. It is return on capital, and on that question the math is genuinely uncertain.
For startups and smaller operators watching this buildout, the dynamic creates a specific kind of strategic problem. The cost of access to frontier AI compute is set by companies for whom that compute is also a core competitive asset. Pricing will reflect that dual role. The hyperscalers have every incentive to make AI capable enough to attract enterprise customers and expensive enough that building equivalent capability independently remains out of reach for all but the best-funded challengers. That is not a conspiracy. It is just the logical outcome of letting the infrastructure layer and the application layer be owned by the same companies.
The spending commitments for this year are already locked. The more important number to watch is what the guidance looks like twelve months from now, when the companies currently justifying $725 billion in collective capex will need to show that the revenue it was meant to generate is actually arriving on schedule. If it is, this will look like the most rational industrial investment in a generation. If it is not, the conversation about AI economics will get significantly harder, and significantly louder, very quickly.