Anthropic, the AI company led by Dario Amodei, has recently signed a $1.8 billion deal with Akamai Technologies to secure additional computing capacity for its AI models, particularly the Claude platform.
The seven-year agreement is the largest contract in Akamai's history and comes at a time when demand for AI infrastructure is growing at a pace that even the major tech companies are struggling to keep up with.
Claude’s growth pushes Anthropic to strike a deal with Akamai to obtain more computing power in the AI space
Behind the deal lies not only Anthropic's growth but also a broader shift in the cloud market as a whole.
In recent months, the artificial intelligence sector has run into an increasingly evident structural problem: building advanced AI models requires enormous amounts of computing power, specialized GPUs, and distributed infrastructure capable of sustaining continuous workloads.
Not by chance, during the Code with Claude conference in San Francisco, Amodei said Anthropic expects annualized revenue and usage of its services to grow by 80% in the first quarter of 2026.
A significant part of that expansion would be tied to the use of Claude for coding, automation, and AI-assisted software development.
It is precisely this growth that is pushing AI companies to seek new sources of computing capacity well beyond the traditional hyperscalers.
Anthropic is not stopping at the Akamai agreement: in recent months it has also struck partnerships with Google Cloud, Amazon Web Services, CoreWeave, and even Elon Musk's SpaceX.
The message is clear: the real battle in artificial intelligence is no longer only about language models, but about access to the infrastructure that makes them possible.
Akamai changes identity: from internet delivery to AI infrastructure
For Akamai, too, the agreement with Anthropic represents much more than a simple commercial partnership.
The company has historically been known mainly for its content delivery and cybersecurity services, but the explosion of artificial intelligence is opening up new strategic opportunities.
With a global network of more than 4,000 points of presence across over 130 countries, Akamai has a decentralized infrastructure well suited to the needs of modern AI workloads.
This is precisely one of the most interesting aspects of the deal. In recent years the cloud market has been dominated by a few centralized hyperscalers, namely AWS, Google Cloud, and Microsoft Azure.
Artificial intelligence, however, is putting growing pressure on available resources, and many companies are looking for more distributed and flexible alternatives.
In this context, Anthropic seems to have understood that relying exclusively on the major traditional providers may not be sufficient in the long term.
Not surprisingly, investors reacted enthusiastically to the news.
After the announcement, Akamai's shares rose by about 28%, a clear sign of how strongly the market views artificial intelligence as a transformative opportunity for many tech companies.
Analysts estimate the contract could account for roughly 6% of Akamai's annual revenue once fully operational, with the first economic impact expected by the end of 2026.
This evolution also shows how the AI sector is reshaping the technological value chain. Not only are the companies that develop language models benefiting, but also all the players able to provide infrastructure, energy, data centers and connectivity.
The problem, however, is that this race is becoming increasingly expensive. Training and maintaining advanced AI models requires continuous multi-billion-dollar investment, concentrating the sector in the hands of a few large companies with access to enormous capital.
The real challenge of AI is infrastructure, not just software
The agreement between Anthropic and Akamai highlights a reality often underestimated in the public debate on artificial intelligence: the main constraint on modern AI is no longer just algorithmic, but infrastructural.
In recent years the sector has focused mainly on competition between chatbots, language models and advanced features.
Today, however, a second problem is emerging, less visible but perhaps even more important: access to computing capacity. AI companies consume enormous amounts of energy, GPUs, and data bandwidth.
Each new generation of models requires more resources than the previous one, creating an investment spiral that risks increasingly favoring only the groups with greater financial resources.
Anthropic is not alone in this situation. OpenAI, Google, and Meta are also investing billions to secure enough chips, data centers, and cloud infrastructure to support the growth of generative AI.
This scenario also raises critical questions about the future of the sector. If artificial intelligence increasingly depends on gigantic infrastructure investments, there is a risk that the market will become progressively less open and more centralized.
Moreover, the pressure on computing resources could also affect energy costs and environmental sustainability.
The expansion of AI requires enormous amounts of electricity and advanced cooling systems, turning data centers into increasingly important strategic assets.
