Crypto exchange Coinbase recently suffered a multi-hour service disruption that affected trading, exchange access and balance updates. CEO Brian Armstrong addressed the incident in a new tweet.
On May 7, 2026, at 23:50 UTC, the Coinbase monitoring team detected cascading quote failures from internal services. The customer-facing impact spanned spot trading as well as the Prime, International and derivatives exchanges.
In his tweet, the Coinbase CEO stated that an outage like this is never acceptable. The root cause, according to him, was a room in an AWS data center overheating after multiple chillers failed. He stated that Coinbase designed its services to be resilient to downtime in any one AWS Availability Zone (AZ), and that most of its systems performed that way, but not all.
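To make the single-AZ resilience claim concrete, here is a minimal, purely illustrative sketch of the general pattern: replicas of a service run in several Availability Zones, and a router stops sending traffic to any zone that fails health checks. Coinbase has not published its failover code, and the zone names and routing logic below are assumptions for illustration only.

```python
# Illustrative sketch only, not Coinbase's implementation.
from dataclasses import dataclass

@dataclass
class Replica:
    az: str          # hypothetical zone name, e.g. "us-east-1a"
    healthy: bool    # result of the latest health check

class AzAwareRouter:
    def __init__(self, replicas):
        self.replicas = replicas

    def pick(self):
        # Only consider replicas in zones that are still passing health checks.
        candidates = [r for r in self.replicas if r.healthy]
        if not candidates:
            raise RuntimeError("no healthy Availability Zone left")
        # Take the first healthy replica; a real system would also balance load,
        # respect latency and keep co-located clients pinned where possible.
        return candidates[0]

replicas = [
    Replica("us-east-1a", healthy=False),  # the zone with the overheated room
    Replica("us-east-1b", healthy=True),
    Replica("us-east-1c", healthy=True),
]

print(AzAwareRouter(replicas).pick().az)  # traffic shifts to a surviving zone
```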
The centralized exchange did not behave as expected during the broader AWS outage, leading to a service disruption.
Armstrong noted that exchanges have unique architectures that optimize for latency and client co-location. While it is possible to make an exchange resilient to AWS Availability Zone (AZ) failures, doing so can introduce undesirable latency and break customer co-location.
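A rough back-of-the-envelope sketch shows why that trade-off matters. The figures below are illustrative assumptions, not Coinbase or AWS measurements: round trips inside one AZ are typically a fraction of a millisecond, while crossing AZs usually adds on the order of a millisecond per round trip.

```python
# Assumed, illustrative numbers in microseconds.
INTRA_AZ_RTT_US = 100      # round trip within one AZ
CROSS_AZ_RTT_US = 1_000    # round trip between AZs
HOPS_PER_ORDER = 4         # assumed internal hops to match and confirm an order

single_az = HOPS_PER_ORDER * INTRA_AZ_RTT_US
cross_az = HOPS_PER_ORDER * CROSS_AZ_RTT_US

print(f"single-AZ order path: ~{single_az} us")   # ~400 us
print(f"cross-AZ order path:  ~{cross_az} us")    # ~4000 us
# A co-located client keeps the single-AZ figure; spreading the order path
# across zones for resilience pushes every order toward the cross-AZ figure.
```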
The Coinbase CEO outlined the next steps in the wake of the incident, which include revisiting those trade-offs to ensure the exchange gives users the best possible venue to trade. He also noted that the duration of an outage should be reduced considerably when a move to another AZ is needed.
Working on next steps
In a separate tweet, Coinbase CEO Brian Armstrong responded to the initial technical summary of the outage shared by Rob Witoff, Coinbase's Head of Platform.
While thanking the teams that worked to resolve the issue, Armstrong added that Coinbase was already working on the next steps.
The incident blocked trading across the retail, advanced and institutional exchanges. During the lag, customers saw delayed balance streams, which resolved automatically once replication caught up. No data was lost as a result of the incident.
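The "delayed but not lost" behavior is typical of replicated, append-only designs. Below is a minimal sketch, under assumed names, of why a stalled balance stream can catch up without dropping anything: updates are appended to a durable log, and the customer-facing stream replays from its last delivered offset once replication resumes. This is an illustration of the general technique, not Coinbase's actual system.

```python
# Illustrative sketch of catch-up from a durable, append-only log.
class DurableLog:
    def __init__(self):
        self.events = []                 # append-only list of balance deltas

    def append(self, delta):
        self.events.append(delta)

class BalanceStream:
    def __init__(self, log):
        self.log = log
        self.offset = 0                  # last offset delivered to the customer
        self.balance = 0

    def catch_up(self):
        # Replay everything written since the last delivered offset.
        while self.offset < len(self.log.events):
            self.balance += self.log.events[self.offset]
            self.offset += 1
        return self.balance

log = DurableLog()
stream = BalanceStream(log)

for delta in (100, -30, 45):             # trades recorded while the stream lags
    log.append(delta)

print(stream.catch_up())                  # 115: delivery was delayed, nothing lost
```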

