Every Prompt Drinks
from the River
On the day seven states missed their deadline to save the Colorado River, a conversation with an AI revealed the water crisis hiding inside every query.
On February 14, 2026, seven Western states missed a federal deadline to agree on how to share the shrinking waters of the Colorado River. The negotiations had been deadlocked for more than two years. The Upper Basin states — Colorado, Utah, Wyoming, and New Mexico — refused mandatory cuts. The Lower Basin — Arizona, California, and Nevada — said they wouldn’t accept a deal without shared sacrifice. The Bureau of Reclamation’s February snowpack report showed the Upper Colorado Basin at 58 percent of median. Lake Powell was approaching all-time lows. If levels dropped much further, hydropower turbines inside Glen Canyon Dam could be forced offline.
The coverage that day focused on the familiar players: agricultural interests, municipal water districts, tribal nations with senior water rights, and state negotiators pointing fingers across the basin divide. What none of the reporting mentioned was one of the fastest-growing water consumers in the region — and perhaps the single greatest emerging threat to an already overallocated system.
AI data centers.
Across the Southwest, the same states fighting over Colorado River water are racing to attract data center investment. Arizona, Nevada, Colorado — all Colorado River Basin states — are among the fastest-growing data center markets in the country. Meta’s facility in Goodyear, west of Phoenix, consumes roughly 56 million gallons of potable water annually, equivalent to about 670 households. Google’s data centers average roughly 550,000 gallons daily per site. The Tahoe Reno Industrial Center in Nevada is transforming into one of the world’s largest data center markets. OpenAI’s Stargate project in Abilene, Texas, will be a 1.2-gigawatt campus.
A 2024 Lawrence Berkeley National Laboratory report found that U.S. data centers consumed 17 billion gallons of water in 2023 — enough for 150,000 homes annually. That demand is projected to double or even quadruple within the next few years.
And yet, in the Colorado River negotiations, data center water consumption is still treated as a local permitting issue, not a basin-wide demand driver. The 1922 Compact that governs the river — drafted when the region’s population was a fraction of today’s, and AI didn’t exist — has no mechanism to account for an industry that emerged decades later.
The 1922 Colorado River Compact apportioned 15 million acre-feet of water per year among seven states. The river has never reliably produced that much. Now, a new industrial consumer is drawing from the same overallocated system — and the governance framework can’t see it.
To understand how AI connects to the Colorado River, follow a single prompt from a keyboard in San Diego to a data center and back. The chain of infrastructure reveals a water footprint that most users never see — and that no subscription fee accounts for.
What Happens When You Hit Enter
The chain of energy, water, and cost from a keyboard in San Diego to a Claude response — and back to the Colorado River.
The critical insight is in Step 4: the power plant. Most discussions of data center water use focus on the evaporative cooling towers on-site — the facilities where water is sprayed across hot surfaces and lost to the atmosphere. That’s significant, but it’s only a fraction of the total.
Natural gas power plants, which supply 40–60 percent of electricity in many U.S. grid regions, operate on a steam cycle that requires its own massive water cooling infrastructure. For every kilowatt-hour the data center consumes, the power plant may withdraw 1.5–4 liters of water for cooling, much of it lost to evaporation. According to research from the Colorado School of Mines, when a data center’s local grid water intensity is factored in, total water usage jumps from roughly 0.7 liters per kilowatt-hour on-site to approximately 5 liters per kilowatt-hour — meaning the off-site water consumption can dwarf what happens inside the building.
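For readers who want to check the arithmetic, the implied off-site share falls out of the two figures above. This is a minimal sketch using only the numbers quoted in the text; the exact split varies by site and grid mix.

```python
# Order-of-magnitude water intensity for AI compute, using the figures
# quoted above. The split into on-site vs. off-site (power plant) use
# follows the Colorado School of Mines estimate cited in the text.

onsite_l_per_kwh = 0.7   # evaporative cooling at the data center
total_l_per_kwh = 5.0    # including the local grid's power-plant cooling

offsite_l_per_kwh = total_l_per_kwh - onsite_l_per_kwh
offsite_share = offsite_l_per_kwh / total_l_per_kwh

print(f"off-site water use: {offsite_l_per_kwh:.1f} L/kWh")
print(f"share of total footprint: {offsite_share:.0%}")
# → off-site water use: 4.3 L/kWh
# → share of total footprint: 86%
```

In other words, under these estimates roughly six of every seven liters attributable to a query are consumed somewhere other than the data center itself.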
This creates a trap. If a data center switches from evaporative cooling to a closed-loop or dry cooling system — which eliminates on-site water evaporation — that system uses more electricity to achieve the same thermal result. If that additional electricity comes from natural gas generation, the water consumption simply shifts from the data center’s address to the power plant’s address, often in the same watershed. The optics improve; the actual draw on the river system may not.
The estimated 1–5+ liters of water per AI query is overwhelmingly consumptive use — meaning the water doesn’t return to the source. At the data center, evaporative cooling works like a massive swamp cooler: water absorbs heat and evaporates into the atmosphere. Companies estimate that 45–60 percent of withdrawn water is consumed this way. The remainder is discharged as mineral-concentrated “blowdown” water that often requires treatment before it can re-enter the water system.
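The withdrawal-versus-consumption split can be sketched directly. The 45–60 percent consumption range is the company estimate quoted above; the 10-liter withdrawal is an arbitrary illustrative volume, not a measured per-query figure.

```python
# Split a cooling-water withdrawal into evaporated (consumed) water and
# mineral-concentrated blowdown, using the 45-60% consumption range
# quoted above. The 10 L withdrawal is an arbitrary illustrative volume.

withdrawal_l = 10.0

for consumed_fraction in (0.45, 0.60):
    evaporated = withdrawal_l * consumed_fraction   # lost to the atmosphere
    blowdown = withdrawal_l - evaporated            # returned, needs treatment
    print(f"{consumed_fraction:.0%} consumed: "
          f"{evaporated:.1f} L evaporated, {blowdown:.1f} L blowdown")
# → 45% consumed: 4.5 L evaporated, 5.5 L blowdown
# → 60% consumed: 6.0 L evaporated, 4.0 L blowdown
```

Only the blowdown fraction has any chance of returning to the watershed, and only after treatment; the evaporated fraction is gone.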
At the power plant, the same physics applies at larger scale. Cooling towers evaporate water to condense the steam that has already passed through the turbines. That evaporated water enters the atmosphere and may precipitate thousands of miles away. From the perspective of the Colorado River Basin, it’s gone.
In Western water law, the distinction between “withdrawal” and “consumption” is everything. The Colorado River crisis is fundamentally about consumption exceeding supply. When a data center in Phoenix evaporates water through its cooling towers, that water is subtracted from the finite flow that 40 million people and 5.5 million agricultural acres depend on. It doesn’t come back.
Most of the water consumed by AI is evaporated — removed from the local watershed permanently. It enters the atmosphere and may precipitate over the Pacific or the Midwest. From the Colorado River’s perspective, every gallon evaporated through a cooling tower in Phoenix is a gallon that never reaches Lake Mead.
There is a reasonable instinct, upon learning this, to ask whether one can simply stop using AI. The answer, increasingly, is no.
Google began embedding AI-generated responses at the top of search results by default in 2024. Every search query now potentially triggers an inference call whether the user requests it or not. Apple Intelligence is built into iOS. Samsung’s Galaxy AI runs on-device and in the cloud. Smart TVs, fitness watches, vehicle navigation systems, home assistants — all increasingly run inference workloads that contribute to the same energy and water demand.
The irony deepens: water utilities themselves are adopting AI for leak detection, demand forecasting, and distribution optimization. The same overtaxed water systems struggling with scarcity are deploying AI tools to manage their diminishing supplies — tools that require compute, which requires power, which requires water. It is a recursive loop with no clean exit.
But the deeper problem isn’t individual choice. Even if one person disconnected entirely, the aggregate demand wouldn’t change meaningfully. The infrastructure runs 24/7 regardless. Growth is driven by enterprise and government adoption far more than by individual consumers. The question was never whether any single user could escape AI, but whether the communities bearing the water and energy costs had any say in the system being built around them.
A Claude Max subscription costs $100 per month. At API pricing, Claude Opus 4.6 runs $5 per million input tokens and $25 per million output tokens. Anthropic, which reported annualized revenue of approximately $7 billion by late 2025, has invested $8 billion in AWS infrastructure through Project Rainier and announced a separate $50 billion data center buildout in Texas and New York. The company has raised over $15 billion from investors.
None of this pricing accounts for the water. A heavy daily user sending 50 prompts per day may consume 50–250 liters of freshwater through the combined direct and indirect chain — roughly 400–2,000 gallons per month. That water comes from rivers and aquifers that also supply communities, agriculture, and ecosystems. Its cost is externalized: absorbed by communities, borne by the environment, and invisible to the user.
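The monthly figure follows from the per-query range. This sketch just multiplies it out, assuming 50 prompts a day, a 30-day month, the 1–5 liter per-prompt range cited earlier, and 3.785 liters per U.S. gallon.

```python
# Back-of-envelope monthly water footprint for a heavy user, using the
# per-query range quoted above (1-5 L, direct plus indirect).

PROMPTS_PER_DAY = 50
DAYS_PER_MONTH = 30
LITERS_PER_GALLON = 3.785

for liters_per_prompt in (1.0, 5.0):
    daily_l = PROMPTS_PER_DAY * liters_per_prompt
    monthly_gal = daily_l * DAYS_PER_MONTH / LITERS_PER_GALLON
    print(f"{liters_per_prompt} L/prompt → {daily_l:.0f} L/day, "
          f"~{monthly_gal:,.0f} gal/month")
# → 1.0 L/prompt → 50 L/day, ~396 gal/month
# → 5.0 L/prompt → 250 L/day, ~1,982 gal/month
```

That range matches the roughly 400–2,000 gallons per month cited above — on the high end, comparable to an average household’s entire monthly indoor water use.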
The communities near data center sites — rural Indiana around Project Rainier, the Phoenix suburbs near Meta’s campus, the Navajo Nation downstream on the Colorado — had essentially no say in siting decisions for infrastructure that draws from their shared water systems. The economic benefits — jobs, tax revenue — are real but finite. The water costs are ongoing and cumulative.
The Colorado River negotiations that collapsed on February 14 are, in part, about who bears the cost of demand that every AI prompt contributes to. The governance framework that manages the river has no mechanism to account for an industry that barely existed a decade ago.
There are promising technological responses. Microsoft has developed a closed-loop, zero-evaporation cooling design that eliminates on-site evaporative water loss, reducing annual water use by more than 125 million liters per facility. Direct-to-chip liquid cooling can reduce water consumption by 20–90 percent in water-scarce regions. Startups are developing microfluidic cooling etched directly into silicon. Some companies are experimenting with subsea data centers and geothermal cooling.
But a majority of AI-specialized data centers still use evaporation-based cooling systems either around the clock or during hot weather, and that proportion is expected to increase through 2028. The technology to reduce water consumption exists; the economic incentives to deploy it at scale are still developing. Water remains cheap relative to land and energy in most siting decisions, which means it’s the last variable optimized.
Meanwhile, data center electricity demand across Western states is growing at 4.5 percent annually. The grid isn’t decarbonizing fast enough to absorb the new load without additional natural gas generation. And the river that supplies both the cities and the power plants that supply the data centers is entering what may be its worst year on record.
The Colorado River crisis isn’t a natural disaster. It’s manufactured scarcity — an overallocated system compounded by a megadrought, accelerated by new industrial demand that no existing governance framework can address. The February 14 deadline came and went with no agreement. If no deal materializes by summer, the outcome will likely be years of Supreme Court litigation, during which the river will continue to decline.
Every prompt drinks from the river. The question is whether anyone is counting.
Per-query energy and water estimates compiled from: Epoch AI (2025), IEEE Spectrum (Ren & Luers, 2025), University of California Riverside / University of Texas Arlington water footprint research, Washington Post / UCR analysis, Brookings Institution (2025), Lawrence Berkeley National Laboratory 2024 Data Center Report, Environmental Law Institute, Colorado School of Mines / Payne Institute, World Economic Forum (2025), Anthropic and AWS public disclosures, ScienceDirect (2025). Colorado River reporting drawn from Deseret News, KJZZ, KUNC, Spectrum News, ABC15, Maven’s Notebook, and the Colorado Sun. All per-query figures are order-of-magnitude estimates reflecting current research consensus; actual values vary by model, prompt length, data center location, cooling technology, and regional grid mix.
This article originated as a real-time conversation with Claude (Anthropic) on February 14, 2026 — the same day the Colorado River deadline was missed. The conversation was restructured and edited for publication. The interactive schematic was generated during the conversation.