Total water consumption in the USA in 2015 was 1218 billion litres per day, of which thermoelectric power used 503 billion litres per day, irrigation used 446 billion litres per day, and 147 billion litres per day went to supply 87% of the US population with potable water13.
Data centres consume water across two main categories: indirectly through electricity generation (traditionally thermoelectric power) and directly through cooling. In 2014, a total of 626 billion litres of water use was attributable to US data centres4. This is a small proportion in the context of such high national figures; however, data centres compete with other users for access to local resources. A medium-sized data centre (15 megawatts (MW)) uses as much water as three average-sized hospitals, or more than two 18-hole golf courses14. Some progress has been made with using recycled and non-potable water, but from the limited figures available15, some data centre operators are drawing more than half of their water from potable sources (Fig. 2). This has been the source of considerable controversy in areas of water stress and highlights the importance of understanding how data centres use water.
Fig. 2: Consumption from potable water was 64% (2017), 65% (2018) and 57% (2019)15.
This section considers these two categories of data centre water consumption.
Water use in electricity generation
Water requirements are measured based on withdrawal or consumption. Consumption refers to water lost (usually through evaporation), whereas water withdrawal refers to water taken from a source such as natural surface water, underground water, reclaimed water or treated potable water, and then later returned to the source16.
Power plants generate heat using fossil fuels such as coal and gas, or nuclear fission, to convert water into steam, which rotates a turbine, thereby generating electricity. Water is a key part of this process, which involves pre-treatment of the source water to remove corrosive contaminants and post-treatment to remove brines. Once heated into steam to rotate the turbine, water is lost through evaporation, discharged as effluent or recirculated; sometimes all three16.
The US average water intensity for electricity generation for 2015 was 2.18 litres per kilowatt hour (L/kWh)17, but fuel and generator technology type have a major impact on cooling water requirements. For example, a dry air cooling system for a natural gas combined cycle generator consumes and withdraws 0.00–0.02 L/kWh, whereas a wet cooling (open recirculating) system for a coal steam turbine consumes 0.53 L/kWh and withdraws 132.5 L/kWh. Efficiency varies significantly, with consumption ranging from 0.00 to 4.4 L/kWh and withdrawal ranging from 0.31 to 533.7 L/kWh depending on the system characteristics16.
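As a rough illustration of how these intensity figures translate into volumes, the sketch below estimates the water consumed and withdrawn for a given electricity demand. The function and table structure are illustrative only; the per-kWh values are those cited above16,17, and real values vary widely with plant and cooling system type.

```python
# Illustrative sketch: estimating water use for a given electricity demand
# from per-kWh intensity figures. Values are taken from the text and vary
# widely with fuel, generator and cooling system type.

# (consumption L/kWh, withdrawal L/kWh); None = not cited above
WATER_INTENSITY = {
    "gas_combined_cycle_dry_air": (0.02, 0.02),     # upper bound of 0.00-0.02
    "coal_steam_wet_recirculating": (0.53, 132.5),
    "us_grid_average_2015": (2.18, None),
}

def water_for_energy(kwh, source):
    """Return (litres consumed, litres withdrawn) for `kwh` of electricity."""
    consumption, withdrawal = WATER_INTENSITY[source]
    withdrawn = kwh * withdrawal if withdrawal is not None else None
    return kwh * consumption, withdrawn

# Example: 1 GWh from a coal steam turbine with wet recirculating cooling
consumed, withdrawn = water_for_energy(1_000_000, "coal_steam_wet_recirculating")
print(f"Consumed: {consumed:,.0f} L; withdrawn: {withdrawn:,.0f} L")
# Consumed: 530,000 L; withdrawn: 132,500,000 L
```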
Hydropower systems also use large volumes of water despite being considered a cleaner source of electricity. Water evaporation from open reservoirs is a major source of losses, particularly in dry regions and where water is not pumped back into the reservoir or passed on to downstream users. The US national average water consumption for hydropower is 16.8 L/kWh, compared to 1.25 L/kWh for thermoelectricity17.
With the majority of generation still from fossil fuels18, the transition to renewables is important for both carbon and water intensity. Only solar and wind energy do not involve water in generation, yet both still consume water in the manufacturing and construction processes9. Estimates suggest that by 2030, moving to wind and solar energy could reduce water withdrawals by 50% in the UK, 25% in the USA, Germany and Australia, and 10% in India19.
In the data centre sector, Google and Microsoft are leading the shift to renewables. Between 2010 and 2018, the number of servers increased 6 times, network traffic increased 10 times and storage capacity increased 25 times, yet energy consumption grew by only 6%6. A major contributor to this has been the migration to cloud computing, estimated as of 2020 to be a $236 billion market20 and responsible for managing 40% of servers4.
Due to their size, the cloud providers have been able to invest in highly efficient operations. Although often criticised as a metric of efficiency21, an indicator of this can be seen in low power usage effectiveness (PUE) ratios. PUE is a measure of how much of the energy input is used by the ICT equipment as opposed to the data centre infrastructure, such as cooling22, defined as follows:
$$\mathrm{PUE}=\frac{\text{Data Centre Total Energy Consumption}}{\text{ICT Equipment Energy Consumption}}\tag{1}$$
PUE is relevant to understanding indirect water consumption because it indicates how efficient a particular facility is at its primary purpose: operating ICT equipment, which includes servers, networking and storage devices. An ideal PUE of 1.0 would mean 100% of the energy goes to powering useful services running on the ICT equipment rather than being wasted on cooling, lighting and power distribution. Water is consumed indirectly through power generation, so more efficient use of that power means more efficient use of water.
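Combining Eq. (1) with a grid water intensity gives a simple way to approximate a facility's indirect water footprint: ICT energy multiplied by PUE gives total facility energy, which multiplied by the grid's L/kWh figure gives litres of water. A minimal sketch, with the inputs chosen purely for illustration:

```python
# Sketch: indirect water consumption estimated via PUE (Eq. (1)) and the
# water intensity of the supplying grid. All inputs here are assumptions.

def indirect_water_litres(ict_kwh, pue, grid_l_per_kwh):
    total_kwh = ict_kwh * pue            # Eq. (1): total = ICT energy x PUE
    return total_kwh * grid_l_per_kwh    # litres consumed upstream

# Example: 10 GWh/year of ICT load, a hyperscale-like PUE of 1.10, and the
# 2015 US average generation intensity of 2.18 L/kWh cited earlier.
print(f"{indirect_water_litres(10_000_000, 1.10, 2.18):,.0f} L/year")
# 23,980,000 L/year
```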
Traditional data centres have reported PUEs falling from 2.23 in 2010 to 1.93 in 20206. In contrast, the largest “hyperscale” cloud providers report PUEs ranging from 1.18 to 1.25. Some report even better performance, such as Google, with a Q2 2020 fleet-wide trailing 12-month PUE of 1.1023.
As data centre efficiency reaches such levels, further gains become more difficult. This has already started to show up in plateauing PUE numbers24, which means efficiency improvements may soon be unable to offset the expected increase in future usage25. As more equipment is deployed, and more data centres are needed to house that equipment, energy demand will increase. If that energy is not sourced from renewables, indirect water consumption will increase.
Power generation source is therefore a key element in understanding data centre water consumption, with PUE an indicator of how efficiently that power is used, but it is just the first category. Direct water use is also important—all that equipment needs cooling, which in some older facilities can consume up to 30% of total data centre energy demand26,27,28.
Water use in data centre cooling
ICT equipment generates heat, and so most devices must have a mechanism to manage their temperature. Drawing cool air over hot metal transfers heat energy to that air, which is then pushed out into the environment. This works because the equipment temperature is usually higher than that of the surrounding air.
The same process occurs in data centres, just at a larger scale. ICT equipment is located within a room or hall, heat is ejected from the equipment via an exhaust and that air is then extracted, cooled and recirculated. Data centre rooms are designed to operate within temperature ranges of 20–22 °C, with a lower bound of 12 °C29. As temperatures increase, equipment failure rates also increase, although not necessarily linearly30.
There are several different mechanisms for data centre cooling27,28, but the general approach involves chillers reducing air temperature by cooling water, typically to 7–10 °C31, which is then used as a heat transfer mechanism. Some data centres use cooling towers, in which external air travels across a wet media so that the water evaporates; fans expel the hot, wet air and the cooled water is recirculated32. Other data centres use adiabatic economisers, in which water sprayed directly into the air flow, or onto a heat exchange surface, cools the air entering the data centre33. With both techniques, the evaporation results in water loss. A small 1 MW data centre using one of these traditional cooling methods can use around 25.5 million litres of water per year32.
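As a sanity check on that figure, annual direct water use can be reconstructed from facility load and an effective water intensity. The ~2.9 L/kWh value below is simply back-calculated from the 25.5 million litres per year figure for a 1 MW facility; it is not a measured industry benchmark.

```python
# Rough sketch: annual cooling water for a facility at constant load.
# The 2.9 L/kWh effective intensity is back-calculated from the cited
# ~25.5 million litres/year for 1 MW, not an independently measured value.

HOURS_PER_YEAR = 8760

def annual_cooling_water_litres(load_mw, litres_per_kwh=2.9):
    kwh_per_year = load_mw * 1000 * HOURS_PER_YEAR  # MW -> kWh over a year
    return kwh_per_year * litres_per_kwh

print(f"{annual_cooling_water_litres(1.0):,.0f} L/year")  # ~25,404,000 L/year
```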
Cooling the water is the main source of energy consumption in these systems. Raising the chiller water temperature from the usual 7–10 °C to 18–20 °C can reduce expenses by 40% because of the smaller temperature difference between the water and the air. Costs also depend on the seasonal ambient temperature of the data centre location: in cooler regions, less mechanical cooling is required and free air cooling can instead draw in cold air from the external environment31. This also means smaller chillers can be used, reducing capital expenditure by up to 30%31. Both Google34 and Microsoft35 have built data centres without chillers, but this is difficult in hot regions36.