How can we reduce the carbon footprint of global computing?
The voracious appetite for energy from the world's computers and communications technology presents a clear threat to the globe's warming climate. That was the blunt assessment from presenters at the intensive two-day Climate Implications of Computing and Communications workshop held on March 3 and 4, hosted by MIT's Climate and Sustainability Consortium (MCSC), MIT-IBM Watson AI Lab, and the Schwarzman College of Computing.
The virtual event featured rich discussions and highlighted opportunities for collaboration among an interdisciplinary group of MIT faculty and researchers and industry leaders across multiple sectors, underscoring the power of academia and industry coming together.
"If we continue with the existing trajectory of compute energy, by 2040, we are supposed to hit the world's energy production capacity. The increase in compute energy and demand has been increasing at a much faster rate than the world energy production capacity increase," said Bilge Yildiz, the Breene M. Kerr Professor in the MIT departments of Nuclear Science and Engineering and Materials Science and Engineering, one of the workshop's 18 presenters. This computing energy projection draws from the Semiconductor Research Corporation's decadal report.

To cite just one example: information and communications technology already accounts for more than 2 percent of global energy demand, on a par with the aviation industry's emissions from fuel.

"We are the very beginning of this data-driven world. We really need to start thinking about this and act now," said presenter Evgeni Gousev, senior director at Qualcomm.

Innovative energy-efficiency options

To that end, the workshop presentations explored a host of energy-efficiency options, including specialized chip design, data center architecture, better algorithms, hardware modifications, and changes in consumer behavior. Industry leaders from AMD, Ericsson, Google, IBM, iRobot, NVIDIA, Qualcomm, Tertill, Texas Instruments, and Verizon outlined their companies' energy-saving programs, while experts from across MIT provided insight into current research that could yield more efficient computing.

Panel topics ranged from "Custom hardware for efficient computing" to "Hardware for new architectures" to "Algorithms for efficient computing," among others.
Visual representation of the conversation during the workshop session entitled “Energy Efficient Systems.”
Image: Haley McDevitt
The goal, said Yildiz, is to improve the energy efficiency associated with computing by more than a million-fold.

"I think part of the answer of how we make computing much more sustainable has to do with specialized architectures that have very high level of utilization," said Darío Gil, IBM senior vice president and director of research, who stressed that solutions should be as "elegant" as possible.

For example, Gil illustrated an innovative chip design that uses vertical stacking to reduce the distance data has to travel, and thus reduces energy consumption. Surprisingly, more effective use of tape, a traditional medium for primary data storage, combined with specialized hard disk drives (HDDs), can yield dramatic savings in carbon dioxide emissions.

Gil and presenters Bill Dally, chief scientist and senior vice president of research at NVIDIA; Ahmad Bahai, CTO of Texas Instruments; and others zeroed in on storage. Gil compared data to a floating iceberg: we can have fast access to the "hot data" of the smaller visible part, while the "cold data," the large underwater mass, represents data that tolerates higher latency. Think about digital photo storage, Gil said. "Honestly, are you really retrieving all of those photographs on a continuous basis?" Storage systems should provide an optimized mix of HDDs for hot data and tape for cold data, based on data access patterns.

Bahai stressed the significant energy savings gained from segmenting standby and full processing. "We need to learn how to do nothing better," he said. Dally spoke of mimicking the way our brain wakes up from a deep sleep: "We can wake [computers] up much faster, so we don't need to keep them running in full speed."

Several workshop presenters spoke of a focus on "sparsity," a matrix in which most of the elements are zero, as a way to improve efficiency in neural networks.
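The payoff from sparsity is that operations on zero entries can be skipped entirely. A minimal, hypothetical sketch (not code from the workshop) of a sparse matrix-vector product that stores and touches only the nonzero entries:

```python
def sparse_matvec(rows, x):
    """Multiply a sparse matrix by the vector x.

    rows: one list per matrix row, holding (column, value) pairs for the
    nonzero entries only. Zeros are never stored or multiplied, so the
    work scales with the number of nonzeros, not the full matrix size.
    """
    return [sum(value * x[col] for col, value in row) for row in rows]

# A 3x3 matrix that is mostly zeros: 3 multiplies instead of 9.
A = [[(0, 2.0)], [(2, 1.5)], [(1, -1.0), (2, 4.0)]]
print(sparse_matvec(A, [1.0, 2.0, 3.0]))  # [2.0, 4.5, 10.0]
```

In a neural network whose weight matrices are, say, 90 percent zeros, this kind of skipping translates directly into fewer arithmetic operations and less energy per inference.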
Or as Dally said, "Never put off till tomorrow, where you could put off forever," explaining that efficiency is not "getting the most information with the fewest bits. It's doing the most with the least energy."

Holistic and multidisciplinary approaches

"We need both efficient algorithms and efficient hardware, and sometimes we need to co-design both the algorithm and the hardware for efficient computing," said Song Han, a panel moderator and assistant professor in the Department of Electrical Engineering and Computer Science (EECS) at MIT.

Some presenters were optimistic about innovations already underway. According to Ericsson's research, as much as 15 percent of global carbon emissions can be reduced through the use of existing solutions, noted Mats Pellbäck Scharp, head of sustainability at Ericsson. For example, GPUs are more efficient than CPUs for AI, and the progression from 3G to 5G networks boosts energy savings.

"5G is the most energy efficient standard ever," said Scharp. "We can build 5G without increasing energy consumption."

Companies such as Google are optimizing energy use at their data centers through improved design, technology, and renewable energy. "Five of our data centers around the globe are operating near or above 90 percent carbon-free energy," said Jeff Dean, Google's senior fellow and senior vice president of Google Research.

Yet, pointing to a possible slowdown in the doubling of transistors on an integrated circuit, known as Moore's Law, Sam Naffziger, AMD senior vice president, corporate fellow, and product technology architect, said, "We need new approaches to meet this compute demand."
Naffziger spoke of addressing performance "overkill." For example, "we're finding in the gaming and machine learning space we can make use of lower-precision math to deliver an image that looks just as good with 16-bit computations as with 32-bit computations, and instead of legacy 32b math to train AI networks, we can use lower-energy 8b or 16b computations."
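The trade-off Naffziger describes can be illustrated with Python's standard `struct` module, which can round-trip a value through the IEEE 754 16-bit (half-precision) format. This is a simplified sketch of the storage effect only; real training hardware uses dedicated fp16/bf16 datapaths, not a software round-trip like this:

```python
import struct

def to_fp16(x: float) -> float:
    # Pack a float into IEEE 754 half precision (16 bits) and unpack it,
    # yielding the value a 16-bit datapath would actually carry.
    return struct.unpack('<e', struct.pack('<e', x))[0]

w = 0.1234567
print(to_fp16(w))
# Half precision keeps only ~3 decimal digits of the original value, but
# it halves the memory footprint and memory traffic of 32-bit floats --
# often accurate enough for rendered images and neural-network weights.
```

The small rounding error is the price paid for moving half as many bits, which is where the energy savings come from.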
Visual representation of the conversation during the workshop session entitled “Wireless, networked, and distributed systems.”
Image: Haley McDevitt
Other presenters singled out compute at the edge as a prime energy hog.

"We also have to change the devices that are put in our customers' hands," said Heidi Hemmer, senior vice president of engineering at Verizon. As we think about how we use energy, it is common to jump to data centers, but it really starts at the device itself and the energy the devices use. Then we can think about home web routers, distributed networks, the data centers, and the hubs. "The devices are actually the least energy-efficient out of that," concluded Hemmer.

Some presenters had different perspectives. Several called for developing dedicated silicon chipsets for efficiency. However, panel moderator Muriel Medard, the Cecil H. Green Professor in EECS, described research at MIT, Boston University, and Maynooth University on the GRAND (Guessing Random Additive Noise Decoding) chip, saying, "rather than having obsolescence of chips as the new codes come in and in different standards, you can use one chip for all codes."

Whatever the chip or new algorithm, Helen Greiner, CEO of Tertill (maker of a weeding robot) and co-founder of iRobot, emphasized that to get products to market, "we have to learn to go away from wanting to get the absolute latest and greatest, the most advanced processor that usually is more expensive." She added, "I like to say robot demos are a dime a dozen, but robot products are very infrequent."

Greiner emphasized that consumers can play a role in pushing for more energy-efficient products, just as drivers began to demand electric cars.

Dean also sees an environmental role for the end user. "We have enabled our cloud customers to select which cloud region they want to run their computation in, and they can decide how important it is that they have a low carbon footprint," he said, also citing other interfaces that might allow consumers to decide which air flights are more efficient or what impact installing a solar panel on their home would have.

However, Scharp said, "Prolonging the life of your smartphone or tablet is really the best climate action you can do if you want to reduce your digital carbon footprint."

Facing increasing demands

Despite their optimism, the presenters acknowledged the world faces increasing compute demand from machine learning, AI, gaming, and, especially, blockchain. Panel moderator Vivienne Sze, associate professor in EECS, noted the conundrum.

"We can do a great job in making computing and communication really efficient. But there is this tendency that once things are very efficient, people use more of it, and this might result in an overall increase in the usage of these technologies, which will then increase our overall carbon footprint," Sze said.

Presenters saw great potential in academic/industry partnerships, particularly from research efforts on the academic side. "By combining these two forces together, you can really amplify the impact," concluded Gousev.

Presenters at the Climate Implications of Computing and Communications workshop also included: Joel Emer, professor of the practice in EECS at MIT; David Perreault, the Joseph F. and Nancy P. Keithley Professor of EECS at MIT; Jesús del Alamo, MIT Donner Professor and professor of electrical engineering in EECS at MIT; Heike Riel, IBM Fellow and head of science and technology at IBM; and Takashi Ando, principal research staff member at IBM Research.

The recorded workshop sessions are available on YouTube.
