More stories

  • Taking the “training wheels” off clean energy

    Renewable power sources have seen unprecedented levels of investment in recent years. But with political uncertainty clouding the future of subsidies for green energy, these technologies must begin to compete with fossil fuels on equal footing, said participants at the 2025 MIT Energy Conference.

    “What these technologies need less is training wheels, and more of a level playing field,” said Brian Deese, an MIT Institute Innovation Fellow, during a conference-opening keynote panel.

    The theme of the two-day conference, which is organized each year by MIT students, was “Breakthrough to deployment: Driving climate innovation to market.” Speakers largely expressed optimism about advancements in green technology, balanced by occasional notes of alarm about a rapidly changing regulatory and political environment.

    Deese defined what he called “the good, the bad, and the ugly” of the current energy landscape. The good: Clean energy investment in the United States hit an all-time high of $272 billion in 2024. The bad: Announcements of future investments have tailed off. And the ugly: Macro conditions are making it more difficult for utilities and private enterprise to build out the clean energy infrastructure needed to meet growing energy demands.

    “We need to build massive amounts of energy capacity in the United States,” Deese said. “And the three things that are the most allergic to building are high uncertainty, high interest rates, and high tariff rates. So that’s kind of ugly. But the question … is how, and in what ways, that underlying commercial momentum can drive through this period of uncertainty.”

    A shifting clean energy landscape

    During a panel on artificial intelligence and growth in electricity demand, speakers said that the technology may serve as a catalyst for green energy breakthroughs, in addition to putting strain on existing infrastructure. “Google is committed to building digital infrastructure responsibly, and part of that means catalyzing the development of clean energy infrastructure that is not only meeting the AI need, but also benefiting the grid as a whole,” said Lucia Tian, head of clean energy and decarbonization technologies at Google.

    Across the two days, speakers emphasized that the cost-per-unit and scalability of clean energy technologies will ultimately determine their fate. But they also acknowledged the impact of public policy, as well as the need for government investment to tackle large-scale issues like grid modernization.

    Vanessa Chan, a former U.S. Department of Energy (DoE) official and current vice dean of innovation and entrepreneurship at the University of Pennsylvania School of Engineering and Applied Sciences, warned of the “knock-on” effects of the move to slash National Institutes of Health (NIH) funding for indirect research costs, for example. “In reality, what you’re doing is undercutting every single academic institution that does research across the nation,” she said.

    During a panel titled “No clean energy transition without transmission,” Maria Robinson, former director of the DoE’s Grid Deployment Office, said that ratepayers alone will likely not be able to fund the grid upgrades needed to meet growing power demand. “The amount of investment we’re going to need over the next couple of years is going to be significant,” she said. “That’s where the federal government is going to have to play a role.”

    David Cohen-Tanugi, a clean energy venture builder at MIT, noted that extreme weather events have changed the climate change conversation in recent years. “There was a narrative 10 years ago that said … if we start talking about resilience and adaptation to climate change, we’re kind of throwing in the towel or giving up,” he said. “I’ve noticed a very big shift in the investor narrative, the startup narrative, and more generally, the public consciousness. There’s a realization that the effects of climate change are already upon us.”

    “Everything on the table”

    The conference featured panels and keynote addresses on a range of emerging clean energy technologies, including hydrogen power, geothermal energy, and nuclear fusion, as well as a session on carbon capture.

    Alex Creely, a chief engineer at Commonwealth Fusion Systems, explained that fusion (the combining of small atoms into larger atoms, which is the same process that fuels stars) is safer and potentially more economical than traditional nuclear power. Fusion facilities, he said, can be powered down instantaneously, and companies like his are developing new, less-expensive magnet technology to contain the extreme heat produced by fusion reactors.

    By the early 2030s, Creely said, his company hopes to be operating 400-megawatt power plants that use only 50 kilograms of fuel per year. “If you can get fusion working, it turns energy into a manufacturing product, not a natural resource,” he said.

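    Those figures are easy to sanity-check with a rough back-of-envelope calculation. The sketch below is ours, not Commonwealth Fusion Systems’ numbers: the deuterium-tritium energy density, the assumption that 400 MW refers to electric output, and the 40 percent conversion efficiency are all stated assumptions.

    ```python
    # Back-of-envelope: how much D-T fuel might a 400 MW fusion plant burn per year?
    # Assumptions (not from the article): ~3.4e14 J released per kg of D-T fuel fused,
    # ~40% thermal-to-electric conversion, year-round operation at full power.
    SPECIFIC_ENERGY_DT = 3.4e14   # J per kg of deuterium-tritium fuel (approximate)
    ELECTRIC_POWER = 400e6        # W, quoted plant output (assumed to be electric)
    SECONDS_PER_YEAR = 3.15e7
    THERMAL_EFFICIENCY = 0.40     # assumed conversion efficiency

    electric_energy = ELECTRIC_POWER * SECONDS_PER_YEAR       # ~1.3e16 J
    thermal_energy = electric_energy / THERMAL_EFFICIENCY     # ~3.2e16 J
    fuel_kg = thermal_energy / SPECIFIC_ENERGY_DT
    print(f"Estimated fuel burned: ~{fuel_kg:.0f} kg per year")  # roughly 90 kg
    ```

    If the 400 MW figure instead refers to fusion power rather than net electric output, the estimate falls to roughly 40 kg, so either way the quoted tens of kilograms per year is the right order of magnitude.
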
    Quinn Woodard Jr., senior director of power generation and surface facilities at geothermal energy supplier Fervo Energy, said his company is making geothermal energy more economical through standardization, innovation, and economies of scale. Traditionally, he said, drilling is the largest cost in producing geothermal power. Fervo has “completely flipped the cost structure” with advances in drilling, Woodard said, and now the company is focused on bringing down its power plant costs.

    “We have to continuously be focused on cost, and achieving that is paramount for the success of the geothermal industry,” he said.

    One common theme across the conference: a number of approaches are making rapid advancements, but experts aren’t sure when — or, in some cases, if — each specific technology will reach a tipping point where it is capable of transforming energy markets.

    “I don’t want to get caught in a place where we often descend in this climate solution situation, where it’s either-or,” said Peter Ellis, global director of nature climate solutions at The Nature Conservancy. “We’re talking about the greatest challenge civilization has ever faced. We need everything on the table.”

    The road ahead

    Several speakers stressed the need for academia, industry, and government to collaborate in pursuit of climate and energy goals. Amy Luers, senior global director of sustainability for Microsoft, compared the challenge to the Apollo spaceflight program, and she said that academic institutions need to focus more on how to scale and spur investments in green energy.

    “The challenge is that academic institutions are not currently set up to be able to learn the how, in driving both bottom-up and top-down shifts over time,” Luers said. “If the world is going to succeed in our road to net zero, the mindset of academia needs to shift. And fortunately, it’s starting to.”

    During a panel called “From lab to grid: Scaling first-of-a-kind energy technologies,” Hannan Happi, CEO of renewable energy company Exowatt, stressed that electricity is ultimately a commodity. “Electrons are all the same,” he said. “The only thing [customers] care about with regards to electrons is that they are available when they need them, and that they’re very cheap.”

    Melissa Zhang, principal at Azimuth Capital Management, noted that energy infrastructure development cycles typically take at least five to 10 years — longer than a U.S. political cycle. However, she warned that green energy technologies are unlikely to receive significant support at the federal level in the near future. “If you’re in something that’s a little too dependent on subsidies … there is reason to be concerned over this administration,” she said.

    World Energy CEO Gene Gebolys, the moderator of the lab-to-grid panel, listed off a number of companies founded at MIT. “They all have one thing in common,” he said. “They all went from somebody’s idea, to a lab, to proof-of-concept, to scale. It’s not like any of this stuff ever ends. It’s an ongoing process.”

  • MIT Maritime Consortium sets sail

    Around 11 billion tons of goods, or about 1.5 tons per person worldwide, are transported by sea each year, representing about 90 percent of global trade by volume. Internationally, the merchant shipping fleet numbers around 110,000 vessels. These ships, and the ports that service them, are significant contributors to the local and global economy — and they’re significant contributors to greenhouse gas emissions.

    A new consortium, formalized in a signing ceremony at MIT last week, aims to address climate-harming emissions in the maritime shipping industry, while supporting efforts for environmentally friendly operation in compliance with the decarbonization goals set by the International Maritime Organization.

    “This is a timely collaboration with key stakeholders from the maritime industry with a very bold and interdisciplinary research agenda that will establish new technologies and evidence-based standards,” says Themis Sapsis, the William Koch Professor of Marine Technology at MIT and the director of MIT’s Center for Ocean Engineering. “It aims to bring the best from MIT in key areas for commercial shipping, such as nuclear technology for commercial settings, autonomous operation and AI methods, improved hydrodynamics and ship design, cybersecurity, and manufacturing.”

    Co-led by Sapsis and Fotini Christia, the Ford International Professor of the Social Sciences; director of the Institute for Data, Systems, and Society (IDSS); and director of the MIT Sociotechnical Systems Research Center, the newly launched MIT Maritime Consortium (MC) brings together MIT collaborators from across campus, including the Center for Ocean Engineering, which is housed in the Department of Mechanical Engineering; IDSS, which is housed in the MIT Schwarzman College of Computing; the departments of Nuclear Science and Engineering and Civil and Environmental Engineering; MIT Sea Grant; and others, with a national and international community of industry experts.

    The Maritime Consortium’s founding members are the American Bureau of Shipping (ABS), Capital Clean Energy Carriers Corp., and HD Korea Shipbuilding and Offshore Engineering. Innovation members are Foresight-Group, Navios Maritime Partners L.P., Singapore Maritime Institute, and Dorian LPG.

    “The challenges the maritime industry faces are challenges that no individual company or organization can address alone,” says Christia. “The solution involves almost every discipline from the School of Engineering, as well as AI and data-driven algorithms, and policy and regulation — it’s a true MIT problem.”

    Researchers will explore new designs for nuclear systems consistent with the techno-economic needs and constraints of commercial shipping, economic and environmental feasibility of alternative fuels, new data-driven algorithms and rigorous evaluation criteria for autonomous platforms in the maritime space, cyber-physical situational awareness and anomaly detection, as well as 3D printing technologies for onboard manufacturing. Collaborators will also advise on research priorities toward evidence-based standards related to MIT presidential priorities around climate, sustainability, and AI.

    MIT has been a leading center of ship research and design for over a century, and is widely recognized for contributions to hydrodynamics, ship structural mechanics and dynamics, propeller design, and overall ship design, as well as for its unique educational program for U.S. Navy Officers, the Naval Construction and Engineering Program.

    Research today is at the forefront of ocean science and engineering, with significant efforts in fluid mechanics and hydrodynamics, acoustics, offshore mechanics, marine robotics and sensors, and ocean sensing and forecasting. The consortium’s academic home at MIT also opens the door to cross-departmental collaboration across the Institute.

    The MC will launch multiple research projects designed to tackle challenges from a variety of angles, all united by cutting-edge data analysis and computation techniques. Collaborators will research new designs and methods that improve efficiency and reduce greenhouse gas emissions, explore feasibility of alternative fuels, and advance data-driven decision-making, manufacturing and materials, hydrodynamic performance, and cybersecurity.

    “This consortium brings a powerful collection of significant companies that, together, has the potential to be a global shipping shaper in itself,” says Christopher J. Wiernicki SM ’85, chair and chief executive officer of ABS. “The strength and uniqueness of this consortium is the members, which are all world-class organizations and real difference makers. The ability to harness the members’ experience and know-how, along with MIT’s technology reach, creates real jet fuel to drive progress,” Wiernicki says. “As well as researching key barriers, bottlenecks, and knowledge gaps in the emissions challenge, the consortium looks to enable development of the novel technology and policy innovation that will be key. Long term, the consortium hopes to provide the gravity we will need to bend the curve.”

  • Puzzling out climate change

    Shreyaa Raghavan’s journey into solving some of the world’s toughest challenges started with a simple love for puzzles. By high school, her knack for problem-solving naturally drew her to computer science. Through her participation in an entrepreneurship and leadership program, she built apps and twice made it to the semifinals of the program’s global competition.

    Her early successes made a computer science career seem like an obvious choice, but Raghavan says a significant competing interest left her torn.

    “Computer science sparks that puzzle-, problem-solving part of my brain,” says Raghavan ’24, an Accenture Fellow and a PhD candidate in MIT’s Institute for Data, Systems, and Society. “But while I always felt like building mobile apps was a fun little hobby, it didn’t feel like I was directly solving societal challenges.”

    Her perspective shifted when, as an MIT undergraduate, Raghavan participated in an Undergraduate Research Opportunity in the Photovoltaic Research Laboratory, now known as the Accelerated Materials Laboratory for Sustainability. There, she discovered how computational techniques like machine learning could optimize materials for solar panels — a direct application of her skills toward mitigating climate change.

    “This lab had a very diverse group of people, some from a computer science background, some from a chemistry background, some who were hardcore engineers. All of them were communicating effectively and working toward one unified goal — building better renewable energy systems,” Raghavan says. “It opened my eyes to the fact that I could use very technical tools that I enjoy building and find fulfillment in that by helping solve major climate challenges.”

    With her sights set on applying machine learning and optimization to energy and climate, Raghavan joined Cathy Wu’s lab when she started her PhD in 2023. The lab focuses on building more sustainable transportation systems, a field that resonated with Raghavan due to its universal impact and its outsized role in climate change — transportation accounts for roughly 30 percent of greenhouse gas emissions.

    “If we were to throw all of the intelligent systems we are exploring into the transportation networks, by how much could we reduce emissions?” she asks, summarizing a core question of her research.

    Wu, an associate professor in the Department of Civil and Environmental Engineering, stresses the value of Raghavan’s work.

    “Transportation is a critical element of both the economy and climate change, so potential changes to transportation must be carefully studied,” Wu says. “Shreyaa’s research into smart congestion management is important because it takes a data-driven approach to add rigor to the broader research supporting sustainability.”

    Raghavan’s contributions have been recognized with the Accenture Fellowship, a cornerstone of the MIT-Accenture Convergence Initiative for Industry and Technology. As an Accenture Fellow, she is exploring the potential impact of technologies for avoiding stop-and-go traffic and its emissions, using systems such as networked autonomous vehicles and digital speed limits that vary according to traffic conditions — solutions that could advance decarbonization in the transportation sector at relatively low cost and in the near term.

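    To make the idea of traffic-responsive digital speed limits concrete, here is a deliberately simplified sketch. It is our illustration only, not Raghavan’s research code, and the density thresholds and posted speeds are invented.

    ```python
    # Toy variable speed limit: post a lower limit as measured traffic density rises,
    # which in practice can damp stop-and-go waves and the emissions they cause.
    def advisory_speed_limit(vehicles_per_km: float) -> int:
        """Return a speed limit in km/h for a freeway segment (illustrative thresholds)."""
        if vehicles_per_km < 20:      # free flow
            return 100
        elif vehicles_per_km < 35:    # dense but still moving
            return 80
        elif vehicles_per_km < 50:    # congestion forming
            return 60
        return 40                     # heavy congestion: slow traffic early to avoid shockwaves

    for density in (10, 30, 45, 60):
        print(density, "veh/km ->", advisory_speed_limit(density), "km/h")
    ```
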
    Raghavan says she appreciates the Accenture Fellowship not only for the support it provides, but also because it demonstrates industry involvement in sustainable transportation solutions.

    “It’s important for the field of transportation, and also energy and climate as a whole, to synergize with all of the different stakeholders,” she says. “I think it’s important for industry to be involved in this issue of incorporating smarter transportation systems to decarbonize transportation.”

    Raghavan has also received a fellowship supporting her research from the U.S. Department of Transportation.

    “I think it’s really exciting that there’s interest from the policy side with the Department of Transportation and from the industry side with Accenture,” she says.

    Raghavan believes that addressing climate change requires collaboration across disciplines. “I think with climate change, no one industry or field is going to solve it on its own. It’s really got to be each field stepping up and trying to make a difference,” she says. “I don’t think there’s any silver-bullet solution to this problem. It’s going to take many different solutions from different people, different angles, different disciplines.”

    With that in mind, Raghavan has been very active in the MIT Energy and Climate Club since joining about three years ago, which, she says, “was a really cool way to meet lots of people who were working toward the same goal, the same climate goals, the same passions, but from completely different angles.”

    This year, Raghavan is on the community and education team, which works to build the community at MIT that is working on climate and energy issues. As part of that work, Raghavan is launching a mentorship program for undergraduates, pairing them with graduate students who help the undergrads develop ideas about how they can work on climate using their unique expertise.

    “I didn’t foresee myself using my computer science skills in energy and climate,” Raghavan says, “so I really want to give other students a clear pathway, or a clear sense of how they can get involved.”

    Raghavan has embraced her area of study even in terms of where she likes to think.

    “I love working on trains, on buses, on airplanes,” she says. “It’s really fun to be in transit and working on transportation problems.”

    Anticipating a trip to New York to visit a cousin, she holds no dread for the long train trip.

    “I know I’m going to do some of my best work during those hours,” she says. “Four hours there. Four hours back.”

  • Streamlining data collection for improved salmon population management

    Sara Beery came to MIT as an assistant professor in MIT’s Department of Electrical Engineering and Computer Science (EECS) eager to focus on ecological challenges. She has fashioned her research career around the opportunity to apply her expertise in computer vision, machine learning, and data science to tackle real-world issues in conservation and sustainability. Beery was drawn to the Institute’s commitment to “computing for the planet,” and set out to bring her methods to global-scale environmental and biodiversity monitoring.

    In the Pacific Northwest, salmon have a disproportionate impact on the health of their ecosystems, and their complex reproductive needs have attracted Beery’s attention. Each year, millions of salmon embark on a migration to spawn. Their journey begins in freshwater stream beds where the eggs hatch. Young salmon fry (newly hatched salmon) make their way to the ocean, where they spend several years maturing to adulthood. As adults, the salmon return to the streams where they were born in order to spawn, ensuring the continuation of their species by depositing their eggs in the gravel of the stream beds. Both male and female salmon die shortly after supplying the river habitat with the next generation of salmon.

    Throughout their migration, salmon support a wide range of organisms in the ecosystems they pass through. For example, salmon bring nutrients like carbon and nitrogen from the ocean upriver, enhancing their availability to those ecosystems. In addition, salmon are key to many predator-prey relationships: They serve as a food source for various predators, such as bears, wolves, and birds, while helping to control other populations, like insects, through predation. After they die from spawning, the decomposing salmon carcasses also replenish valuable nutrients to the surrounding ecosystem. The migration of salmon not only sustains their own species but plays a critical role in the overall health of the rivers and oceans they inhabit.

    At the same time, salmon populations play an important role both economically and culturally in the region. Commercial and recreational salmon fisheries contribute significantly to the local economy. And for many Indigenous peoples in the Pacific Northwest, salmon hold notable cultural value, as they have been central to their diets, traditions, and ceremonies.

    Monitoring salmon migration

    Increased human activity, including overfishing and hydropower development, together with habitat loss and climate change, has had a significant impact on salmon populations in the region. As a result, effective monitoring and management of salmon fisheries is important to ensure balance among competing ecological, cultural, and human interests. Accurately counting salmon during their seasonal migration to their natal river to spawn is essential in order to track threatened populations, assess the success of recovery strategies, guide fishing season regulations, and support the management of both commercial and recreational fisheries. Precise population data help decision-makers employ the best strategies to safeguard the health of the ecosystem while accommodating human needs. Monitoring salmon migration is a labor-intensive and inefficient undertaking.

    Beery is currently leading a research project that aims to streamline salmon monitoring using cutting-edge computer vision methods.

    This project fits within Beery’s broader research interest, which focuses on the interdisciplinary space between artificial intelligence, the natural world, and sustainability. Its relevance to fisheries management made it a good fit for funding from MIT’s Abdul Latif Jameel Water and Food Systems Lab (J-WAFS). Beery’s 2023 J-WAFS seed grant was the first research funding she was awarded since joining the MIT faculty.

    Historically, monitoring efforts relied on humans to manually count salmon from riverbanks using eyesight. In the past few decades, underwater sonar systems have been implemented to aid in counting the salmon. These sonar systems are essentially underwater video cameras, but they differ in that they use acoustics instead of light sensors to capture the presence of a fish. Use of this method requires people to set up a tent alongside the river to count salmon based on the output of a sonar camera that is hooked up to a laptop. While this system is an improvement over the original method of monitoring salmon by eyesight, it still relies significantly on human effort and is an arduous and time-consuming process.

    Automating salmon monitoring is necessary for better management of salmon fisheries. “We need these technological tools,” says Beery. “We can’t keep up with the demand of monitoring and understanding and studying these really complex ecosystems that we work in without some form of automation.”

    In order to automate counting of migrating salmon populations in the Pacific Northwest, the project team, including Justin Kay, a PhD student in EECS, has been collecting data in the form of videos from sonar cameras at different rivers. The team annotates a subset of the data to train the computer vision system to autonomously detect and count the fish as they migrate. Kay describes the process of how the model counts each migrating fish: “The computer vision algorithm is designed to locate a fish in the frame, draw a box around it, and then track it over time. If a fish is detected on one side of the screen and leaves on the other side of the screen, then we count it as moving upstream.” On rivers where the team has created training data for the system, it has produced strong results, with only 3 to 5 percent counting error. This is well below the target that the team and partnering stakeholders set of no more than a 10 percent counting error.

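    A minimal sketch of the line-crossing counting logic Kay describes might look like the following. This is our illustration, not the team’s actual code, and it assumes an upstream tracker has already assigned a stable ID and a horizontal position to each detected fish in every frame.

    ```python
    # Count tracked fish as "upstream" when a track enters on one side of the frame
    # and exits on the other. Input: per-frame detections of (track_id, x_center),
    # with x normalized to [0, 1] across the sonar image width.
    def count_upstream(frames, enter_side=0.1, exit_side=0.9):
        first_x, last_x = {}, {}
        for detections in frames:                 # frames: iterable of lists of (track_id, x)
            for track_id, x in detections:
                first_x.setdefault(track_id, x)   # where each track first appeared
                last_x[track_id] = x              # where it was seen most recently
        return sum(
            1
            for tid in first_x
            if first_x[tid] <= enter_side and last_x[tid] >= exit_side
        )

    # Example: one fish crosses left-to-right, one lingers mid-frame.
    frames = [[(1, 0.05), (2, 0.5)], [(1, 0.4), (2, 0.55)], [(1, 0.95), (2, 0.5)]]
    print(count_upstream(frames))  # -> 1
    ```

    The reported 3 to 5 percent error would then be measured by comparing such automated counts against human-annotated counts on held-out video.
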
    Testing and deployment: Balancing human effort and use of automation

    The researchers’ technology is being deployed to monitor the migration of salmon on the newly restored Klamath River. Four dams on the river were recently demolished, making it the largest dam removal project in U.S. history. The dams came down after a more than 20-year-long campaign to remove them, which was led by Klamath tribes, in collaboration with scientists, environmental organizations, and commercial fishermen. After the removal of the dams, 240 miles of the river now flow freely and nearly 800 square miles of habitat are accessible to salmon.

    Beery notes the almost immediate regeneration of salmon populations in the Klamath River: “I think it was within eight days of the dam coming down, they started seeing salmon actually migrate upriver beyond the dam.” In a collaboration with California Trout, the team is currently processing new data to adapt and create a customized model that can then be deployed to help count the newly migrating salmon.

    One challenge with the system revolves around training the model to accurately count the fish in unfamiliar environments with variations such as riverbed features, water clarity, and lighting conditions. These factors can significantly alter how the fish appear on the output of a sonar camera and confuse the computer model. When deployed in new rivers where no data have been collected before, like the Klamath, the performance of the system degrades and the margin of error increases substantially to 15-20 percent.

    The researchers constructed an automatic adaptation algorithm within the system to overcome this challenge and create a scalable system that can be deployed to any site without human intervention. This self-initializing technology works to automatically calibrate to the new conditions and environment to accurately count the migrating fish. In testing, the automatic adaptation algorithm was able to reduce the counting error down to the 10 to 15 percent range. The improvement in counting error with the self-initializing function means that the technology is closer to being deployable to new locations without much additional human effort.

    Enabling real-time management with the “Fishbox”

    Another challenge faced by the research team was the development of an efficient data infrastructure. In order to run the computer vision system, the video produced by sonar cameras must be delivered via the cloud or by manually mailing hard drives from a river site to the lab. These methods have notable drawbacks: a cloud-based approach is limited due to lack of internet connectivity in remote river site locations, and shipping the data introduces problems of delay.

    Instead of relying on these methods, the team has implemented a power-efficient computer, coined the “Fishbox,” that can be used in the field to perform the processing. The Fishbox consists of a small, lightweight computer with optimized software that fishery managers can plug into their existing laptops and sonar cameras. The system is then capable of running salmon counting models directly at the sonar sites without the need for internet connectivity. This allows managers to make hour-by-hour decisions, supporting more responsive, real-time management of salmon populations.

    Community development

    The team is also working to bring a community together around monitoring for salmon fisheries management in the Pacific Northwest. “It’s just pretty exciting to have stakeholders who are enthusiastic about getting access to [our technology] as we get it to work and having a tighter integration and collaboration with them,” says Beery. “I think particularly when you’re working on food and water systems, you need direct collaboration to help facilitate impact, because you’re ensuring that what you develop is actually serving the needs of the people and organizations that you are helping to support.”

    This past June, Beery’s lab organized a workshop in Seattle that convened nongovernmental organizations, tribes, and state and federal departments of fish and wildlife to discuss the use of automated sonar systems to monitor and manage salmon populations.

    Kay notes that the workshop was an “awesome opportunity to have everybody sharing different ways that they’re using sonar and thinking about how the automated methods that we’re building could fit into that workflow.” The discussion continues now via a shared Slack channel created by the team, with over 50 participants. Convening this group is a significant achievement, as many of these organizations would not otherwise have had an opportunity to come together and collaborate.

    Looking forward

    As the team continues to tune the computer vision system, refine their technology, and engage with diverse stakeholders — from Indigenous communities to fishery managers — the project is poised to make significant improvements to the efficiency and accuracy of salmon monitoring and management in the region.

    And as Beery advances the work of her MIT group, the J-WAFS seed grant is helping to keep challenges such as fisheries management in her sights. “The fact that the J-WAFS seed grant existed here at MIT enabled us to continue to work on this project when we moved here,” comments Beery, adding “it also expanded the scope of the project and allowed us to maintain active collaboration on what I think is a really important and impactful project.”

    As J-WAFS marks its 10th anniversary this year, the program aims to continue supporting and encouraging MIT faculty to pursue innovative projects that aim to advance knowledge and create practical solutions with real-world impacts on global water and food system challenges.

  • The multifaceted challenge of powering AI

    Artificial intelligence has become vital in business and financial dealings, medical care, technology development, research, and much more. Without realizing it, consumers rely on AI when they stream a video, do online banking, or perform an online search. Behind these capabilities are more than 10,000 data centers globally, each one a huge warehouse containing thousands of computer servers and other infrastructure for storing, managing, and processing data. There are now over 5,000 data centers in the United States, and new ones are being built every day — in the U.S. and worldwide. Often dozens are clustered together right near where people live, attracted by policies that provide tax breaks and other incentives, and by what looks like abundant electricity.

    And data centers do consume huge amounts of electricity. U.S. data centers consumed more than 4 percent of the country’s total electricity in 2023, and by 2030 that fraction could rise to 9 percent, according to the Electric Power Research Institute. A single large data center can consume as much electricity as 50,000 homes.

    The sudden need for so many data centers presents a massive challenge to the technology and energy industries, government policymakers, and everyday consumers. Research scientists and faculty members at the MIT Energy Initiative (MITEI) are exploring multiple facets of this problem — from sourcing power to grid improvement to analytical tools that increase efficiency, and more. Data centers have quickly become the energy issue of our day.

    Unexpected demand brings unexpected solutions

    Several companies that use data centers to provide cloud computing and data management services are announcing some surprising steps to deliver all that electricity. Proposals include building their own small nuclear plants near their data centers and even restarting one of the undamaged nuclear reactors at Three Mile Island, which has been shuttered since 2019. (A different reactor at that plant partially melted down in 1979, causing the nation’s worst nuclear power accident.) Already the need to power AI is causing delays in the planned shutdown of some coal-fired power plants and raising prices for residential consumers. Meeting the needs of data centers is not only stressing power grids, but also setting back the transition to clean energy needed to stop climate change.

    There are many aspects to the data center problem from a power perspective. Here are some that MIT researchers are focusing on, and why they’re important.

    An unprecedented surge in the demand for electricity

    “In the past, computing was not a significant user of electricity,” says William H. Green, director of MITEI and the Hoyt C. Hottel Professor in the MIT Department of Chemical Engineering. “Electricity was used for running industrial processes and powering household devices such as air conditioners and lights, and more recently for powering heat pumps and charging electric cars. But now all of a sudden, electricity used for computing in general, and by data centers in particular, is becoming a gigantic new demand that no one anticipated.”

    Why the lack of foresight? Usually, demand for electric power increases by roughly half a percent per year, and utilities bring in new power generators and make other investments as needed to meet the expected new demand. But the data centers now coming online are creating unprecedented leaps in demand that operators didn’t see coming. In addition, the new demand is constant. It’s critical that a data center provides its services all day, every day. There can be no interruptions in processing large datasets, accessing stored data, and running the cooling equipment needed to keep all the packed-together computers churning away without overheating.

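    A quick back-of-envelope calculation, using only the figures quoted above plus a rough 10,800 kWh of annual electricity use per U.S. household (our assumption), shows why that leap is so far outside utilities’ normal planning experience.

    ```python
    # If data centers go from ~4% of U.S. electricity in 2023 to ~9% in 2030 (EPRI),
    # while total demand grows at the historical ~0.5% per year, the implied growth
    # rate of data-center consumption alone is roughly 13% per year.
    share_2023, share_2030 = 0.04, 0.09
    total_growth = 1.005 ** 7                        # total demand after 7 years
    dc_growth_ratio = (share_2030 * total_growth) / share_2023
    annual_dc_growth = dc_growth_ratio ** (1 / 7) - 1
    print(f"Implied data-center demand growth: ~{annual_dc_growth:.1%} per year")  # ~12.8%

    # And the "50,000 homes" comparison: at ~10,800 kWh per home per year (assumed),
    # a single large data center draws on the order of 60 MW around the clock.
    homes, kwh_per_home = 50_000, 10_800
    avg_mw = homes * kwh_per_home / 8760 / 1000      # kWh per year -> average kW -> MW
    print(f"~{avg_mw:.0f} MW continuous load")       # ~62 MW
    ```
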
    Moreover, even if enough electricity is generated, getting it to where it’s needed may be a problem, explains Deepjyoti Deka, a MITEI research scientist. “A grid is a network-wide operation, and the grid operator may have sufficient generation at another location or even elsewhere in the country, but the wires may not have sufficient capacity to carry the electricity to where it’s wanted.” So transmission capacity must be expanded — and, says Deka, that’s a slow process.

    Then there’s the “interconnection queue.” Sometimes, adding either a new user (a “load”) or a new generator to an existing grid can cause instabilities or other problems for everyone else already on the grid. In that situation, bringing a new data center online may be delayed. Enough delays can result in new loads or generators having to stand in line and wait for their turn. Right now, much of the interconnection queue is already filled up with new solar and wind projects. The delay is now about five years. Meeting the demand from newly installed data centers while ensuring that the quality of service elsewhere is not hampered is a problem that needs to be addressed.

    Finding clean electricity sources

    To further complicate the challenge, many companies — including so-called “hyperscalers” such as Google, Microsoft, and Amazon — have made public commitments to having net-zero carbon emissions within the next 10 years. Many have been making strides toward achieving their clean-energy goals by buying “power purchase agreements.” They sign a contract to buy electricity from, say, a solar or wind facility, sometimes providing funding for the facility to be built. But that approach to accessing clean energy has its limits when faced with the extreme electricity demand of a data center.

    Meanwhile, soaring power consumption is delaying coal plant closures in many states. There are simply not enough sources of renewable energy to serve both the hyperscalers and the existing users, including individual consumers. As a result, conventional plants fired by fossil fuels such as coal are needed more than ever.

    As the hyperscalers look for sources of clean energy for their data centers, one option could be to build their own wind and solar installations. But such facilities would generate electricity only intermittently. Given the need for uninterrupted power, the data center would have to maintain energy storage units, which are expensive. They could instead rely on natural gas or diesel generators for backup power — but those devices would need to be coupled with equipment to capture the carbon emissions, plus a nearby site for permanently disposing of the captured carbon.

    Because of such complications, several of the hyperscalers are turning to nuclear power. As Green notes, “Nuclear energy is well matched to the demand of data centers, because nuclear plants can generate lots of power reliably, without interruption.”

    In a much-publicized move in September, Microsoft signed a deal to buy power for 20 years after Constellation Energy reopens one of the undamaged reactors at its now-shuttered nuclear plant at Three Mile Island. If approved by regulators, Constellation will bring that reactor online by 2028, with Microsoft buying all of the power it produces. Amazon also reached a deal to purchase power produced by another nuclear plant threatened with closure due to financial troubles. And in early December, Meta released a request for proposals to identify nuclear energy developers to help the company meet their AI needs and their sustainability goals.

    Other nuclear news focuses on small modular nuclear reactors (SMRs), factory-built, modular power plants that could be installed near data centers, potentially without the cost overruns and delays often experienced in building large plants. Google recently ordered a fleet of SMRs to generate the power needed by its data centers. The first one will be completed by 2030 and the remainder by 2035.

    Some hyperscalers are betting on new technologies. For example, Google is pursuing next-generation geothermal projects, and Microsoft has signed a contract to purchase electricity from a startup’s fusion power plant beginning in 2028 — even though the fusion technology hasn’t yet been demonstrated.

    Reducing electricity demand

    Other approaches to providing sufficient clean electricity focus on making the data center and the operations it houses more energy efficient so as to perform the same computing tasks using less power. Using faster computer chips and optimizing algorithms that use less energy are already helping to reduce the load, and also the heat generated.

    Another idea being tried involves shifting computing tasks to times and places where carbon-free energy is available on the grid. Deka explains: “If a task doesn’t have to be completed immediately, but rather by a certain deadline, can it be delayed or moved to a data center elsewhere in the U.S. or overseas where electricity is more abundant, cheaper, and/or cleaner? This approach is known as ‘carbon-aware computing.’” We’re not yet sure whether every task can be moved or delayed easily, says Deka. “If you think of a generative AI-based task, can it easily be separated into small tasks that can be taken to different parts of the country, solved using clean energy, and then be brought back together? What is the cost of doing this kind of division of tasks?”

    That approach is, of course, limited by the problem of the interconnection queue. It’s difficult to access clean energy in another region or state. But efforts are under way to ease the regulatory framework to make sure that critical interconnections can be developed more quickly and easily.

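    A minimal sketch of that carbon-aware computing idea, with made-up forecast numbers (ours, not Deka’s): given a flexible job and an hourly carbon-intensity forecast for each candidate site, start the job wherever and whenever its emissions would be lowest before the deadline.

    ```python
    # Pick the (site, start hour) that minimizes average grid carbon intensity
    # over a job's duration, subject to finishing before a deadline.
    def schedule(forecasts, duration_h, deadline_h):
        """forecasts: {site: [gCO2/kWh for each hour]}; returns (site, start_hour)."""
        best = None
        for site, intensity in forecasts.items():
            for start in range(0, deadline_h - duration_h + 1):
                window = intensity[start:start + duration_h]
                avg = sum(window) / duration_h
                if best is None or avg < best[0]:
                    best = (avg, site, start)
        return best[1], best[2]

    # Hypothetical 6-hour forecasts (gCO2/kWh) for two regions, a 2-hour job, 6-hour deadline.
    forecasts = {"midwest": [450, 430, 400, 380, 390, 410],
                 "northwest": [200, 210, 150, 140, 160, 180]}
    print(schedule(forecasts, duration_h=2, deadline_h=6))  # -> ('northwest', 2)
    ```

    In practice the hard parts are exactly the ones Deka raises: whether a task can be split and moved at all, the cost of moving the data, and whether transmission and interconnection constraints leave any cleaner capacity to move it to.
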
    What about the neighbors?

    A major concern running through all the options for powering data centers is the impact on residential energy consumers. When a data center comes into a neighborhood, there are not only aesthetic concerns but also more practical worries. Will the local electricity service become less reliable? Where will the new transmission lines be located? And who will pay for the new generators, upgrades to existing equipment, and so on? When new manufacturing facilities or industrial plants go into a neighborhood, the downsides are generally offset by the availability of new jobs. Not so with a data center, which may require just a couple dozen employees.

    There are standard rules about how maintenance and upgrade costs are shared and allocated. But the situation is totally changed by the presence of a new data center. As a result, utilities now need to rethink their traditional rate structures so as not to place an undue burden on residents to pay for the infrastructure changes needed to host data centers.

    MIT’s contributions

    At MIT, researchers are thinking about and exploring a range of options for tackling the problem of providing clean power to data centers. For example, they are investigating architectural designs that will use natural ventilation to facilitate cooling, equipment layouts that will permit better airflow and power distribution, and highly energy-efficient air conditioning systems based on novel materials. They are creating new analytical tools for evaluating the impact of data center deployments on the U.S. power system and for finding the most efficient ways to provide the facilities with clean energy. Other work looks at how to match the output of small nuclear reactors to the needs of a data center, and how to speed up the construction of such reactors.

    MIT teams also focus on determining the best sources of backup power and long-duration storage, and on developing decision support systems for locating proposed new data centers, taking into account the availability of electric power and water and also regulatory considerations, and even the potential for using what can be significant waste heat, for example, for heating nearby buildings. Technology development projects include designing faster, more efficient computer chips and more energy-efficient computing algorithms.

    In addition to providing leadership and funding for many research projects, MITEI is acting as a convenor, bringing together companies and stakeholders to address this issue. At MITEI’s 2024 Annual Research Conference, a panel of representatives from two hyperscalers and two companies that design and construct data centers together discussed their challenges, possible solutions, and where MIT research could be most beneficial.

    As data centers continue to be built, and computing continues to create an unprecedented increase in demand for electricity, Green says, scientists and engineers are in a race to provide the ideas, innovations, and technologies that can meet this need, and at the same time continue to advance the transition to a decarbonized energy system.

  • Explained: Generative AI’s environmental impact

    In a two-part series, MIT News explores the environmental implications of generative AI. In this article, we look at why this technology is so resource-intensive. A second piece will investigate what experts are doing to reduce genAI’s carbon footprint and other impacts.

    The excitement surrounding potential benefits of generative AI, from improving worker productivity to advancing scientific research, is hard to ignore. While the explosive growth of this new technology has enabled rapid deployment of powerful models in many industries, the environmental consequences of this generative AI “gold rush” remain difficult to pin down, let alone mitigate.

    The computational power required to train generative AI models that often have billions of parameters, such as OpenAI’s GPT-4, can demand a staggering amount of electricity, which leads to increased carbon dioxide emissions and pressures on the electric grid.

    Furthermore, deploying these models in real-world applications, enabling millions to use generative AI in their daily lives, and then fine-tuning the models to improve their performance draws large amounts of energy long after a model has been developed.

    Beyond electricity demands, a great deal of water is needed to cool the hardware used for training, deploying, and fine-tuning generative AI models, which can strain municipal water supplies and disrupt local ecosystems. The increasing number of generative AI applications has also spurred demand for high-performance computing hardware, adding indirect environmental impacts from its manufacture and transport.

    “When we think about the environmental impact of generative AI, it is not just the electricity you consume when you plug the computer in. There are much broader consequences that go out to a system level and persist based on actions that we take,” says Elsa A. Olivetti, professor in the Department of Materials Science and Engineering and the lead of the Decarbonization Mission of MIT’s new Climate Project.

    Olivetti is senior author of a 2024 paper, “The Climate and Sustainability Implications of Generative AI,” co-authored by MIT colleagues in response to an Institute-wide call for papers that explore the transformative potential of generative AI, in both positive and negative directions for society.

    Demanding data centers

    The electricity demands of data centers are one major factor contributing to the environmental impacts of generative AI, since data centers are used to train and run the deep learning models behind popular tools like ChatGPT and DALL-E.

    A data center is a temperature-controlled building that houses computing infrastructure, such as servers, data storage drives, and network equipment. For instance, Amazon has more than 100 data centers worldwide, each of which has about 50,000 servers that the company uses to support cloud computing services.

    While data centers have been around since the 1940s (the first was built at the University of Pennsylvania in 1945 to support the first general-purpose digital computer, the ENIAC), the rise of generative AI has dramatically increased the pace of data center construction.

    “What is different about generative AI is the power density it requires. Fundamentally, it is just computing, but a generative AI training cluster might consume seven or eight times more energy than a typical computing workload,” says Noman Bashir, lead author of the impact paper, who is a Computing and Climate Impact Fellow at the MIT Climate and Sustainability Consortium (MCSC) and a postdoc in the Computer Science and Artificial Intelligence Laboratory (CSAIL).

    Scientists have estimated that the power requirements of data centers in North America increased from 2,688 megawatts at the end of 2022 to 5,341 megawatts at the end of 2023, partly driven by the demands of generative AI. Globally, the electricity consumption of data centers rose to 460 terawatt-hours in 2022. This would have made data centers the 11th largest electricity consumer in the world, between the nations of Saudi Arabia (371 terawatt-hours) and France (463 terawatt-hours), according to the Organization for Economic Co-operation and Development.

    By 2026, the electricity consumption of data centers is expected to approach 1,050 terawatt-hours (which would bump data centers up to fifth place on the global list, between Japan and Russia).

    While not all data center computation involves generative AI, the technology has been a major driver of increasing energy demands.

    “The demand for new data centers cannot be met in a sustainable way. The pace at which companies are building new data centers means the bulk of the electricity to power them must come from fossil fuel-based power plants,” says Bashir.

    The power needed to train and deploy a model like OpenAI’s GPT-3 is difficult to ascertain. In a 2021 research paper, scientists from Google and the University of California at Berkeley estimated the training process alone consumed 1,287 megawatt-hours of electricity (enough to power about 120 average U.S. homes for a year), generating about 552 tons of carbon dioxide.

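    Those training numbers are straightforward to sanity-check. The short calculation below is ours; the roughly 10,600 kWh of annual electricity use per average U.S. household is an assumed typical figure, not one from the article.

    ```python
    # Rough check of the GPT-3 training estimate quoted above.
    training_mwh = 1287            # reported training consumption, MWh
    co2_tons = 552                 # reported emissions, metric tons CO2
    kwh_per_home_year = 10_600     # assumed average U.S. household consumption

    homes_powered = training_mwh * 1000 / kwh_per_home_year
    print(f"~{homes_powered:.0f} homes for a year")       # ~121, matching the ~120 cited

    grams_per_kwh = co2_tons * 1e6 / (training_mwh * 1000)
    print(f"~{grams_per_kwh:.0f} g CO2 per kWh")          # ~430 g/kWh, a fossil-heavy grid mix
    ```
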
    While all machine-learning models must be trained, one issue unique to generative AI is the rapid fluctuations in energy use that occur over different phases of the training process, Bashir explains. Power grid operators must have a way to absorb those fluctuations to protect the grid, and they usually employ diesel-based generators for that task.

    Increasing impacts from inference

    Once a generative AI model is trained, the energy demands don’t disappear.

    Each time a model is used, perhaps by an individual asking ChatGPT to summarize an email, the computing hardware that performs those operations consumes energy. Researchers have estimated that a ChatGPT query consumes about five times more electricity than a simple web search.

    “But an everyday user doesn’t think too much about that,” says Bashir. “The ease-of-use of generative AI interfaces and the lack of information about the environmental impacts of my actions means that, as a user, I don’t have much incentive to cut back on my use of generative AI.”

    With traditional AI, the energy usage is split fairly evenly between data processing, model training, and inference, which is the process of using a trained model to make predictions on new data. However, Bashir expects the electricity demands of generative AI inference to eventually dominate, since these models are becoming ubiquitous in so many applications, and the electricity needed for inference will increase as future versions of the models become larger and more complex.

    Plus, generative AI models have an especially short shelf-life, driven by rising demand for new AI applications. Companies release new models every few weeks, so the energy used to train prior versions goes to waste, Bashir adds. New models often consume more energy for training, since they usually have more parameters than their predecessors.

    While electricity demands of data centers may be getting the most attention in research literature, the amount of water consumed by these facilities has environmental impacts, as well.

    Chilled water is used to cool a data center by absorbing heat from computing equipment. It has been estimated that, for each kilowatt-hour of energy a data center consumes, it would need two liters of water for cooling, says Bashir.

    “Just because this is called ‘cloud computing’ doesn’t mean the hardware lives in the cloud. Data centers are present in our physical world, and because of their water usage they have direct and indirect implications for biodiversity,” he says.

    The computing hardware inside data centers brings its own, less direct environmental impacts.

    While it is difficult to estimate how much power is needed to manufacture a GPU, a type of powerful processor that can handle intensive generative AI workloads, it would be more than what is needed to produce a simpler CPU because the fabrication process is more complex. A GPU’s carbon footprint is compounded by the emissions related to material and product transport.

    There are also environmental implications of obtaining the raw materials used to fabricate GPUs, which can involve dirty mining procedures and the use of toxic chemicals for processing.

    Market research firm TechInsights estimates that the three major producers (NVIDIA, AMD, and Intel) shipped 3.85 million GPUs to data centers in 2023, up from about 2.67 million in 2022. That number is expected to have increased by an even greater percentage in 2024.

    The industry is on an unsustainable path, but there are ways to encourage responsible development of generative AI that supports environmental objectives, Bashir says.

    He, Olivetti, and their MIT colleagues argue that this will require a comprehensive consideration of all the environmental and societal costs of generative AI, as well as a detailed assessment of the value in its perceived benefits.

    “We need a more contextual way of systematically and comprehensively understanding the implications of new developments in this space. Due to the speed at which there have been improvements, we haven’t had a chance to catch up with our abilities to measure and understand the tradeoffs,” Olivetti says.

  • Q&A: The climate impact of generative AI

    Vijay Gadepally, a senior staff member at MIT Lincoln Laboratory, leads a number of projects at the Lincoln Laboratory Supercomputing Center (LLSC) to make computing platforms, and the artificial intelligence systems that run on them, more efficient. Here, Gadepally discusses the increasing use of generative AI in everyday tools, its hidden environmental impact, and some of the ways that Lincoln Laboratory and the greater AI community can reduce emissions for a greener future.

    Q: What trends are you seeing in terms of how generative AI is being used in computing?

    A: Generative AI uses machine learning (ML) to create new content, like images and text, based on data that is inputted into the ML system. At the LLSC we design and build some of the largest academic computing platforms in the world, and over the past few years we’ve seen an explosion in the number of projects that need access to high-performance computing for generative AI. We’re also seeing how generative AI is changing all sorts of fields and domains — for example, ChatGPT is already influencing the classroom and the workplace faster than regulations seem able to keep up.

    We can imagine all sorts of uses for generative AI within the next decade or so, like powering highly capable virtual assistants, developing new drugs and materials, and even improving our understanding of basic science. We can’t predict everything that generative AI will be used for, but I can certainly say that with more and more complex algorithms, their compute, energy, and climate impact will continue to grow very quickly.

    Q: What strategies is the LLSC using to mitigate this climate impact?

    A: We’re always looking for ways to make computing more efficient, as doing so helps our data center make the most of its resources and allows our scientific colleagues to push their fields forward in as efficient a manner as possible.

    As one example, we’ve been reducing the amount of power our hardware consumes by making simple changes, similar to dimming or turning off lights when you leave a room. In one experiment, we reduced the energy consumption of a group of graphics processing units by 20 percent to 30 percent, with minimal impact on their performance, by enforcing a power cap. This technique also lowered the hardware operating temperatures, making the GPUs easier to cool and longer lasting.

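    Power caps of the kind Gadepally describes are typically applied through NVIDIA’s management tooling. The snippet below is a minimal sketch of ours, not the LLSC’s tooling: the 250 W cap is an arbitrary example, the supported range varies by GPU, and changing the limit requires administrator privileges.

    ```python
    # Query and cap GPU power limits via nvidia-smi (must be installed and on PATH).
    import subprocess

    def show_power_limits():
        # Report the current and default power limits for each GPU.
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=index,power.limit,power.default_limit",
             "--format=csv"],
            check=True, capture_output=True, text=True,
        )
        print(out.stdout)

    def set_power_cap(gpu_index: int, watts: int):
        # Enforce a power cap on one GPU, e.g. ~20-30% below its default limit.
        subprocess.run(["nvidia-smi", "-i", str(gpu_index), "-pl", str(watts)], check=True)

    if __name__ == "__main__":
        show_power_limits()
        # set_power_cap(0, 250)   # uncomment to apply a 250 W cap to GPU 0 (needs admin rights)
    ```
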
    Another strategy is changing our behavior to be more climate-aware. At home, some of us might choose to use renewable energy sources or intelligent scheduling. We are using similar techniques at the LLSC — such as training AI models when temperatures are cooler, or when local grid energy demand is low.

    We also realized that a lot of the energy spent on computing is often wasted, like how a water leak increases your bill but without any benefits to your home. We developed some new techniques that allow us to monitor computing workloads as they are running and then terminate those that are unlikely to yield good results. Surprisingly, in a number of cases we found that the majority of computations could be terminated early without compromising the end result.

    Q: What’s an example of a project you’ve done that reduces the energy output of a generative AI program?

    A: We recently built a climate-aware computer vision tool. Computer vision is a domain that’s focused on applying AI to images; so, differentiating between cats and dogs in an image, correctly labeling objects within an image, or looking for components of interest within an image.

    In our tool, we included real-time carbon telemetry, which produces information about how much carbon is being emitted by our local grid as a model is running. Depending on this information, our system will automatically switch to a more energy-efficient version of the model, which typically has fewer parameters, in times of high carbon intensity, or a much higher-fidelity version of the model in times of low carbon intensity.

    By doing this, we saw a nearly 80 percent reduction in carbon emissions over a one- to two-day period. We recently extended this idea to other generative AI tasks such as text summarization and found the same results. Interestingly, the performance sometimes improved after using our technique!

    Q: What can we do as consumers of generative AI to help mitigate its climate impact?

    A: As consumers, we can ask our AI providers to offer greater transparency. For example, on Google Flights, I can see a variety of options that indicate a specific flight’s carbon footprint. We should be getting similar kinds of measurements from generative AI tools so that we can make a conscious decision on which product or platform to use based on our priorities.

    We can also make an effort to be more educated on generative AI emissions in general. Many of us are familiar with vehicle emissions, and it can help to talk about generative AI emissions in comparative terms. People may be surprised to know, for example, that one image-generation task is roughly equivalent to driving four miles in a gas car, or that it takes the same amount of energy to charge an electric car as it does to generate about 1,500 text summarizations.

    There are many cases where customers would be happy to make a trade-off if they knew the trade-off’s impact.

    Q: What do you see for the future?

    A: Mitigating the climate impact of generative AI is one of those problems that people all over the world are working on, and with a similar goal. We’re doing a lot of work here at Lincoln Laboratory, but it’s only scratching the surface. In the long term, data centers, AI developers, and energy grids will need to work together to provide “energy audits” to uncover other unique ways that we can improve computing efficiencies. We need more partnerships and more collaboration in order to forge ahead.

    If you’re interested in learning more, or collaborating with Lincoln Laboratory on these efforts, please contact Vijay Gadepally.


Video: MIT Lincoln Laboratory

  • in

    Unlocking the hidden power of boiling — for energy, space, and beyond

Most people take boiling water for granted. For Associate Professor Matteo Bucci, uncovering the physics behind boiling has been a decade-long journey filled with unexpected challenges and new insights.

The seemingly simple phenomenon is extremely hard to study in complex systems like nuclear reactors, and yet it sits at the core of a wide range of important industrial processes. Unlocking its secrets could thus enable advances in efficient energy production, electronics cooling, water desalination, medical diagnostics, and more.

“Boiling is important for applications way beyond nuclear,” says Bucci, who earned tenure at MIT in July. “Boiling is used in 80 percent of the power plants that produce electricity. My research has implications for space propulsion, energy storage, electronics, and the increasingly important task of cooling computers.”

Bucci’s lab has developed new experimental techniques to shed light on a wide range of boiling and heat transfer phenomena that have limited energy projects for decades. Chief among those is a problem caused by bubbles forming so quickly they create a band of vapor across a surface that prevents further heat transfer. In 2023, Bucci and collaborators developed a unifying principle governing the problem, known as the boiling crisis, which could enable more efficient nuclear reactors and prevent catastrophic failures.

For Bucci, each bout of progress brings new possibilities — and new questions to answer.

“What’s the best paper?” Bucci asks. “The best paper is the next one. I think Alfred Hitchcock used to say it doesn’t matter how good your last movie was. If your next one is poor, people won’t remember it. I always tell my students that our next paper should always be better than the last. It’s a continuous journey of improvement.”

From engineering to bubbles

The Italian village where Bucci grew up had a population of about 1,000 during his childhood. He gained mechanical skills by working in his father’s machine shop and by taking apart and reassembling appliances like washing machines and air conditioners to see what was inside. He also gained a passion for cycling, competing in the sport until he attended the University of Pisa for undergraduate and graduate studies.

In college, Bucci was fascinated with matter and the origins of life, but he also liked building things, so when it came time to pick between physics and engineering, he decided nuclear engineering was a good middle ground.

“I have a passion for construction and for understanding how things are made,” Bucci says. “Nuclear engineering was a very unlikely but obvious choice. It was unlikely because in Italy, nuclear was already out of the energy landscape, so there were very few of us. At the same time, there were a combination of intellectual and practical challenges, which is what I like.”

For his PhD, Bucci went to France, where he met his wife, and went on to work at a French national lab. One day his department head asked him to work on a problem in nuclear reactor safety known as transient boiling. To solve it, he wanted to use a method for making measurements pioneered by MIT Professor Jacopo Buongiorno, so he received grant money to become a visiting scientist at MIT in 2013. He’s been studying boiling at MIT ever since.

Today Bucci’s lab is developing new diagnostic techniques to study boiling and heat transfer along with new materials and coatings that could make heat transfer more efficient.
The work has given researchers an unprecedented view into the conditions inside a nuclear reactor.

“The diagnostics we’ve developed can collect the equivalent of 20 years of experimental work in a one-day experiment,” Bucci says.

That data, in turn, led Bucci to a remarkably simple model describing the boiling crisis.

“The effectiveness of the boiling process on the surface of nuclear reactor cladding determines the efficiency and the safety of the reactor,” Bucci explains. “It’s like a car that you want to accelerate, but there is an upper limit. For a nuclear reactor, that upper limit is dictated by boiling heat transfer, so we are interested in understanding what that upper limit is and how we can overcome it to enhance the reactor performance.”

Another particularly impactful area of research for Bucci is two-phase immersion cooling, a process in which hot server components bring a liquid to a boil; the resulting vapor then condenses on a heat exchanger above, creating a constant, passive cooling cycle.

“It keeps chips cold with minimal waste of energy, significantly reducing the electricity consumption and carbon dioxide emissions of data centers,” Bucci explains. “Data centers emit as much CO2 as the entire aviation industry. By 2040, they will account for over 10 percent of emissions.”

Supporting students

Bucci says working with students is the most rewarding part of his job. “They have such great passion and competence. It’s motivating to work with people who have the same passion as you.”

“My students have no fear to explore new ideas,” Bucci adds. “They almost never stop in front of an obstacle — sometimes to the point where you have to slow them down and put them back on track.”

In running the Red Lab in the Department of Nuclear Science and Engineering, Bucci tries to give students independence as well as support.

“We’re not educating students, we’re educating future researchers,” Bucci says. “I think the most important part of our work is to not only provide the tools, but also to give the confidence and the self-starting attitude to fix problems. That can be business problems, problems with experiments, problems with your lab mates.”

Some of the more unique experiments Bucci’s students do require them to gather measurements while free falling in an airplane to achieve zero gravity.

“Space research is the big fantasy of all the kids,” says Bucci, who joins students in the experiments about twice a year. “It’s very fun and inspiring research for students. Zero g gives you a new perspective on life.”

Applying AI

Bucci is also excited about incorporating artificial intelligence into his field. In 2023, he was a co-recipient of a multi-university research initiative (MURI) project in thermal science dedicated solely to machine learning. In a nod to the promise AI holds in his field, Bucci also recently founded a journal called AI Thermal Fluids to feature AI-driven research advances.

“Our community doesn’t have a home for people that want to develop machine-learning techniques,” Bucci says. “We wanted to create an avenue for people in computer science and thermal science to work together to make progress.
I think we really need to bring computer scientists into our community to speed this process up.”

Bucci also believes AI can be used to process huge reams of data gathered using the new experimental techniques he’s developed, as well as to model phenomena researchers can’t yet study.

“It’s possible that AI will give us the opportunity to understand things that cannot be observed, or at least guide us in the dark as we try to find the root causes of many problems,” Bucci says.