More stories

  • Explained: Generative AI’s environmental impact

    In a two-part series, MIT News explores the environmental implications of generative AI. In this article, we look at why this technology is so resource-intensive. A second piece will investigate what experts are doing to reduce genAI’s carbon footprint and other impacts.

    The excitement surrounding potential benefits of generative AI, from improving worker productivity to advancing scientific research, is hard to ignore. While the explosive growth of this new technology has enabled rapid deployment of powerful models in many industries, the environmental consequences of this generative AI “gold rush” remain difficult to pin down, let alone mitigate.

    The computational power required to train generative AI models that often have billions of parameters, such as OpenAI’s GPT-4, can demand a staggering amount of electricity, which leads to increased carbon dioxide emissions and pressures on the electric grid.

    Furthermore, deploying these models in real-world applications, enabling millions to use generative AI in their daily lives, and then fine-tuning the models to improve their performance draws large amounts of energy long after a model has been developed.

    Beyond electricity demands, a great deal of water is needed to cool the hardware used for training, deploying, and fine-tuning generative AI models, which can strain municipal water supplies and disrupt local ecosystems. The increasing number of generative AI applications has also spurred demand for high-performance computing hardware, adding indirect environmental impacts from its manufacture and transport.

    “When we think about the environmental impact of generative AI, it is not just the electricity you consume when you plug the computer in. There are much broader consequences that go out to a system level and persist based on actions that we take,” says Elsa A. Olivetti, professor in the Department of Materials Science and Engineering and the lead of the Decarbonization Mission of MIT’s new Climate Project.

    Olivetti is senior author of a 2024 paper, “The Climate and Sustainability Implications of Generative AI,” co-authored by MIT colleagues in response to an Institute-wide call for papers that explore the transformative potential of generative AI, in both positive and negative directions for society.

    Demanding data centers

    The electricity demands of data centers are one major factor contributing to the environmental impacts of generative AI, since data centers are used to train and run the deep learning models behind popular tools like ChatGPT and DALL-E.

    A data center is a temperature-controlled building that houses computing infrastructure, such as servers, data storage drives, and network equipment. For instance, Amazon has more than 100 data centers worldwide, each of which has about 50,000 servers that the company uses to support cloud computing services.

    While data centers have been around since the 1940s (the first was built at the University of Pennsylvania in 1945 to support the first general-purpose digital computer, the ENIAC), the rise of generative AI has dramatically increased the pace of data center construction.

    “What is different about generative AI is the power density it requires. Fundamentally, it is just computing, but a generative AI training cluster might consume seven or eight times more energy than a typical computing workload,” says Noman Bashir, lead author of the impact paper, who is a Computing and Climate Impact Fellow at the MIT Climate and Sustainability Consortium (MCSC) and a postdoc in the Computer Science and Artificial Intelligence Laboratory (CSAIL).

    Scientists have estimated that the power requirements of data centers in North America increased from 2,688 megawatts at the end of 2022 to 5,341 megawatts at the end of 2023, partly driven by the demands of generative AI.
    Globally, the electricity consumption of data centers rose to 460 terawatt-hours in 2022. This would have made data centers the 11th-largest electricity consumer in the world, between the nations of Saudi Arabia (371 terawatt-hours) and France (463 terawatt-hours), according to the Organization for Economic Co-operation and Development.

    By 2026, the electricity consumption of data centers is expected to approach 1,050 terawatt-hours (which would bump data centers up to fifth place on the global list, between Japan and Russia).

    While not all data center computation involves generative AI, the technology has been a major driver of increasing energy demands.

    “The demand for new data centers cannot be met in a sustainable way. The pace at which companies are building new data centers means the bulk of the electricity to power them must come from fossil fuel-based power plants,” says Bashir.

    The power needed to train and deploy a model like OpenAI’s GPT-3 is difficult to ascertain. In a 2021 research paper, scientists from Google and the University of California at Berkeley estimated the training process alone consumed 1,287 megawatt-hours of electricity (enough to power about 120 average U.S. homes for a year), generating about 552 tons of carbon dioxide.

    While all machine-learning models must be trained, one issue unique to generative AI is the rapid fluctuations in energy use that occur over different phases of the training process, Bashir explains. Power grid operators must have a way to absorb those fluctuations to protect the grid, and they usually employ diesel-based generators for that task.

    Increasing impacts from inference

    Once a generative AI model is trained, the energy demands don’t disappear. Each time a model is used, perhaps by an individual asking ChatGPT to summarize an email, the computing hardware that performs those operations consumes energy.
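    The household-equivalence claim above can be checked with simple arithmetic. A minimal sketch; the per-home consumption figure is an assumption on my part (roughly the U.S. average), not a number from the article:

```python
# Back-of-envelope check on the GPT-3 training figures cited above.
TRAINING_KWH = 1_287_000       # 1,287 MWh, per the 2021 Google/Berkeley estimate
HOME_KWH_PER_YEAR = 10_700     # assumed average U.S. household consumption
CO2_TONS = 552                 # estimated training emissions

# How many average homes could run for a year on the training energy?
homes_powered_for_a_year = TRAINING_KWH / HOME_KWH_PER_YEAR

# Implied grid carbon intensity, in grams of CO2 per kWh.
implied_intensity_g_per_kwh = CO2_TONS * 1_000_000 / TRAINING_KWH

print(f"~{homes_powered_for_a_year:.0f} homes for a year")
print(f"~{implied_intensity_g_per_kwh:.0f} g CO2 per kWh implied")
```

    The result (about 120 homes) matches the article's figure, and the implied grid intensity (about 429 g CO2/kWh) is in the plausible range for a fossil-heavy grid, so the two cited numbers are consistent with each other.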
    Researchers have estimated that a ChatGPT query consumes about five times more electricity than a simple web search.

    “But an everyday user doesn’t think too much about that,” says Bashir. “The ease of use of generative AI interfaces and the lack of information about the environmental impacts of my actions means that, as a user, I don’t have much incentive to cut back on my use of generative AI.”

    With traditional AI, the energy usage is split fairly evenly between data processing, model training, and inference, which is the process of using a trained model to make predictions on new data. However, Bashir expects the electricity demands of generative AI inference to eventually dominate, since these models are becoming ubiquitous in so many applications, and the electricity needed for inference will increase as future versions of the models become larger and more complex.

    Plus, generative AI models have an especially short shelf life, driven by rising demand for new AI applications. Companies release new models every few weeks, so the energy used to train prior versions goes to waste, Bashir adds. New models often consume more energy for training, since they usually have more parameters than their predecessors.

    While the electricity demands of data centers may be getting the most attention in the research literature, the amount of water consumed by these facilities has environmental impacts as well. Chilled water is used to cool a data center by absorbing heat from computing equipment. It has been estimated that, for each kilowatt-hour of energy a data center consumes, it would need two liters of water for cooling, says Bashir.

    “Just because this is called ‘cloud computing’ doesn’t mean the hardware lives in the cloud.
    Data centers are present in our physical world, and because of their water usage they have direct and indirect implications for biodiversity,” he says.

    The computing hardware inside data centers brings its own, less direct environmental impacts. While it is difficult to estimate how much power is needed to manufacture a GPU, a type of powerful processor that can handle intensive generative AI workloads, it would be more than what is needed to produce a simpler CPU because the fabrication process is more complex. A GPU’s carbon footprint is compounded by the emissions related to material and product transport.

    There are also environmental implications of obtaining the raw materials used to fabricate GPUs, which can involve dirty mining procedures and the use of toxic chemicals for processing.

    Market research firm TechInsights estimates that the three major producers (NVIDIA, AMD, and Intel) shipped 3.85 million GPUs to data centers in 2023, up from about 2.67 million in 2022. That number is expected to have increased by an even greater percentage in 2024.

    The industry is on an unsustainable path, but there are ways to encourage responsible development of generative AI that supports environmental objectives, Bashir says. He, Olivetti, and their MIT colleagues argue that this will require a comprehensive consideration of all the environmental and societal costs of generative AI, as well as a detailed assessment of the value in its perceived benefits.

    “We need a more contextual way of systematically and comprehensively understanding the implications of new developments in this space. Due to the speed at which there have been improvements, we haven’t had a chance to catch up with our abilities to measure and understand the tradeoffs,” Olivetti says.
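    To put the two-liters-per-kilowatt-hour cooling figure in perspective, here is a rough sketch of what it implies at training and inference scale. The training energy estimate comes from earlier in the article; the query volume and per-query energy are purely illustrative assumptions of mine, not figures from the article:

```python
# Rough water-for-cooling arithmetic based on figures quoted in the article.
LITERS_PER_KWH = 2.0               # Bashir's cooling estimate
GPT3_TRAINING_KWH = 1_287_000      # 1,287 MWh training estimate cited above

# Hypothetical inference load, purely illustrative (not from the article):
QUERIES_PER_DAY = 10_000_000
KWH_PER_QUERY = 0.003              # assumed ~3 Wh per query

# Water implied by one training run: ~2.57 million liters.
training_water_liters = LITERS_PER_KWH * GPT3_TRAINING_KWH

# Water implied per day by the hypothetical inference load.
daily_inference_water_liters = LITERS_PER_KWH * QUERIES_PER_DAY * KWH_PER_QUERY
```

    Under these assumptions, a single training run implies millions of liters of cooling water, and a sustained inference load adds tens of thousands of liters every day, which is why inference, not just training, matters for water use.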

  • Q&A: The climate impact of generative AI

    Vijay Gadepally, a senior staff member at MIT Lincoln Laboratory, leads a number of projects at the Lincoln Laboratory Supercomputing Center (LLSC) to make computing platforms, and the artificial intelligence systems that run on them, more efficient. Here, Gadepally discusses the increasing use of generative AI in everyday tools, its hidden environmental impact, and some of the ways that Lincoln Laboratory and the greater AI community can reduce emissions for a greener future.

    Q: What trends are you seeing in terms of how generative AI is being used in computing?

    A: Generative AI uses machine learning (ML) to create new content, like images and text, based on data that is inputted into the ML system. At the LLSC we design and build some of the largest academic computing platforms in the world, and over the past few years we’ve seen an explosion in the number of projects that need access to high-performance computing for generative AI. We’re also seeing how generative AI is changing all sorts of fields and domains — for example, ChatGPT is already influencing the classroom and the workplace faster than regulations seem able to keep up.

    We can imagine all sorts of uses for generative AI within the next decade or so, like powering highly capable virtual assistants, developing new drugs and materials, and even improving our understanding of basic science.
    We can’t predict everything that generative AI will be used for, but I can certainly say that with more and more complex algorithms, their compute, energy, and climate impact will continue to grow very quickly.

    Q: What strategies is the LLSC using to mitigate this climate impact?

    A: We’re always looking for ways to make computing more efficient, as doing so helps our data center make the most of its resources and allows our scientific colleagues to push their fields forward in as efficient a manner as possible.

    As one example, we’ve been reducing the amount of power our hardware consumes by making simple changes, similar to dimming or turning off lights when you leave a room. In one experiment, we reduced the energy consumption of a group of graphics processing units by 20 to 30 percent, with minimal impact on their performance, by enforcing a power cap. This technique also lowered the hardware operating temperatures, making the GPUs easier to cool and longer lasting.

    Another strategy is changing our behavior to be more climate-aware. At home, some of us might choose to use renewable energy sources or intelligent scheduling. We are using similar techniques at the LLSC — such as training AI models when temperatures are cooler, or when local grid energy demand is low.

    We also realized that a lot of the energy spent on computing is often wasted, like how a water leak increases your bill but without any benefits to your home. We developed some new techniques that allow us to monitor computing workloads as they are running and then terminate those that are unlikely to yield good results. Surprisingly, in a number of cases we found that the majority of computations could be terminated early without compromising the end result.

    Q: What’s an example of a project you’ve done that reduces the energy output of a generative AI program?

    A: We recently built a climate-aware computer vision tool.
    Computer vision is a domain that’s focused on applying AI to images: differentiating between cats and dogs in an image, correctly labeling objects within an image, or looking for components of interest within an image.

    In our tool, we included real-time carbon telemetry, which produces information about how much carbon is being emitted by our local grid as a model is running. Depending on this information, our system will automatically switch to a more energy-efficient version of the model, which typically has fewer parameters, in times of high carbon intensity, or a much higher-fidelity version of the model in times of low carbon intensity.

    By doing this, we saw a nearly 80 percent reduction in carbon emissions over a one- to two-day period. We recently extended this idea to other generative AI tasks such as text summarization and found the same results. Interestingly, the performance sometimes improved after using our technique!

    Q: What can we do as consumers of generative AI to help mitigate its climate impact?

    A: As consumers, we can ask our AI providers to offer greater transparency. For example, on Google Flights, I can see a variety of options that indicate a specific flight’s carbon footprint. We should be getting similar kinds of measurements from generative AI tools so that we can make a conscious decision on which product or platform to use based on our priorities.

    We can also make an effort to be more educated on generative AI emissions in general. Many of us are familiar with vehicle emissions, and it can help to talk about generative AI emissions in comparative terms.
    People may be surprised to know, for example, that one image-generation task is roughly equivalent to driving four miles in a gas car, or that it takes the same amount of energy to charge an electric car as it does to generate about 1,500 text summarizations. There are many cases where customers would be happy to make a trade-off if they knew the trade-off’s impact.

    Q: What do you see for the future?

    A: Mitigating the climate impact of generative AI is one of those problems that people all over the world are working on, and with a similar goal. We’re doing a lot of work here at Lincoln Laboratory, but it’s only scratching the surface. In the long term, data centers, AI developers, and energy grids will need to work together to provide “energy audits” to uncover other unique ways that we can improve computing efficiencies. We need more partnerships and more collaboration in order to forge ahead.

    If you’re interested in learning more, or collaborating with Lincoln Laboratory on these efforts, please contact Vijay Gadepally.
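    The carbon-aware model switching Gadepally describes can be sketched in a few lines. This is a minimal illustration of the idea, not the LLSC's actual implementation; the threshold and model names are assumptions:

```python
# Hedged sketch of carbon-aware model selection: consult real-time grid
# carbon intensity and route work to a smaller model when the grid is dirty.
# The threshold and model names below are illustrative assumptions.

HIGH_CARBON_THRESHOLD = 400  # grams CO2 per kWh; assumed cutoff

def pick_model(carbon_intensity_g_per_kwh):
    """Return which model variant to serve given current grid carbon
    intensity (e.g., from a carbon-telemetry feed)."""
    if carbon_intensity_g_per_kwh >= HIGH_CARBON_THRESHOLD:
        return "efficient-small"      # fewer parameters, less energy per query
    return "high-fidelity-large"      # better quality when the grid is clean
```

    A production version would also need some hysteresis, so the system does not flap between model variants while the measured intensity hovers near the threshold.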


    Video: MIT Lincoln Laboratory

  • Unlocking the hidden power of boiling — for energy, space, and beyond

    Most people take boiling water for granted. For Associate Professor Matteo Bucci, uncovering the physics behind boiling has been a decade-long journey filled with unexpected challenges and new insights.

    The seemingly simple phenomenon is extremely hard to study in complex systems like nuclear reactors, and yet it sits at the core of a wide range of important industrial processes. Unlocking its secrets could thus enable advances in efficient energy production, electronics cooling, water desalination, medical diagnostics, and more.

    “Boiling is important for applications way beyond nuclear,” says Bucci, who earned tenure at MIT in July. “Boiling is used in 80 percent of the power plants that produce electricity. My research has implications for space propulsion, energy storage, electronics, and the increasingly important task of cooling computers.”

    Bucci’s lab has developed new experimental techniques to shed light on a wide range of boiling and heat transfer phenomena that have limited energy projects for decades. Chief among those is a problem caused by bubbles forming so quickly they create a band of vapor across a surface that prevents further heat transfer. In 2023, Bucci and collaborators developed a unifying principle governing the problem, known as the boiling crisis, which could enable more efficient nuclear reactors and prevent catastrophic failures.

    For Bucci, each bout of progress brings new possibilities — and new questions to answer.

    “What’s the best paper?” Bucci asks. “The best paper is the next one. I think Alfred Hitchcock used to say it doesn’t matter how good your last movie was. If your next one is poor, people won’t remember it. I always tell my students that our next paper should always be better than the last. It’s a continuous journey of improvement.”

    From engineering to bubbles

    The Italian village where Bucci grew up had a population of about 1,000 during his childhood.
    He gained mechanical skills by working in his father’s machine shop and by taking apart and reassembling appliances like washing machines and air conditioners to see what was inside. He also gained a passion for cycling, competing in the sport until he attended the University of Pisa for undergraduate and graduate studies.

    In college, Bucci was fascinated with matter and the origins of life, but he also liked building things, so when it came time to pick between physics and engineering, he decided nuclear engineering was a good middle ground.

    “I have a passion for construction and for understanding how things are made,” Bucci says. “Nuclear engineering was a very unlikely but obvious choice. It was unlikely because in Italy, nuclear was already out of the energy landscape, so there were very few of us. At the same time, there was a combination of intellectual and practical challenges, which is what I like.”

    For his PhD, Bucci went to France, where he met his wife, and went on to work at a French national lab. One day his department head asked him to work on a problem in nuclear reactor safety known as transient boiling. To solve it, he wanted to use a method for making measurements pioneered by MIT Professor Jacopo Buongiorno, so he received grant money to become a visiting scientist at MIT in 2013. He’s been studying boiling at MIT ever since.

    Today Bucci’s lab is developing new diagnostic techniques to study boiling and heat transfer along with new materials and coatings that could make heat transfer more efficient.
    The work has given researchers an unprecedented view into the conditions inside a nuclear reactor. “The diagnostics we’ve developed can collect the equivalent of 20 years of experimental work in a one-day experiment,” Bucci says.

    That data, in turn, led Bucci to a remarkably simple model describing the boiling crisis.

    “The effectiveness of the boiling process on the surface of nuclear reactor cladding determines the efficiency and the safety of the reactor,” Bucci explains. “It’s like a car that you want to accelerate, but there is an upper limit. For a nuclear reactor, that upper limit is dictated by boiling heat transfer, so we are interested in understanding what that upper limit is and how we can overcome it to enhance the reactor performance.”

    Another particularly impactful area of research for Bucci is two-phase immersion cooling, a process wherein hot server parts bring liquid to a boil, then the resulting vapor condenses on a heat exchanger above to create a constant, passive cycle of cooling.

    “It keeps chips cold with minimal waste of energy, significantly reducing the electricity consumption and carbon dioxide emissions of data centers,” Bucci explains. “Data centers emit as much CO2 as the entire aviation industry. By 2040, they will account for over 10 percent of emissions.”

    Supporting students

    Bucci says working with students is the most rewarding part of his job. “They have such great passion and competence. It’s motivating to work with people who have the same passion as you.”

    “My students have no fear to explore new ideas,” Bucci adds. “They almost never stop in front of an obstacle — sometimes to the point where you have to slow them down and put them back on track.”

    In running the Red Lab in the Department of Nuclear Science and Engineering, Bucci tries to give students independence as well as support. “We’re not educating students, we’re educating future researchers,” Bucci says.
    “I think the most important part of our work is to not only provide the tools, but also to give the confidence and the self-starting attitude to fix problems. That can be business problems, problems with experiments, problems with your lab mates.”

    Some of the more unique experiments Bucci’s students do require them to gather measurements while free-falling in an airplane to achieve zero gravity.

    “Space research is the big fantasy of all the kids,” says Bucci, who joins students in the experiments about twice a year. “It’s very fun and inspiring research for students. Zero g gives you a new perspective on life.”

    Applying AI

    Bucci is also excited about incorporating artificial intelligence into his field. In 2023, he was a co-recipient of a multi-university research initiative (MURI) project in thermal science dedicated solely to machine learning. In a nod to the promise AI holds in his field, Bucci also recently founded a journal called AI Thermal Fluids to feature AI-driven research advances.

    “Our community doesn’t have a home for people that want to develop machine-learning techniques,” Bucci says. “We wanted to create an avenue for people in computer science and thermal science to work together to make progress. I think we really need to bring computer scientists into our community to speed this process up.”

    Bucci also believes AI can be used to process huge reams of data gathered using the new experimental techniques he’s developed, as well as to model phenomena researchers can’t yet study.

    “It’s possible that AI will give us the opportunity to understand things that cannot be observed, or at least guide us in the dark as we try to find the root causes of many problems,” Bucci says.
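    For readers curious about the “upper limit” Bucci describes, the classical first-order estimate of that limit (the critical heat flux in pool boiling) is Zuber’s 1959 hydrodynamic correlation, shown here for context; it predates, and is refined by, the unifying work described above:

```latex
q''_{\mathrm{CHF}} = 0.131\, h_{fg}\, \rho_v^{1/2} \left[ \sigma g \left( \rho_l - \rho_v \right) \right]^{1/4}
```

    where h_fg is the latent heat of vaporization, ρ_l and ρ_v are the liquid and vapor densities, σ is the surface tension, and g is gravitational acceleration. For water at atmospheric pressure this predicts a limit on the order of 1 MW/m², beyond which the vapor blanket Bucci studies forms and heat transfer collapses.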

  • MIT delegation mainstreams biodiversity conservation at the UN Biodiversity Convention, COP16

    For the first time, MIT sent an organized delegation to the global Conference of the Parties for the Convention on Biological Diversity, which this year was held Oct. 21 to Nov. 1 in Cali, Colombia.

    The 10 delegates to COP16 included faculty, researchers, and students from the MIT Environmental Solutions Initiative (ESI), the Department of Electrical Engineering and Computer Science (EECS), the Computer Science and Artificial Intelligence Laboratory (CSAIL), the Department of Urban Studies and Planning (DUSP), the Institute for Data, Systems, and Society (IDSS), and the Center for Sustainability Science and Strategy.

    In previous years, MIT faculty had participated sporadically in the discussions. This organized engagement, led by the ESI, is significant because it brought representatives from many of the groups working on biodiversity across the Institute; showcased the breadth of MIT’s research in more than 15 events, including panels, roundtables, and keynote presentations across the Blue and Green Zones of the conference (the Blue Zone is the primary venue for the official negotiations and discussions, while the Green Zone hosts public events); and created an experiential learning opportunity for students, who followed specific topics in the negotiations and throughout side events.

    The conference also gathered attendees from governments, nongovernmental organizations, businesses, other academic institutions, and practitioners focused on stopping global biodiversity loss and advancing the 23 goals of the Kunming-Montreal Global Biodiversity Framework (KMGBF), an international agreement adopted in 2022 to guide global efforts to protect and restore biodiversity through 2030.

    MIT’s involvement was particularly pronounced when addressing goals related to building coalitions of sub-national governments (targets 11, 12, and 14); technology and AI for biodiversity conservation (targets 20 and 21); shaping equitable markets (targets 3, 11, and 19); and informing an action plan for Afro-descendant communities (targets 3, 10, and 22).

    Building coalitions of sub-national governments

    The ESI’s Natural Climate Solutions (NCS) Program was able to support two separate coalitions of Latin American cities, namely the Coalition of Cities Against Illicit Economies in the Biogeographic Chocó Region and the Colombian Amazonian Cities coalition, which successfully signed declarations to advance specific targets of the KMGBF (the aforementioned targets 11, 12, and 14).

    This was accomplished through roundtables and discussions where team members — including Marcela Angel, research program director at the MIT ESI; Angelica Mayolo, ESI Martin Luther King Fellow 2023-25; and Silvia Duque and Hannah Leung, MIT Master’s in City Planning students — presented a set of multi-scale actions, including transnational strategies, recommendations to strengthen local and regional institutions, and community-based actions to promote the conservation of the Biogeographic Chocó as an ecological corridor.

    “There is an urgent need to deepen the relationship between academia and local governments of cities located in biodiversity hotspots,” said Angel. “Given the scale and unique conditions of Amazonian cities, pilot research projects present an opportunity to test and generate a proof of concept. These could generate catalytic information needed to scale up climate adaptation and conservation efforts in socially and ecologically sensitive contexts.”

    ESI’s research also provided key inputs for the creation of the Fund for the Biogeographic Chocó Region, a multi-donor fund launched within the framework of COP16 by a coalition composed of Colombia, Ecuador, Panamá, and Costa Rica.
    The fund aims to support biodiversity conservation, ecosystem restoration, climate change mitigation and adaptation, and sustainable development efforts across the region.

    Technology and AI for biodiversity conservation

    Data, technology, and artificial intelligence are playing an increasing role in how we understand biodiversity and ecosystem change globally. Professor Sara Beery’s research group at MIT focuses on this intersection, developing AI methods that enable species and environmental monitoring at unprecedented spatial, temporal, and taxonomic scales.

    During the International Union of Biological Diversity Science-Policy Forum (the high-level COP16 segment focused on outlining recommendations from the scientific and academic community), Beery spoke on a panel alongside María Cecilia Londoño, scientific information manager of the Humboldt Institute and co-chair of the Global Biodiversity Observations Network, and Josh Tewksbury, director of the Smithsonian Tropical Research Institute, among others, about how these technological advancements will help humanity achieve our biodiversity targets.
    The panel stressed that AI innovation is needed, but must come with direct human-AI partnership, AI capacity building, and data and AI policy that ensure equity of access to, and benefit from, these technologies.

    As a direct outcome of the session, AI was emphasized for the first time in the statement on behalf of science and academia delivered by Hernando Garcia, director of the Humboldt Institute, and David Skorton, secretary general of the Smithsonian Institution, to the high-level segment of COP16. That statement read, “To effectively address current and future challenges, urgent action is required in equity, governance, valuation, infrastructure, decolonization and policy frameworks around biodiversity data and artificial intelligence.”

    Beery also organized a panel at the GEOBON pavilion in the Blue Zone on Scaling Biodiversity Monitoring with AI, which brought together global leaders from AI research, infrastructure development, capacity and community building, and policy and regulation. The panel was initiated, and its experts selected, from participants in the recent Aspen Global Change Institute Workshop on Overcoming Barriers to Impact in AI for Biodiversity, co-organized by Beery.

    Shaping equitable markets

    In a side event co-hosted by the ESI with CAF-Development Bank of Latin America, researchers from ESI’s Natural Climate Solutions Program — including Marcela Angel; Angelica Mayolo; Jimena Muzio, ESI research associate; and Martin Perez Lara, ESI research affiliate and director for Forest Climate Solutions Impact and Monitoring at World Wide Fund for Nature of the U.S. — presented results of a study titled “Voluntary Carbon Markets for Social Impact: Comprehensive Assessment of the Role of Indigenous Peoples and Local Communities (IPLC) in Carbon Forestry Projects in Colombia.” The report highlighted the structural barriers that hinder effective participation of IPLC and proposed a conceptual framework to assess IPLC engagement in voluntary carbon markets.

    Communicating these findings is important because the global carbon market has experienced a credibility crisis since 2023, influenced by critical assessments in academic literature, journalism questioning the quality of mitigation results, and persistent concerns about the engagement of private actors with IPLC. Nonetheless, carbon forestry projects have expanded rapidly in Indigenous, Afro-descendant, and local communities’ territories, and there is a need to assess the relationships between private actors and IPLC and to propose pathways for equitable participation.

    Panelists pose at the equitable markets side event at the Latin American Pavilion in the Blue Zone.


    The research presentation and subsequent panel, with representatives of Asocarbono (the Colombian association of carbon project developers), Fondo Acción, and CAF, further discussed recommendations for all actors in the value chain of carbon certificates — including those focused on promoting equitable benefit-sharing, safeguards compliance, increased accountability, enhanced governance structures, strengthened institutions, and regulatory frameworks — necessary to create an inclusive and transparent market.

    Informing an action plan for Afro-descendant communities

    The Afro-Interamerican Forum on Climate Change (AIFCC), an international network working to highlight the critical role of Afro-descendant peoples in global climate action, was also present at COP16.

    At the Afro Summit, Mayolo presented key recommendations prepared collectively by the members of AIFCC to the technical secretariat of the Convention on Biological Diversity (CBD). The recommendations emphasize:

    • creating financial tools for conservation and supporting Afro-descendant land rights;
    • including a credit guarantee fund for countries that recognize Afro-descendant collective land titling and research on their contributions to biodiversity conservation;
    • calling for increased representation of Afro-descendant communities in international policy forums;
    • capacity-building for local governments; and
    • strategies for inclusive growth in green business and energy transition.

    These actions aim to promote inclusive and sustainable development for Afro-descendant populations.

    “Attending COP16 with a large group from MIT contributing knowledge and informed perspectives at 15 separate events was a privilege and honor,” says MIT ESI Director John E. Fernández. “This demonstrates the value of the ESI as a powerful research and convening body at MIT. Science is telling us unequivocally that climate change and biodiversity loss are the two greatest challenges that we face as a species and a planet.
MIT has the capacity, expertise, and passion to address not only the former, but also the latter, and the ESI is committed to facilitating the very best contributions across the institute for the critical years that are ahead of us.”A fuller overview of the conference is available via The MIT Environmental Solutions Initiative’s Primer of COP16. More

  • in

    New AI tool generates realistic satellite images of future flooding

    Visualizing the potential impacts of a hurricane on people’s homes before it hits can help residents prepare and decide whether to evacuate.

    MIT scientists have developed a method that generates satellite imagery from the future to depict how a region would look after a potential flooding event. The method combines a generative artificial intelligence model with a physics-based flood model to create realistic, bird’s-eye-view images of a region, showing where flooding is likely to occur given the strength of an oncoming storm.

    As a test case, the team applied the method to Houston and generated satellite images depicting what certain locations around the city would look like after a storm comparable to Hurricane Harvey, which hit the region in 2017. The team compared these generated images with actual satellite images taken of the same regions after Harvey hit, as well as with AI-generated images that did not incorporate the physics-based flood model.

    The team’s physics-reinforced method generated satellite images of future flooding that were more realistic and accurate. The AI-only method, in contrast, generated images of flooding in places where flooding is not physically possible.

    The team’s method is a proof of concept, meant to demonstrate a case in which generative AI models can generate realistic, trustworthy content when paired with a physics-based model. In order to apply the method to other regions to depict flooding from future storms, it will need to be trained on many more satellite images to learn how flooding would look in those regions.

    “The idea is: One day, we could use this before a hurricane, where it provides an additional visualization layer for the public,” says Björn Lütjens, a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences, who led the research while he was a doctoral student in MIT’s Department of Aeronautics and Astronautics (AeroAstro).
    “One of the biggest challenges is encouraging people to evacuate when they are at risk. Maybe this could be another visualization to help increase that readiness.”

    To illustrate the potential of the new method, which they have dubbed the “Earth Intelligence Engine,” the team has made it available as an online resource for others to try.

    The researchers report their results today in the journal IEEE Transactions on Geoscience and Remote Sensing. The study’s MIT co-authors include Brandon Leshchinskiy; Aruna Sankaranarayanan; and Dava Newman, professor of AeroAstro and director of the MIT Media Lab; along with collaborators from multiple institutions.

    Generative adversarial images

    The new study is an extension of the team’s efforts to apply generative AI tools to visualize future climate scenarios.

    “Providing a hyper-local perspective of climate seems to be the most effective way to communicate our scientific results,” says Newman, the study’s senior author. “People relate to their own zip code, their local environment where their family and friends live. Providing local climate simulations becomes intuitive, personal, and relatable.”

    For this study, the authors use a conditional generative adversarial network, or GAN, a type of machine learning method that can generate realistic images using two competing, or “adversarial,” neural networks. The first, “generator” network is trained on pairs of real data, such as satellite images before and after a hurricane. The second, “discriminator” network is then trained to distinguish between real satellite imagery and imagery synthesized by the first network.

    Each network automatically improves its performance based on feedback from the other. The idea is that such an adversarial push and pull should ultimately produce synthetic images that are indistinguishable from the real thing.
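    To make the adversarial push and pull concrete, here is a deliberately tiny, self-contained sketch — not the paper’s conditional GAN, which operates on satellite images, but the same two-player idea in one dimension. All parameters, learning rates, and distributions below are illustrative assumptions: a one-parameter generator learns to place its output near the “real” data by fooling a logistic discriminator.

    ```python
    import numpy as np

    # Hypothetical minimal GAN sketch (illustrative only, not the study's model).
    rng = np.random.default_rng(0)

    def discriminator(x, w, b):
        # Logistic "real vs. fake" score in (0, 1)
        return 1.0 / (1.0 + np.exp(-(w * x + b)))

    # "Real" data: samples near 2.0. Generator: g(z) = a*z + c, starting far away.
    a, c = 0.0, 0.0   # generator parameters
    w, b = 0.1, 0.0   # discriminator parameters
    lr = 0.05

    for step in range(2000):
        z = rng.normal(size=64)
        real = rng.normal(2.0, 0.5, size=64)
        fake = a * z + c

        # Discriminator step: push D(real) toward 1 and D(fake) toward 0
        d_real = discriminator(real, w, b)
        d_fake = discriminator(fake, w, b)
        grad_w = np.mean((d_real - 1.0) * real) + np.mean(d_fake * fake)
        grad_b = np.mean(d_real - 1.0) + np.mean(d_fake)
        w -= lr * grad_w
        b -= lr * grad_b

        # Generator step: push D(fake) toward 1 (fool the discriminator)
        d_fake = discriminator(a * z + c, w, b)
        grad_fake = -(1.0 - d_fake) * w   # non-saturating generator gradient
        a -= lr * np.mean(grad_fake * z)
        c -= lr * np.mean(grad_fake)

    # After training, the generator's offset c has drifted toward the real mean.
    ```

    The feedback loop is visible in the two update steps: each network's gradient depends on the other's current parameters, which is what the article means by each network improving "based on feedback from the other."
    
    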
    Nevertheless, GANs can still produce “hallucinations”: factually incorrect features in an otherwise realistic image that shouldn’t be there.

    “Hallucinations can mislead viewers,” says Lütjens, who began to wonder whether such hallucinations could be avoided, so that generative AI tools can be trusted to help inform people, particularly in risk-sensitive scenarios. “We were thinking: How can we use these generative AI models in a climate-impact setting, where having trusted data sources is so important?”

    Flood hallucinations

    In their new work, the researchers considered a risk-sensitive scenario in which generative AI is tasked with creating satellite images of future flooding that could be trustworthy enough to inform decisions about how to prepare and potentially evacuate people out of harm’s way.

    Typically, policymakers can get an idea of where flooding might occur based on visualizations in the form of color-coded maps. These maps are the final product of a pipeline of physical models that usually begins with a hurricane track model, which feeds into a wind model that simulates the pattern and strength of winds over a local region. This is combined with a flood or storm-surge model that forecasts how wind might push any nearby body of water onto land. A hydraulic model then maps out where flooding will occur based on the local flood infrastructure, and generates a visual, color-coded map of flood elevations over a particular region.

    “The question is: Can visualizations of satellite imagery add another level to this, that is a bit more tangible and emotionally engaging than a color-coded map of reds, yellows, and blues, while still being trustworthy?” Lütjens says.

    The team first tested how generative AI alone would produce satellite images of future flooding. They trained a GAN on actual satellite images taken over Houston before and after Hurricane Harvey.
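    The staged pipeline of physical models described above — hurricane track, then wind, then storm surge, then a hydraulic flood map — can be sketched as a chain of toy functions. Every function body and coefficient here is an invented stand-in for demonstration, not the actual models used by forecasters:

    ```python
    import numpy as np

    # Illustrative stand-ins for each stage of the physical modeling pipeline.
    rng = np.random.default_rng(2)

    def hurricane_track_model(storm_intensity):
        # Toy: returns a landfall location and intensity
        return {"landfall_x": 0.0, "intensity": storm_intensity}

    def wind_model(track):
        # Toy: wind speed decays with distance from landfall
        xs = np.linspace(-5.0, 5.0, 11)
        speed = track["intensity"] * np.exp(-np.abs(xs - track["landfall_x"]))
        return {"x": xs, "wind_speed": speed}

    def storm_surge_model(wind):
        # Toy: surge height proportional to local wind speed
        return {"x": wind["x"], "surge_m": 0.05 * wind["wind_speed"]}

    def hydraulic_model(surge, elevation_m):
        # Toy: a cell floods by however much the surge exceeds its elevation
        return np.maximum(surge["surge_m"] - elevation_m, 0.0)

    elevation = rng.uniform(0.0, 3.0, size=11)     # terrain heights, meters
    track = hurricane_track_model(storm_intensity=60.0)
    wind = wind_model(track)
    surge = storm_surge_model(wind)
    flood_depth = hydraulic_model(surge, elevation)
    # flood_depth is the kind of quantity behind the color-coded flood maps
    ```

    The point of the chain is simply that each stage's output is the next stage's input, which is why errors or uncertainty at the track stage propagate all the way to the final map.
    
    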
    When they tasked the generator to produce new flood images of the same regions, they found that the images resembled typical satellite imagery, but a closer look revealed hallucinations in some images, in the form of floods where flooding should not be possible (for instance, in locations at higher elevation).

    To reduce hallucinations and increase the trustworthiness of the AI-generated images, the team paired the GAN with a physics-based flood model that incorporates real, physical parameters and phenomena, such as an approaching hurricane’s trajectory, storm surge, and flood patterns. With this physics-reinforced method, the team generated satellite images around Houston that depict the same flood extent, pixel by pixel, as forecasted by the flood model.

    “We show a tangible way to combine machine learning with physics for a use case that’s risk-sensitive, which requires us to analyze the complexity of Earth’s systems and project future actions and possible scenarios to keep people out of harm’s way,” Newman says. “We can’t wait to get our generative AI tools into the hands of decision-makers at the local community level, which could make a significant difference and perhaps save lives.”

    The research was supported, in part, by the MIT Portugal Program, the DAF-MIT Artificial Intelligence Accelerator, NASA, and Google Cloud.
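    The core idea of the physics-reinforced pairing — generated flood pixels are only allowed where the physics model says flooding is possible — can be illustrated with a simple masking sketch. This is an assumption-laden simplification, not the paper’s actual conditioning mechanism; the arrays stand in for a physics-model flood extent and a generator output:

    ```python
    import numpy as np

    # Illustrative sketch: constrain "generated" flooding to a physical extent.
    rng = np.random.default_rng(1)

    H = W = 8
    elevation = rng.uniform(0.0, 10.0, size=(H, W))  # toy terrain, meters
    surge_height = 4.0                               # stand-in physics output

    # Physics-based flood extent: only cells below the surge level can flood
    physical_mask = elevation < surge_height

    # Stand-in for raw GAN output: per-pixel "flooded" probabilities,
    # including hallucinations at high elevation
    gan_flood_prob = rng.uniform(0.0, 1.0, size=(H, W))

    # Physics-reinforced result: zero out flooding the model forbids
    constrained = np.where(physical_mask, gan_flood_prob, 0.0)
    ```

    In the actual system the physics model's flood extent conditions the generator itself rather than being applied as a post-hoc mask, but the invariant is the same: no flood pixel survives outside the physically possible extent.
    
    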

  • in

    A vision for U.S. science success

    White House science advisor Arati Prabhakar expressed confidence in U.S. science and technology capacities during a talk on Wednesday about major issues the country must tackle.

    “Let me start with the purpose of science and technology and innovation, which is to open possibilities so that we can achieve our great aspirations,” said Prabhakar, who is the director of the Office of Science and Technology Policy (OSTP) and a co-chair of the President’s Council of Advisors on Science and Technology (PCAST). “The aspirations that we have as a country today are as great as they have ever been,” she added.

    Much of Prabhakar’s talk focused on three major issues in science and technology development: cancer prevention, climate change, and AI. In the process, she also emphasized the necessity for the U.S. to sustain its global leadership in research across domains of science and technology, which she called “one of America’s long-time strengths.”

    “Ever since the end of the Second World War, we said we’re going in on basic research, we’re going to build our universities’ capacity to do it, we have an unparalleled basic research capacity, and we should always have that,” said Prabhakar.

    “We have gotten better, I think, in recent years at commercializing technology from our basic research,” Prabhakar added, noting, “Capital moves when you can see profit and growth.” The Biden administration, she said, has invested in a variety of new ways for the public and private sectors to work together to massively accelerate the movement of technology into the market.

    Wednesday’s talk drew a capacity audience of nearly 300 people in MIT’s Wong Auditorium and was hosted by the Manufacturing@MIT Working Group.
    The event included introductory remarks by Suzanne Berger, an Institute Professor and a longtime expert on the innovation economy, and Nergis Mavalvala, dean of the School of Science and an astrophysicist and leader in gravitational-wave detection.

    Introducing Mavalvala, Berger said the 2015 announcement of the discovery of gravitational waves “was the day I felt proudest and most elated to be a member of the MIT community,” and noted that U.S. government support helped make the research possible. Mavalvala, in turn, said MIT was “especially honored” to hear Prabhakar discuss leading-edge research and acknowledge the role of universities in strengthening the country’s science and technology sectors.

    Prabhakar has extensive experience in both government and the private sector. She has been OSTP director and co-chair of PCAST since October 2022. She served as director of the Defense Advanced Research Projects Agency (DARPA) from 2012 to 2017 and director of the National Institute of Standards and Technology (NIST) from 1993 to 1997. She has also held executive positions at Raychem and Interval Research, and spent a decade at the investment firm U.S. Venture Partners. An engineer by training, Prabhakar earned a BS in electrical engineering from Texas Tech University in 1979, an MA in electrical engineering from Caltech in 1980, and a PhD in applied physics from Caltech in 1984.

    Among other remarks about medicine, Prabhakar touted the Biden administration’s “Cancer Moonshot” program, which aims to cut the cancer death rate in half over the next 25 years through multiple approaches, from better health-care provision and cancer detection to limiting public exposure to carcinogens.
    We should be striving, Prabhakar said, for “a future in which people take good health for granted and can get on with their lives.”

    On AI, she addressed both the promise of the technology and concerns about it, saying, “I think it’s time for active steps to get on a path to where it actually allows people to do more and earn more.”

    When it comes to climate change, Prabhakar said, “We all understand that the climate is going to change. But it’s in our hands how severe those changes get. And it’s possible that we can build a better future.” She noted the bipartisan infrastructure bill signed into law in 2021 and the Biden administration’s Inflation Reduction Act as important steps forward in this fight.

    “Together those are making the single biggest investment anyone anywhere on the planet has ever made in the clean energy transition,” she said. “I used to feel hopeless about our ability to do that, and it gives me tremendous hope.”

    After her talk, Prabhakar was joined onstage for a group discussion with the three co-presidents of the MIT Energy and Climate Club: Laurentiu Anton, a doctoral candidate in electrical engineering and computer science; Rosie Keller, an MBA candidate at the MIT Sloan School of Management; and Thomas Lee, a doctoral candidate in MIT’s Institute for Data, Systems, and Society.

    Asked about seemingly sagging public confidence in science today, Prabhakar offered a few thoughts.

    “The first thing I would say is, don’t take it personally,” she said, noting that any dip in public regard for science is less severe than the diminished public confidence in other institutions.

    Adding some levity, she observed that in polling about which occupations are regarded as desirable for a marriage partner to have, “scientist” still ranks highly.

    “Scientists still do really well on that front, we’ve got that going for us,” she quipped.

    More seriously, Prabhakar observed, rather than “preaching” at the public, scientists should recognize that “part of the job for us is to continue to be clear about what we know are the facts, and to present them clearly but humbly, and to be clear that we’re going to continue working to learn more.” At the same time, she continued, scientists can always reinforce that “oh, by the way, facts are helpful things that can actually help you make better choices about how the future turns out. I think that would be better in my view.”

    Prabhakar said that her White House work had been guided, in part, by one of the overarching themes President Biden has often reinforced.

    “He thinks about America as a nation that can be described in a single word, and that word is ‘possibilities,’” she said. “And that idea, that is such a big idea, it lights me up. I think of what we do in the world of science and technology and innovation as really part and parcel of creating those possibilities.”

    Ultimately, Prabhakar said, at all times and all points in American history, scientists and technologists must continue “to prove once more that when people come together and do this work … we do it in a way that builds opportunity and expands opportunity for everyone in our country. I think this is the great privilege we all have in the work we do, and it’s also our responsibility.”

  • in

    Advancing urban tree monitoring with AI-powered digital twins

    The Irish philosopher George Berkeley, best known for his theory of immaterialism, once famously mused, “If a tree falls in a forest and no one is around to hear it, does it make a sound?”

    What about AI-generated trees? They probably wouldn’t make a sound, but they will be critical nonetheless for applications such as adapting urban flora to climate change. To that end, the novel “Tree-D Fusion” system developed by researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), Google, and Purdue University merges AI and tree-growth models with Google’s Auto Arborist data to create accurate 3D models of existing urban trees. The project has produced the first-ever large-scale database of 600,000 environmentally aware, simulation-ready tree models across North America.

    “We’re bridging decades of forestry science with modern AI capabilities,” says Sara Beery, MIT electrical engineering and computer science (EECS) assistant professor, MIT CSAIL principal investigator, and a co-author on a new paper about Tree-D Fusion. “This allows us to not just identify trees in cities, but to predict how they’ll grow and impact their surroundings over time. We’re not ignoring the past 30 years of work in understanding how to build these 3D synthetic models; instead, we’re using AI to make this existing knowledge more useful across a broader set of individual trees in cities around North America, and eventually the globe.”

    Tree-D Fusion builds on previous urban forest monitoring efforts that used Google Street View data, but branches it forward by generating complete 3D models from single images. While earlier attempts at tree modeling were limited to specific neighborhoods or struggled with accuracy at scale, Tree-D Fusion can create detailed models that include typically hidden features, such as the back sides of trees that aren’t visible in street-view photos.

    The technology’s practical applications extend far beyond mere observation.
    City planners could use Tree-D Fusion to one day peer into the future, anticipating where growing branches might tangle with power lines, or identifying neighborhoods where strategic tree placement could maximize cooling effects and air quality improvements. These predictive capabilities, the team says, could shift urban forest management from reactive maintenance to proactive planning.

    A tree grows in Brooklyn (and many other places)

    The researchers took a hybrid approach to their method, using deep learning to create a 3D envelope of each tree’s shape, then using traditional procedural models to simulate realistic branch and leaf patterns based on the tree’s genus. This combination helped the model predict how trees would grow under different environmental conditions and climate scenarios, such as different possible local temperatures and varying access to groundwater.

    Now, as cities worldwide grapple with rising temperatures, this research offers a new window into the future of urban forests. In a collaboration with MIT’s Senseable City Lab, the Purdue University and Google team is embarking on a global study that re-imagines trees as living climate shields. Their digital modeling system captures the intricate dance of shade patterns throughout the seasons, revealing how strategic urban forestry could help transform sweltering city blocks into more naturally cooled neighborhoods.

    “Every time a street mapping vehicle passes through a city now, we’re not just taking snapshots — we’re watching these urban forests evolve in real time,” says Beery.
    “This continuous monitoring creates a living digital forest that mirrors its physical counterpart, offering cities a powerful lens to observe how environmental stresses shape tree health and growth patterns across their urban landscape.”

    AI-based tree modeling has emerged as an ally in the quest for environmental justice: By mapping urban tree canopy in unprecedented detail, a sister project from the Google AI for Nature team has helped uncover disparities in green space access across different socioeconomic areas. “We’re not just studying urban forests — we’re trying to cultivate more equity,” says Beery. The team is now working closely with ecologists and tree health experts to refine these models, ensuring that as cities expand their green canopies, the benefits branch out to all residents equally.

    It’s a breeze

    While Tree-D Fusion marks some major “growth” in the field, trees can be uniquely challenging for computer vision systems. Unlike the rigid structures of buildings or vehicles that current 3D modeling techniques handle well, trees are nature’s shape-shifters — swaying in the wind, interweaving branches with neighbors, and constantly changing their form as they grow. The Tree-D Fusion models are “simulation-ready” in that they can estimate the shape of the trees in the future, depending on the environmental conditions.

    “What makes this work exciting is how it pushes us to rethink fundamental assumptions in computer vision,” says Beery. “While 3D scene understanding techniques like photogrammetry or NeRF [neural radiance fields] excel at capturing static objects, trees demand new approaches that can account for their dynamic nature, where even a gentle breeze can dramatically alter their structure from moment to moment.”

    The team’s approach of creating rough structural envelopes that approximate each tree’s form has proven remarkably effective, but certain issues remain unsolved.
    Perhaps the most vexing is the “entangled tree problem”: When neighboring trees grow into each other, their intertwined branches create a puzzle that no current AI system can fully unravel.

    The scientists see their dataset as a springboard for future innovations in computer vision, and they’re already exploring applications beyond street-view imagery, looking to extend their approach to platforms like iNaturalist and wildlife camera traps.

    “This marks just the beginning for Tree-D Fusion,” says Jae Joong Lee, a Purdue University PhD student who developed, implemented, and deployed the Tree-D Fusion algorithm. “Together with my collaborators, I envision expanding the platform’s capabilities to a planetary scale. Our goal is to use AI-driven insights in service of natural ecosystems — supporting biodiversity, promoting global sustainability, and ultimately, benefiting the health of our entire planet.”

    Beery and Lee’s co-authors are Jonathan Huang, Scaled Foundations head of AI (formerly of Google), and four others from Purdue University: PhD student Bosheng Li; Professor and Dean’s Chair of Remote Sensing Songlin Fei; Assistant Professor Raymond Yeh; and Professor and Associate Head of Computer Science Bedrich Benes. Their work is based on efforts supported by the United States Department of Agriculture’s (USDA) Natural Resources Conservation Service and is directly supported by the USDA’s National Institute of Food and Agriculture. The researchers presented their findings at the European Conference on Computer Vision this month.
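    The hybrid idea behind the system — a learned 3D envelope constraining a procedural growth model — can be illustrated with a toy sketch. Everything here is an invented stand-in: a fixed ellipsoid plays the role of the deep-learning-estimated crown envelope, and a simple recursive branching rule plays the role of the genus-specific procedural model, which is only allowed to grow inside the envelope.

    ```python
    import math
    import random

    # Illustrative hybrid sketch (not Tree-D Fusion's actual models).
    random.seed(0)

    def inside_envelope(x, y, z, rx=2.0, ry=2.0, rz=3.0, cz=4.0):
        # Stand-in "learned" crown envelope: an ellipsoid centered at height cz
        return (x / rx) ** 2 + (y / ry) ** 2 + ((z - cz) / rz) ** 2 <= 1.0

    def grow(x, y, z, heading, pitch, length, depth, points):
        # Stand-in procedural rule: binary branching with jittered angles,
        # pruned wherever growth would leave the envelope
        if depth == 0 or length < 0.1:
            return
        nx = x + length * math.cos(pitch) * math.cos(heading)
        ny = y + length * math.cos(pitch) * math.sin(heading)
        nz = z + length * math.sin(pitch)
        if not inside_envelope(nx, ny, nz):
            return
        points.append((nx, ny, nz))
        for _ in range(2):
            grow(nx, ny, nz,
                 heading + random.uniform(-0.8, 0.8),
                 pitch + random.uniform(-0.4, 0.4),
                 length * 0.7, depth - 1, points)

    points = []
    grow(0.0, 0.0, 0.0, heading=0.0, pitch=math.pi / 2,
         length=2.0, depth=6, points=points)
    # Every retained branch point lies inside the "learned" envelope
    ```

    The design point the sketch tries to show is the division of labor: the envelope supplies the tree-specific overall shape, while the procedural rule fills in plausible branch structure that the single street-view image could never reveal directly.
    
    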

  • in

    Ensuring a durable transition

    To fend off the worst impacts of climate change, “we have to decarbonize, and do it even faster,” said William H. Green, director of the MIT Energy Initiative (MITEI) and Hoyt C. Hottel Professor in the MIT Department of Chemical Engineering, at MITEI’s Annual Research Conference.

    “But how the heck do we actually achieve this goal when the United States is in the middle of a divisive election campaign, and globally, we’re facing all kinds of geopolitical conflicts, trade protectionism, weather disasters, increasing demand from developing countries building a middle class, and data centers in countries like the U.S.?”

    Researchers, government officials, and business leaders convened in Cambridge, Massachusetts, Sept. 25-26 to wrestle with this vexing question at the conference, which was themed “A durable energy transition: How to stay on track in the face of increasing demand and unpredictable obstacles.”

    “In this room we have a lot of power,” said Green, “if we work together, convey to all of society what we see as real pathways and policies to solve problems, and take collective action.”

    The critical role of consensus-building in driving the energy transition arose repeatedly in conference sessions, whether the topic involved developing and adopting new technologies, constructing and siting infrastructure, drafting and passing vital energy policies, or attracting and retaining a skilled workforce.

    Resolving conflicts

    There is “blowback and a social cost” in transitioning away from fossil fuels, said Stephen Ansolabehere, the Frank G. Thompson Professor of Government at Harvard University, in a panel on the social barriers to decarbonization. “Companies need to engage differently and recognize the rights of communities,” he said.

    Nora DeDontney, director of development at Vineyard Offshore, described her company’s two years of outreach and negotiations to bring large cables from ocean-based wind turbines onshore.

    “Our motto is, ‘community first,’” she said.
    Her company works to mitigate any impacts towns might feel from offshore wind infrastructure construction with projects such as sewer upgrades; provides workforce training to Tribal Nations; and lays out wind turbines in a manner that provides safe and reliable areas for local fisheries.

    Elsa A. Olivetti, professor in the Department of Materials Science and Engineering at MIT and the lead of the Decarbonization Mission of MIT’s new Climate Project, discussed the urgent need for rapid scale-up of mineral extraction. “Estimates indicate that to electrify the vehicle fleet by 2050, about six new large copper mines need to come online each year,” she said. Meeting the demand for metals in the United States means pushing into Indigenous lands and environmentally sensitive habitats. “The timeline of permitting is not aligned with the temporal acceleration needed,” she said.

    Larry Susskind, the Ford Professor of Urban and Environmental Planning in the MIT Department of Urban Studies and Planning, is trying to resolve such tensions with universities playing the role of mediators. He is creating renewable energy clinics where students train to participate in emerging disputes over siting. “Talk to people before decisions are made, conduct joint fact-finding, so that facilities reduce harms and share the benefits,” he said.

    Clean energy boom and pressure

    A relatively recent and unforeseen increase in demand for energy comes from data centers, which are being built by large technology companies for new offerings such as artificial intelligence.

    “General energy demand was flat for 20 years — and now, boom,” said Sean James, Microsoft’s senior director of data center research. “It caught utilities flat-footed.” With the expansion of AI, the rush to provision data centers with upwards of 35 gigawatts of new (and mainly renewable) power in the near future intensifies pressure on big companies to balance the concerns of stakeholders across multiple domains.
    Google is pursuing 24/7 carbon-free energy by 2030, said Devon Swezey, the company’s senior manager for global energy and climate. “We’re pursuing this by purchasing more and different types of clean energy locally, and accelerating technological innovation such as next-generation geothermal projects,” he said.

    Pedro Gómez Lopez, strategy and development director at Ferrovial Digital, which designs and constructs data centers, said the company incorporates renewable energy into its projects, which contributes to decarbonization goals and benefits the locales where they are sited. “We can create a new supply of power, taking the heat generated by a data center to residences or industries in neighborhoods through district heating initiatives,” he said.

    The Inflation Reduction Act and other legislation have ramped up employment opportunities in clean energy nationwide, touching every region, including those most tied to fossil fuels. “At the start of 2024 there were about 3.5 million clean energy jobs, with ‘red’ states showing the fastest growth in clean energy jobs,” said David S. Miller, managing partner at Clean Energy Ventures. “The majority (58 percent) of new jobs in energy are now in clean energy — that transition has happened. And one in 16 new jobs nationwide were in clean energy, with clean energy jobs growing more than three times faster than job growth economy-wide.”

    In this rapid expansion, the U.S. Department of Energy (DoE) is prioritizing economically marginalized places, according to Zoe Lipman, lead for good jobs and labor standards in the Office of Energy Jobs at the DoE. “The community benefit process is integrated into our funding,” she said. “We are creating the foundation of a virtuous circle,” encouraging benefits to flow to disadvantaged and energy communities, spurring workforce training partnerships, and promoting well-paid union jobs.
    “These policies incentivize proactive community and labor engagement, and deliver community benefits, both of which are key to building support for technological change.”

    Hydrogen opportunity and challenge

    While engagement with stakeholders helps clear the path for implementation of technology and the spread of infrastructure, there remain enormous policy, scientific, and engineering challenges to solve, said multiple conference participants. In a “fireside chat,” Prasanna V. Joshi, vice president of low-carbon-solutions technology at ExxonMobil, and Ernest J. Moniz, professor of physics and special advisor to the president at MIT, discussed efforts to replace natural gas and coal with zero-carbon hydrogen in order to reduce greenhouse gas emissions in such major industries as steel and fertilizer manufacturing.

    “We have gone into an era of industrial policy,” said Moniz, citing a new DoE program offering incentives to generate demand for hydrogen — more costly than conventional fossil fuels — in end-use applications. “We are going to have to transition from our current approach, which I would call carrots-and-twigs, to ultimately, carrots-and-sticks,” Moniz warned, in order to create “a self-sustaining, major, scalable, affordable hydrogen economy.”

    To achieve net-zero emissions by 2050, ExxonMobil intends to use carbon capture and sequestration in natural gas-based hydrogen and ammonia production. Ammonia can also serve as a zero-carbon fuel. Industry is exploring burning ammonia directly in coal-fired power plants to extend the hydrogen value chain. But there are challenges. “How do you burn 100 percent ammonia?” asked Joshi.
    “That’s one of the key technology breakthroughs that’s needed.” Joshi believes that collaboration with MIT’s “ecosystem of breakthrough innovation” will be essential to breaking logjams around the hydrogen- and ammonia-based industries.

    MIT ingenuity essential

    The energy transition is placing very different demands on different regions around the world. Take India, where today per capita power consumption is among the lowest in the world. But Indians “are an aspirational people … and with increasing urbanization and industrial activity, the growth in power demand is expected to triple by 2050,” said Praveer Sinha, CEO and managing director of the Tata Power Co. Ltd., in his keynote speech. For that nation, which currently relies on coal, the move to clean energy means bringing another 300 gigawatts of zero-carbon capacity online in the next five years. Sinha sees this power coming from wind, solar, and hydro, supplemented by nuclear energy.

    “India plans to triple nuclear power generation capacity by 2032, and is focusing on advancing small modular reactors,” said Sinha. “The country also needs the rapid deployment of storage solutions to firm up the intermittent power.” The goal is to provide reliable electricity 24/7 to a population living both in large cities and in geographically remote villages, with the help of long-range transmission lines and local microgrids. “India’s energy transition will require innovative and affordable technology solutions, and there is no better place to go than MIT, where you have the best brains, startups, and technology,” he said.

    These assets were on full display at the conference.
    Among them was a cluster of young businesses, including:

    the MIT spinout Form Energy, which has developed a 100-hour iron battery as a backstop to renewable energy sources in case of multi-day interruptions;

    the startup Noya, which aims for direct air capture of atmospheric CO2 using carbon-based materials;

    the firm Active Surfaces, with a lightweight material for putting solar photovoltaics in previously inaccessible places;

    Copernic Catalysts, with new chemistry for making ammonia and sustainable aviation fuel far more inexpensively than current processes; and

    Sesame Sustainability, a software platform spun out of MITEI that gives industries a full financial analysis of the costs and benefits of decarbonization.

    The pipeline of research talent extended into the undergraduate ranks, with a conference “slam” competition showcasing students’ summer research projects in areas from carbon capture using enzymes to 3D design for the coils used in fusion energy confinement.

    “MIT students like me are looking to be the next generation of energy leaders, looking for careers where we can apply our engineering skills to tackle exciting climate problems and make a tangible impact,” said Trent Lee, a junior in mechanical engineering researching improvements in lithium-ion energy storage. “We are stoked by the energy transition, because it’s not just the future, but our chance to build it.”