More stories

  • Jessika Trancik named director of the Sociotechnical Systems Research Center

    Jessika Trancik, a professor in MIT’s Institute for Data, Systems, and Society, has been named the new director of the Sociotechnical Systems Research Center (SSRC), effective July 1. The SSRC convenes and supports researchers focused on problems and solutions at the intersection of technology and its societal impacts.

    Trancik conducts research on technology innovation and energy systems. At the Trancik Lab, she and her team develop methods drawing on engineering knowledge, data science, and policy analysis. Their work examines the pace and drivers of technological change, helping identify where innovation is occurring most rapidly, how emerging technologies stack up against existing systems, and which performance thresholds matter most for real-world impact. Her models have been used to inform government innovation policy and have been applied across a wide range of industries.

    “Professor Trancik’s deep expertise in the societal implications of technology, and her commitment to developing impactful solutions across industries, make her an excellent fit to lead SSRC,” says Maria C. Yang, interim dean of engineering and William E. Leonhard (1940) Professor of Mechanical Engineering.

    Much of Trancik’s research focuses on the domain of energy systems and on establishing methods for energy technology evaluation, including their costs, performance, and environmental impacts. She covers a wide range of energy services — including electricity, transportation, heating, and industrial processes. Her research has applications in solar and wind energy, energy storage, low-carbon fuels, electric vehicles, and nuclear fission. Trancik is also known for her research on extreme events in renewable energy availability.

    A prolific researcher, Trancik has helped measure progress and inform the development of solar photovoltaics, batteries, electric vehicle charging infrastructure, and other low-carbon technologies — and anticipate future trends. One of her widely cited contributions is quantifying learning rates and identifying where targeted investments can most effectively accelerate innovation. These tools have been used by U.S. federal agencies, international organizations, and the private sector to shape energy R&D portfolios, climate policy, and infrastructure planning.

    Trancik is committed to engaging and informing the public on energy consumption. She and her team developed the app carboncounter.com, which helps users choose cars with low costs and low environmental impacts.

    As an educator, Trancik teaches courses for students across MIT’s five schools and the MIT Schwarzman College of Computing.

    “The question guiding my teaching and research is how do we solve big societal challenges with technology, and how can we be more deliberate in developing and supporting technologies to get us there?” Trancik said in an article about course IDS.521/IDS.065 (Energy Systems for Climate Change Mitigation).

    Trancik received her undergraduate degree in materials science and engineering from Cornell University. As a Rhodes Scholar, she completed her PhD in materials science at the University of Oxford. She subsequently worked for the United Nations in Geneva, Switzerland, and the Earth Institute at Columbia University. After serving as an Omidyar Research Fellow at the Santa Fe Institute, she joined MIT in 2010 as a faculty member.

    Trancik succeeds Fotini Christia, the Ford International Professor of Social Sciences in the Department of Political Science and director of IDSS, who previously served as director of SSRC.
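    The learning rates mentioned above come from fitting a power law (often called Wright's law) relating unit cost to cumulative production. As a rough illustration of how such a rate is estimated, the sketch below fits invented cost data, not figures from Trancik's research:

```python
import math

# Wright's law: cost ~ a * (cumulative production)^(-b).
# The "learning rate" is the fractional cost drop per doubling: LR = 1 - 2^(-b).
# Illustrative data (cumulative GW, $/W) loosely shaped like PV's history,
# not actual figures from any study.
data = [(1, 10.0), (10, 4.5), (100, 2.0), (1000, 0.9)]

# Ordinary least-squares fit of log(cost) against log(production).
n = len(data)
xs = [math.log(p) for p, _ in data]
ys = [math.log(c) for _, c in data]
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = -slope
learning_rate = 1 - 2 ** (-b)
print(f"experience exponent b = {b:.3f}")
print(f"learning rate = {learning_rate:.1%} cost reduction per doubling")
```

    With these made-up numbers the fit gives a learning rate near 20 percent per doubling, which is the ballpark often cited for solar PV.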

  • Surprisingly diverse innovations led to dramatically cheaper solar panels

    The cost of solar panels has dropped by more than 99 percent since the 1970s, enabling widespread adoption of photovoltaic systems that convert sunlight into electricity.

    A new MIT study drills down on specific innovations that enabled such dramatic cost reductions, revealing that technical advances across a web of diverse research efforts and industries played a pivotal role.

    The findings could help renewable energy companies make more effective R&D investment decisions and aid policymakers in identifying areas to prioritize to spur growth in manufacturing and deployment.

    The researchers’ modeling approach shows that key innovations often originated outside the solar sector, including advances in semiconductor fabrication, metallurgy, glass manufacturing, oil and gas drilling, construction processes, and even legal domains.

    “Our results show just how intricate the process of cost improvement is, and how much scientific and engineering advances, often at a very basic level, are at the heart of these cost reductions. A lot of knowledge was drawn from different domains and industries, and this network of knowledge is what makes these technologies improve,” says study senior author Jessika Trancik, a professor in MIT’s Institute for Data, Systems, and Society.

    Trancik is joined on the paper by co-lead authors Goksin Kavlak, a former IDSS graduate student and postdoc who is now a senior energy associate at the Brattle Group; Magdalena Klemun, a former IDSS graduate student and postdoc who is now an assistant professor at Johns Hopkins University; former MIT postdoc Ajinkya Kamat; as well as Brittany Smith and Robert Margolis of the National Renewable Energy Laboratory. The research appears today in PLOS ONE.

    Identifying innovations

    This work builds on mathematical models that the researchers previously developed to tease out the effects of engineering technologies on the cost of photovoltaic (PV) modules and systems. In this study, the researchers aimed to dig even deeper into the scientific advances that drove those cost declines. They combined their quantitative cost model with a detailed, qualitative analysis of innovations that affected the costs of PV system materials, manufacturing steps, and deployment processes.

    “Our quantitative cost model guided the qualitative analysis, allowing us to look closely at innovations in areas that are hard to measure due to a lack of quantitative data,” Kavlak says.

    Building on earlier work identifying key cost drivers — such as the number of solar cells per module, wiring efficiency, and silicon wafer area — the researchers conducted a structured scan of the literature for innovations likely to affect these drivers. Next, they grouped these innovations to identify patterns, revealing clusters that reduced costs by improving materials or prefabricating components to streamline manufacturing and installation. Finally, the team tracked industry origins and timing for each innovation, and consulted domain experts to zero in on the most significant innovations.

    All told, they identified 81 unique innovations that affected PV system costs since 1970, from improvements in antireflective coated glass to the implementation of fully online permitting interfaces.

    “With innovations, you can always go to a deeper level, down to things like raw materials processing techniques, so it was challenging to know when to stop. Having that quantitative model to ground our qualitative analysis really helped,” Trancik says.

    They chose to separate PV module costs from so-called balance-of-system (BOS) costs, which cover things like mounting systems, inverters, and wiring. PV modules, which are wired together to form solar panels, are mass-produced and can be exported, while many BOS components are designed, built, and sold at the local level.

    “By examining innovations both at the BOS level and within the modules, we identify the different types of innovations that have emerged in these two parts of PV technology,” Kavlak says.

    BOS costs depend more on soft technologies, nonphysical elements such as permitting procedures, which have contributed significantly less to PV’s past cost improvement than hardware innovations have.

    “Often, it comes down to delays. Time is money, and if you have delays on construction sites and unpredictable processes, that affects these balance-of-system costs,” Trancik says.

    Innovations such as automated permitting software, which flags code-compliant systems for fast-track approval, show promise. Though not yet quantified in this study, the team’s framework could support future analysis of their economic impact and of similar innovations that streamline deployment processes.

    Interconnected industries

    The researchers found that innovations from the semiconductor, electronics, metallurgy, and petroleum industries played a major role in reducing both PV and BOS costs, but BOS costs were also affected by innovations in software engineering and electric utilities.

    Noninnovation factors, like efficiency gains from bulk purchasing and the accumulation of knowledge in the solar power industry, also reduced some cost variables.

    In addition, while most PV panel innovations originated in research organizations or industry, many BOS innovations were developed by city governments, U.S. states, or professional associations.

    “I knew there was a lot going on with this technology, but the diversity of all these fields and how closely linked they are, and the fact that we can clearly see that network through this analysis, was interesting,” Trancik says.

    “PV was very well-positioned to absorb innovations from other industries — thanks to the right timing, physical compatibility, and supportive policies to adapt innovations for PV applications,” Klemun adds.

    The analysis also reveals the role greater computing power could play in reducing BOS costs through advances like automated engineering review systems and remote site assessment software.

    “In terms of knowledge spillovers, what we’ve seen so far in PV may really just be the beginning,” Klemun says, pointing to the expanding role of robotics and AI-driven digital tools in driving future cost reductions and quality improvements.

    In addition to their qualitative analysis, the researchers demonstrated how this methodology could be used to estimate the quantitative impact of a particular innovation if one has the numerical data to plug into the cost equation. For instance, using information about material prices and manufacturing procedures, they estimate that wire sawing, a technique introduced in the 1980s, led to an overall PV system cost decrease of $5 per watt by reducing silicon losses and increasing throughput during fabrication.

    “Through this retrospective analysis, you learn something valuable for future strategy because you can see what worked and what didn’t work, and the models can also be applied prospectively. It is also useful to know what adjacent sectors may help support improvement in a particular technology,” Trancik says.

    Moving forward, the researchers plan to apply this methodology to a wide range of technologies, including other renewable energy systems. They also want to further study soft technology to identify innovations or processes that could accelerate cost reductions.

    “Although the process of technological innovation may seem like a black box, we’ve shown that you can study it just like any other phenomena,” Trancik says.

    This research is funded, in part, by the U.S. Department of Energy Solar Energy Technologies Office.
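    The wire-sawing estimate illustrates the paper's general recipe: translate an innovation's effect on physical cost drivers (here, kerf loss and throughput) into a dollars-per-watt change through a cost equation. A toy sketch of that recipe, with invented parameters rather than the study's data:

```python
# Toy version of the idea: plug an innovation's effect on physical cost
# drivers into a cost equation and read off the $/W change.
# All numbers below are made up for illustration, not values from the study.

def cost_per_watt(si_price_per_kg, kerf_loss_frac, wafers_per_hr,
                  machine_cost_per_hr, si_kg_per_wafer, watts_per_wafer):
    # Silicon cost is inflated by kerf (sawing) losses.
    si_cost = si_price_per_kg * si_kg_per_wafer / (1 - kerf_loss_frac)
    # Machine time cost per wafer falls as throughput rises.
    machine_cost = machine_cost_per_hr / wafers_per_hr
    return (si_cost + machine_cost) / watts_per_wafer

# "Before": slurry sawing with high kerf loss and low throughput.
before = cost_per_watt(si_price_per_kg=60.0, kerf_loss_frac=0.5,
                       wafers_per_hr=10.0, machine_cost_per_hr=80.0,
                       si_kg_per_wafer=0.02, watts_per_wafer=1.0)
# "After": wire sawing cuts kerf loss and triples throughput.
after = cost_per_watt(si_price_per_kg=60.0, kerf_loss_frac=0.3,
                      wafers_per_hr=30.0, machine_cost_per_hr=80.0,
                      si_kg_per_wafer=0.02, watts_per_wafer=1.0)
print(f"before: ${before:.2f}/W, after: ${after:.2f}/W, "
      f"saving: ${before - after:.2f}/W")
```

    The point is the structure, not the numbers: once an innovation's effect on each driver is known, the cost equation converts it into a system-level saving.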

  • MIT-Africa launches new collaboration with Angola

    The MIT Center for International Studies announced the launch of a new pilot initiative with Angola, to be implemented through its MIT-Africa Program.

    The new initiative marks a significant collaboration between MIT-Africa, Sonangol (Angola’s national energy company), and the Instituto Superior Politécnico de Tecnologias e Ciências (ISPTEC). The collaboration was formalized at a signing ceremony on MIT’s campus in June with key stakeholders from all three institutions present, including Diamantino Pedro Azevedo, the Angolan minister of mineral resources, petroleum, and gas, and Sonangol CEO Gaspar Martins.

    “This partnership marks a pivotal step in the Angolan government’s commitment to leveraging knowledge as the cornerstone of the country’s economic transformation,” says Azevedo. “By connecting the oil and gas sector with science, innovation, and world-class training, we are equipping future generations to lead Angola into a more technological, sustainable, and globally competitive era.”

    The sentiment is shared by the MIT-Africa Program leaders. “This initiative reflects MIT’s deep commitment to fostering meaningful, long-term relationships across the African continent,” says Mai Hassan, faculty director of the MIT-Africa Program. “It supports our mission of advancing knowledge and educating students in ways that are globally informed, and it provides a platform for mutual learning. By working with Angolan partners, we gain new perspectives and opportunities for innovation that benefit both MIT and our collaborators.”

    In addition to its new collaboration with MIT-Africa, Sonangol has joined MIT’s Industrial Liaison Program (ILP), breaking new ground as its first corporate member based in sub-Saharan Africa. ILP enables companies worldwide to harness MIT resources to address current challenges and to anticipate future needs. As an ILP member, Sonangol seeks to facilitate collaboration in key sectors such as natural resources and mining, energy, construction, and infrastructure.

    The MIT-Africa Program manages a portfolio of research, teaching, and learning initiatives that emphasize two-way value — offering impactful experiences to MIT students and faculty while collaborating closely with institutions and communities across Africa. The new Angola collaboration is aligned with this ethos, and will launch with two core activities during the upcoming academic year:

    • Global Classroom: An MIT course on geospatial technologies for environmental monitoring, taught by an MIT faculty member, will be brought directly to the ISPTEC campus, offering Angolan students and MIT participants a collaborative, in-country learning experience.

    • Global Teaching Labs: MIT students will travel to ISPTEC to teach science, technology, engineering, arts, and mathematics subjects on renewable energy technologies, engaging Angolan students through hands-on instruction.

    “This is not a traditional development project,” says Ari Jacobovits, managing director of MIT-Africa. “This is about building genuine partnerships rooted in academic rigor, innovation, and shared curiosity. The collaboration has been designed from the ground up with our partners at ISPTEC and Sonangol. We’re coming in with a readiness to learn as much as we teach.”

    The pilot marks an important first step in establishing a long-term collaboration with Angola. By investing in collaborative education and innovation, the new initiative aims to spark novel approaches to global challenges and strengthen academic institutions on both sides.

    These agreements with MIT-Africa and ILP “not only enhance our innovation and technological capabilities, but also create opportunities for sustainable development and operational excellence,” says Martins. “They advance our mission to be a leading force in the African energy sector.”

    “The vision behind this initiative is bold,” says Hassan. “It’s about co-creating knowledge and building capacity that lasts.”

  • Theory-guided strategy expands the scope of measurable quantum interactions

    A new theory-guided framework could help scientists probe the properties of new semiconductors for next-generation microelectronic devices, or discover materials that boost the performance of quantum computers.

    Research to develop new or better materials typically involves investigating properties that can be reliably measured with existing lab equipment, but this represents just a fraction of the properties that scientists could probe in principle. Some properties remain effectively “invisible” because they are too difficult to capture directly with existing methods.

    Take electron-phonon interaction — this property plays a critical role in a material’s electrical, thermal, optical, and superconducting properties, but directly capturing it using existing techniques is notoriously challenging.

    Now, MIT researchers have proposed a theoretically justified approach that could turn this challenge into an opportunity. Their method reinterprets an often-overlooked interference effect in neutron scattering as a potential direct probe of electron-phonon coupling strength.

    The scattering process creates two interaction effects in the material. The researchers show that, by deliberately designing their experiment to leverage the interference between the two interactions, they can capture the strength of a material’s electron-phonon interaction.

    The researchers’ theory-informed methodology could be used to shape the design of future experiments, opening the door to measuring new quantities that were previously out of reach.

    “Rather than discovering new spectroscopy techniques by pure accident, we can use theory to justify and inform the design of our experiments and our physical equipment,” says Mingda Li, the Class of 1947 Career Development Professor and an associate professor of nuclear science and engineering, and senior author of a paper on this experimental method.

    Li is joined on the paper by co-lead authors Chuliang Fu, an MIT postdoc; Phum Siriviboon and Artittaya Boonkird, both MIT graduate students; as well as others at MIT, the National Institute of Standards and Technology, the University of California at Riverside, Michigan State University, and Oak Ridge National Laboratory. The research appears this week in Materials Today Physics.

    Investigating interference

    Neutron scattering is a powerful measurement technique that involves aiming a beam of neutrons at a material and studying how the neutrons are scattered after they strike it. The method is ideal for measuring a material’s atomic structure and magnetic properties.

    When neutrons collide with the material sample, they interact with it through two different mechanisms, creating a nuclear interaction and a magnetic interaction. These interactions can interfere with each other.

    “The scientific community has known about this interference effect for a long time, but researchers tend to view it as a complication that can obscure measurement signals. So it hasn’t received much focused attention,” Fu says.

    The team and their collaborators took a conceptual “leap of faith” and decided to explore this oft-overlooked interference effect more deeply. They flipped the traditional materials research approach on its head by starting with a multifaceted theoretical analysis, exploring what happens inside a material when the nuclear interaction and magnetic interaction interfere with each other. Their analysis revealed that this interference pattern is directly proportional to the strength of the material’s electron-phonon interaction.

    “This makes the interference effect a probe we can use to detect this interaction,” explains Siriviboon.

    Electron-phonon interactions play a role in a wide range of material properties. They affect how heat flows through a material, impact a material’s ability to absorb and emit light, and can even lead to superconductivity. But the complexity of these interactions makes them hard to measure directly with existing experimental techniques, so researchers often rely on less precise, indirect methods to capture them. Leveraging this interference effect, by contrast, enables direct measurement of the electron-phonon interaction, a major advantage over other approaches.

    “Being able to directly measure the electron-phonon interaction opens the door to many new possibilities,” says Boonkird.

    Rethinking materials research

    Based on their theoretical insights, the researchers designed an experimental setup to demonstrate their approach. Since the available equipment wasn’t powerful enough for this type of neutron scattering experiment, they were only able to capture a weak electron-phonon interaction signal — but the results were clear enough to support their theory.

    “These results justify the need for a new facility where the equipment might be 100 to 1,000 times more powerful, enabling scientists to clearly resolve the signal and measure the interaction,” adds Landry.

    With improved neutron scattering facilities, like those proposed for the upcoming Second Target Station at Oak Ridge National Laboratory, this experimental method could be an effective technique for measuring many crucial material properties. For instance, by helping scientists identify and harness better semiconductors, this approach could enable more energy-efficient appliances, faster wireless communication devices, and more reliable medical equipment like pacemakers and MRI scanners.

    Ultimately, the team sees this work as a broader message about the need to rethink the materials research process.

    “Using theoretical insights to design experimental setups in advance can help us redefine the properties we can measure,” Fu says.

    To that end, the team and their collaborators are currently exploring other types of interactions they could leverage to investigate additional material properties.

    “This is a very interesting paper,” says Jon Taylor, director of the neutron scattering division at Oak Ridge National Laboratory, who was not involved with this research. “It would be interesting to have a neutron scattering method that is directly sensitive to charge lattice interactions or more generally electronic effects that were not just magnetic moments. It seems that such an effect is expectedly rather small, so facilities like STS could really help develop that fundamental understanding of the interaction and also leverage such effects routinely for research.”

    This work is funded, in part, by the U.S. Department of Energy and the National Science Foundation.
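    The interference effect at the heart of the method is ordinary wave arithmetic: when nuclear and magnetic scattering amplitudes add coherently, the measured intensity picks up a cross term that the two channels measured separately would miss. A minimal numerical sketch, with arbitrary complex amplitudes standing in for any real material:

```python
# When two scattering channels add coherently, the intensity |A_n + A_m|^2
# contains a cross term 2*Re(conj(A_n)*A_m) beyond the two single-channel
# intensities. The amplitudes below are arbitrary, not values for any
# real material.
a_nuclear = 1.0 + 0.2j      # nuclear scattering amplitude (arbitrary units)
a_magnetic = 0.3 - 0.1j     # magnetic scattering amplitude

incoherent = abs(a_nuclear) ** 2 + abs(a_magnetic) ** 2
coherent = abs(a_nuclear + a_magnetic) ** 2
cross_term = coherent - incoherent   # the interference term

# The cross term equals 2*Re(conj(A_n)*A_m), as expected from expanding
# the squared modulus.
assert abs(cross_term - 2 * (a_nuclear.conjugate() * a_magnetic).real) < 1e-12
print(f"incoherent sum of intensities: {incoherent:.3f}")
print(f"coherent intensity with interference: {coherent:.3f}")
print(f"interference cross term: {cross_term:.3f}")
```

    The paper's contribution is showing that, with a suitably designed experiment, this cross term tracks the electron-phonon coupling strength; the snippet only illustrates why the coherent sum carries extra information.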

  • Model predicts long-term effects of nuclear waste on underground disposal systems

    As countries across the world experience a resurgence in nuclear energy projects, the questions of where and how to dispose of nuclear waste remain as politically fraught as ever. The United States, for instance, has indefinitely stalled its only long-term underground nuclear waste repository. Scientists are using both modeling and experimental methods to study the effects of underground nuclear waste disposal and, ultimately, they hope, build public trust in the decision-making process.

    New research from scientists at MIT, Lawrence Berkeley National Lab, and the University of Orléans makes progress in that direction. The study shows that simulations of underground nuclear waste interactions, generated by new, high-performance-computing software, aligned well with experimental results from a research facility in Switzerland.

    The study, which was co-authored by MIT PhD student Dauren Sarsenbayev and Assistant Professor Haruko Wainwright, along with Christophe Tournassat and Carl Steefel, appears in the journal PNAS.

    “These powerful new computational tools, coupled with real-world experiments like those at the Mont Terri research site in Switzerland, help us understand how radionuclides will migrate in coupled underground systems,” says Sarsenbayev, who is first author of the new study.

    The authors hope the research will improve confidence among policymakers and the public in the long-term safety of underground nuclear waste disposal.

    “This research — coupling both computation and experiments — is important to improve our confidence in waste disposal safety assessments,” says Wainwright. “With nuclear energy re-emerging as a key source for tackling climate change and ensuring energy security, it is critical to validate disposal pathways.”

    Comparing simulations with experiments

    Disposing of nuclear waste in deep underground geological formations is currently considered the safest long-term solution for managing high-level radioactive waste. As such, much effort has been put into studying the migration behaviors of radionuclides from nuclear waste within various natural and engineered geological materials.

    Since its founding in 1996, the Mont Terri research site in northern Switzerland has served as an important test bed for an international consortium of researchers interested in studying materials like Opalinus clay — a thick, water-tight claystone abundant in the tunneled areas of the mountain.

    “It is widely regarded as one of the most valuable real-world experiment sites because it provides us with decades of datasets around the interactions of cement and clay, and those are the key materials proposed to be used by countries across the world for engineered barrier systems and geological repositories for nuclear waste,” explains Sarsenbayev.

    For their study, Sarsenbayev and Wainwright collaborated with co-authors Tournassat and Steefel, who have developed high-performance computing software to improve modeling of interactions between nuclear waste and both engineered and natural materials.

    To date, several challenges have limited scientists’ understanding of how nuclear waste reacts with cement-clay barriers. For one thing, the barriers are made up of irregularly mixed materials deep underground. Additionally, the existing class of models commonly used to simulate radionuclide interactions with cement-clay does not take into account the electrostatic effects associated with the negatively charged clay minerals in the barriers.

    Tournassat and Steefel’s new software accounts for electrostatic effects, making it the only one that can simulate those interactions in three-dimensional space. The software, called CrunchODiTi, was developed from established software known as CrunchFlow and was most recently updated this year. It is designed to be run on many high-performance computers in parallel.

    For the study, the researchers looked at a 13-year-old experiment, with an initial focus on cement-clay rock interactions. Within the last several years, a mix of both negatively and positively charged ions was added to the borehole located near the center of the cement emplaced in the formation. The researchers focused on a 1-centimeter-thick zone between the radionuclides and cement-clay referred to as the “skin.” They compared their experimental results to the software simulation, finding the two datasets aligned.

    “The results are quite significant because previously, these models wouldn’t fit field data very well,” Sarsenbayev says. “It’s interesting how fine-scale phenomena at the ‘skin’ between cement and clay, the physical and chemical properties of which change over time, could be used to reconcile the experimental and simulation data.”

    The experimental results showed the model successfully accounted for electrostatic effects associated with the clay-rich formation and the interaction between materials in Mont Terri over time.

    “This is all driven by decades of work to understand what happens at these interfaces,” Sarsenbayev says. “It’s been hypothesized that there is mineral precipitation and porosity clogging at this interface, and our results strongly suggest that.”

    “This application requires millions of degrees of freedom because these multibarrier systems require high resolution and a lot of computational power,” Sarsenbayev says. “This software is really ideal for the Mont Terri experiment.”

    Assessing waste disposal plans

    The new model could now replace older models that have been used to conduct safety and performance assessments of underground geological repositories.

    “If the U.S. eventually decides to dispose of nuclear waste in a geological repository, then these models could dictate the most appropriate materials to use,” Sarsenbayev says. “For instance, right now clay is considered an appropriate storage material, but salt formations are another potential medium that could be used. These models allow us to see the fate of radionuclides over millennia. We can use them to understand interactions at timespans that vary from months to years to many millions of years.”

    Sarsenbayev says the model is reasonably accessible to other researchers and that future efforts may focus on the use of machine learning to develop less computationally expensive surrogate models.

    Further data from the experiment will be available later this month. The team plans to compare those data to additional simulations.

    “Our collaborators will basically get this block of cement and clay, and they’ll be able to run experiments to determine the exact thickness of the skin along with all of the minerals and processes present at this interface,” Sarsenbayev says. “It’s a huge project and it takes time, but we wanted to share initial data and this software as soon as we could.”

    For now, the researchers hope their study leads to a long-term solution for storing nuclear waste that policymakers and the public can support.

    “This is an interdisciplinary study that includes real world experiments showing we’re able to predict radionuclides’ fate in the subsurface,” Sarsenbayev says. “The motto of MIT’s Department of Nuclear Science and Engineering is ‘Science. Systems. Society.’ I think this merges all three domains.”
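    Full reactive-transport codes like CrunchODiTi couple chemistry, electrostatics, and 3-D geometry, but the core transport calculation they build on can be illustrated with a one-dimensional diffusion model in which sorption onto clay retards radionuclide migration. A pedagogical sketch with invented parameters, not a stand-in for the study's simulations:

```python
# Minimal 1-D sketch of radionuclide migration through a barrier:
# explicit finite-difference diffusion with a retardation factor from
# sorption. All parameters are invented for illustration; real codes
# couple this to geochemistry and electrostatics in 3-D.
D = 1e-10          # effective diffusion coefficient, m^2/s
R = 10.0           # retardation factor from sorption onto clay
dx = 1e-3          # grid spacing, m (1 mm cells across a 5 cm barrier)
dt = 0.25 * dx * dx * R / D   # time step satisfying the explicit stability limit
n = 50             # number of cells

conc = [0.0] * n
conc[0] = 1.0      # fixed source concentration at the waste-side boundary

for _ in range(20000):
    new = conc[:]
    for i in range(1, n - 1):
        # Retarded diffusion: effective diffusivity is D/R.
        new[i] = conc[i] + (D / R) * dt / (dx * dx) * (
            conc[i - 1] - 2 * conc[i] + conc[i + 1])
    new[0], new[-1] = 1.0, 0.0   # fixed boundary concentrations
    conc = new

print(f"relative concentration 1 cm into the barrier: {conc[10]:.4f}")
print(f"relative concentration 4 cm into the barrier: {conc[40]:.4f}")
```

    After enough steps the profile relaxes toward the steady linear gradient set by the two boundaries; the retardation factor controls how long that takes, which is why sorbing clay barriers are effective over long timescales.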

  • Confronting the AI/energy conundrum

    The explosive growth of AI-powered computing centers is creating an unprecedented surge in electricity demand that threatens to overwhelm power grids and derail climate goals. At the same time, artificial intelligence technologies could revolutionize energy systems, accelerating the transition to clean power.“We’re at a cusp of potentially gigantic change throughout the economy,” said William H. Green, director of the MIT Energy Initiative (MITEI) and Hoyt C. Hottel Professor in the MIT Department of Chemical Engineering, at MITEI’s Spring Symposium, “AI and energy: Peril and promise,” held on May 13. The event brought together experts from industry, academia, and government to explore solutions to what Green described as both “local problems with electric supply and meeting our clean energy targets” while seeking to “reap the benefits of AI without some of the harms.” The challenge of data center energy demand and potential benefits of AI to the energy transition is a research priority for MITEI.AI’s startling energy demandsFrom the start, the symposium highlighted sobering statistics about AI’s appetite for electricity. After decades of flat electricity demand in the United States, computing centers now consume approximately 4 percent of the nation’s electricity. Although there is great uncertainty, some projections suggest this demand could rise to 12-15 percent by 2030, largely driven by artificial intelligence applications.Vijay Gadepally, senior scientist at MIT’s Lincoln Laboratory, emphasized the scale of AI’s consumption. “The power required for sustaining some of these large models is doubling almost every three months,” he noted. 
“A single ChatGPT conversation uses as much electricity as charging your phone, and generating an image consumes about a bottle of water for cooling.”Facilities requiring 50 to 100 megawatts of power are emerging rapidly across the United States and globally, driven both by casual and institutional research needs relying on large language programs such as ChatGPT and Gemini. Gadepally cited congressional testimony by Sam Altman, CEO of OpenAI, highlighting how fundamental this relationship has become: “The cost of intelligence, the cost of AI, will converge to the cost of energy.”“The energy demands of AI are a significant challenge, but we also have an opportunity to harness these vast computational capabilities to contribute to climate change solutions,” said Evelyn Wang, MIT vice president for energy and climate and the former director at the Advanced Research Projects Agency-Energy (ARPA-E) at the U.S. Department of Energy.Wang also noted that innovations developed for AI and data centers — such as efficiency, cooling technologies, and clean-power solutions — could have broad applications beyond computing facilities themselves.Strategies for clean energy solutionsThe symposium explored multiple pathways to address the AI-energy challenge. Some panelists presented models suggesting that while artificial intelligence may increase emissions in the short term, its optimization capabilities could enable substantial emissions reductions after 2030 through more efficient power systems and accelerated clean technology development.Research shows regional variations in the cost of powering computing centers with clean electricity, according to Emre Gençer, co-founder and CEO of Sesame Sustainability and former MITEI principal research scientist. Gençer’s analysis revealed that the central United States offers considerably lower costs due to complementary solar and wind resources. 
However, achieving zero-emission power would require massive battery deployments — five to 10 times more than in moderate carbon scenarios — driving costs two to three times higher.

“If we want to do zero emissions with reliable power, we need technologies other than renewables and batteries, which will be too expensive,” Gençer said. He pointed to “long-duration storage technologies, small modular reactors, geothermal, or hybrid approaches” as necessary complements.

Because of data center energy demand, there is renewed interest in nuclear power, noted Kathryn Biegel, manager of R&D and corporate strategy at Constellation Energy, adding that her company is restarting the reactor at the former Three Mile Island site, now called the Crane Clean Energy Center, to meet this demand. “The data center space has become a major, major priority for Constellation,” she said, emphasizing how data centers’ needs for both reliability and carbon-free electricity are reshaping the power industry.

Can AI accelerate the energy transition?

Artificial intelligence could dramatically improve power systems, according to Priya Donti, assistant professor and the Silverman Family Career Development Professor in MIT’s Department of Electrical Engineering and Computer Science and the Laboratory for Information and Decision Systems. She showcased how AI can accelerate power grid optimization by embedding physics-based constraints into neural networks, potentially solving complex power flow problems at “10 times, or even greater, speed compared to your traditional models.”

AI is already reducing carbon emissions, according to examples shared by Antonia Gawel, global director of sustainability and partnerships at Google. Google Maps’ fuel-efficient routing feature has “helped to prevent more than 2.9 million metric tons of GHG [greenhouse gas] emissions reductions since launch, which is the equivalent of taking 650,000 fuel-based cars off the road for a year,” she said.
Another Google research project uses artificial intelligence to help pilots avoid creating contrails, which represent about 1 percent of global warming impact.

AI’s potential to speed materials discovery for power applications was highlighted by Rafael Gómez-Bombarelli, the Paul M. Cook Career Development Associate Professor in the MIT Department of Materials Science and Engineering. “AI-supervised models can be trained to go from structure to property,” he noted, enabling the development of materials crucial for both computing and efficiency.

Securing growth with sustainability

Throughout the symposium, participants grappled with balancing rapid AI deployment against environmental impacts. While AI training receives most attention, Dustin Demetriou, senior technical staff member in sustainability and data center innovation at IBM, quoted a World Economic Forum article suggesting that “80 percent of the environmental footprint is estimated to be due to inferencing.” Demetriou emphasized the need for efficiency across all artificial intelligence applications.

Jevons’ paradox, where “efficiency gains tend to increase overall resource consumption rather than decrease it,” is another factor to consider, cautioned Emma Strubell, the Raj Reddy Assistant Professor in the Language Technologies Institute in the School of Computer Science at Carnegie Mellon University. Strubell advocated for viewing computing center electricity as a limited resource requiring thoughtful allocation across different applications.

Several presenters discussed novel approaches for integrating renewable sources with existing grid infrastructure, including potential hybrid solutions that combine clean installations with existing natural gas plants that have valuable grid connections already in place.
These approaches could provide substantial clean capacity across the United States at reasonable costs while minimizing reliability impacts.

Navigating the AI-energy paradox

The symposium highlighted MIT’s central role in developing solutions to the AI-electricity challenge. Green spoke of a new MITEI program on computing centers, power, and computation that will operate alongside the comprehensive spread of MIT Climate Project research. “We’re going to try to tackle a very complicated problem all the way from the power sources through the actual algorithms that deliver value to the customers — in a way that’s going to be acceptable to all the stakeholders and really meet all the needs,” Green said.

Randall Field, MITEI director of research, polled symposium participants about priorities for MIT’s research. The real-time results ranked “data center and grid integration issues” as the top priority, followed by “AI for accelerated discovery of advanced materials for energy.”

In addition, attendees revealed that most view AI’s potential regarding power as a “promise” rather than a “peril,” although a considerable portion remain uncertain about the ultimate impact. When asked about priorities in power supply for computing facilities, half of the respondents selected carbon intensity as their top concern, with reliability and cost following.


    Evelyn Wang: A new energy source at MIT

Evelyn Wang ’00 knows a few things about engineering solutions to hard problems. After all, she invented a way to pull water out of thin air.

Now, Wang is applying that problem-solving experience — and a deep, enduring sense of optimism — toward the critical issue of climate change, to strengthen the American energy economy and ensure resilience for all.

Wang, a mechanical engineering professor by trade, began work this spring as MIT’s first vice president for energy and climate, overseeing the Institute’s expanding work on climate change. That means broadening the Institute’s already-wide research portfolio, scaling up existing innovations, seeking new breakthroughs, and channeling campus community input to drive work forward.

“MIT has the potential to do so much, when we know that climate, energy, and resilience are paramount to events happening around us every day,” says Wang, who is also the Ford Professor of Engineering at MIT. “There’s no better place than MIT to come up with the transformational solutions that can help shape our world.”

That also means developing partnerships with corporate allies, startups, government, communities, and other organizations. Tackling climate change, Wang says, “requires a lot of partnerships. It’s not an MIT-only endeavor. We’re going to have to collaborate with other institutions and think about where industry can help us deploy and scale so the impact can be greater.”

She adds: “The more partnerships we have, the more understanding we have of the best pathways to make progress in difficult areas.”

From MIT to ARPA-E

An MIT faculty member since 2007, Wang leads the Device Research Lab. Along with collaborators, she identifies new materials and optimizations based on heat and mass transport processes that unlock the creation of leading-edge innovations.
Her development of the device that extracts water from even very dry air led Foreign Policy Magazine to name her its 2017 Global ReThinker, and she won the 2018 Eighth Prince Sultan bin Abdulaziz International Prize for Water.

Her research also extends to other areas, such as energy and desalination. In 2016, Wang and several colleagues announced a device based on nanophotonic crystals with the potential to double the amount of power produced by a given area of solar panels, which led one of her graduate researchers on the project to co-found the startup Antora Energy. More recently, Wang and colleagues developed an aerogel that improves window insulation, now being commercialized through her former graduate students in a startup, AeroShield.

Wang also recently spent two years as director of the U.S. Department of Energy’s Advanced Research Projects Agency-Energy (ARPA-E), which supports early-stage R&D on energy generation, storage, and use. Returning to MIT, she began her work as vice president for energy and climate in April, engaging with researchers, holding community workshops, and planning to build partnerships.

“I’ve been energized coming back to the Institute, given the talented students, the faculty, the staff. It’s invigorating to be back in this community,” Wang says. “People are passionate, excited, and mission-driven, and that’s the energy we need to make a big impact in the world.”

Wang is also working to help align the Institute’s many existing climate efforts. This includes the Climate Project at MIT, an Institute-wide presidential initiative announced in 2024, which aims to accelerate and scale up climate solutions while generating new tools and policy proposals. All told, about 300 MIT faculty conduct research related to climate issues in one form or another.

“The fact that there are so many faculty working on climate is astounding,” Wang says.
“Everyone’s doing exciting work, but how can we leverage our unique strengths to create something bigger than the sum of its parts? That’s what I’m working toward. We’ve spun out so many technologies. How do we do more of that? How do we do that faster, and in a way so the world will feel the impact?”

A deep connection to campus — and a strong sense of optimism

Understanding MIT is one of Wang’s strengths, given that she has spent over two decades at the Institute.

Wang earned her undergraduate degree from MIT in mechanical engineering, and her MS and PhD in mechanical engineering from Stanford University. She has held several chaired faculty positions at MIT: in 2008, she was named the Esther and Harold E. Edgerton Assistant Professor; in 2015, the Gail E. Kendall Professor; and in 2021, the Ford Professor of Engineering. Wang served as head of the Department of Mechanical Engineering from 2018 through 2022.

As it happens, Wang’s parents, Kang and Edith, met as graduate students at the Institute. Her father, an electrical engineer, became a professor at the University of California at Los Angeles. Wang also met her husband at MIT, and both of her brothers graduated from the Institute.

Along with her deep institutional knowledge, administrative experience, and track record as an innovator, Wang brings to her new role a sense of urgency about the issue, coupled with a continual sense of optimism that innovators can meet society’s needs.

“I think optimism can make a difference, and is great to have in the midst of collective challenge,” Wang says. “We’re such a mission-driven university, and people come here to solve real-world problems.”

That hopeful approach is why Wang describes the work not only as a challenge but also as a generational opportunity. “We have the chance to design the world we want,” she says, “one that’s cleaner, more sustainable, and more resilient.
This future is ours to shape and build together.”

Wang thinks MIT contains many examples of world-shaping progress. She cites MIT’s announcement this month of the creation of the Schmidt Laboratory for Materials in Nuclear Technologies, at the MIT Plasma Science and Fusion Center, to conduct research on next-generation materials that could help enable the construction of fusion power plants. Another example Wang references is MIT research earlier this year on developing clean ammonia, a way to make the world’s most widely produced chemical with drastically reduced greenhouse gas emissions.

“Those solutions could be breakthroughs,” Wang says. “Those are the kinds of things that give us optimism. There’s still a lot of research to be done, but it suggests the potential of what our world can be.”

Optimism: There’s that word again.

“Optimism is the only way to go,” Wang says. “Yes, the world is challenged. But this is where MIT’s strengths — in research, innovation, and education — can bring optimism to the table.”


    Decarbonizing steel is as tough as steel

The long-term aspirational goal of the Paris Agreement on climate change is to cap global warming at 1.5 degrees Celsius above preindustrial levels, and thereby reduce the frequency and severity of floods, droughts, wildfires, and other extreme weather events. Achieving that goal will require a massive reduction in global carbon dioxide (CO2) emissions across all economic sectors. A major roadblock, however, could be the industrial sector, which accounts for roughly 25 percent of global energy- and process-related CO2 emissions — particularly the iron and steel sector, industry’s largest emitter of CO2.

Iron and steel production now relies heavily on fossil fuels (coal or natural gas) for generating heat, converting iron ore to iron, and making steel strong. Steelmaking could be decarbonized by a combination of several methods, including carbon capture technology, the use of low- or zero-carbon fuels, and increased use of recycled steel. Now a new study in the Journal of Cleaner Production systematically explores the viability of different iron-and-steel decarbonization strategies.

Today’s strategy menu includes improving energy efficiency, switching fuels and technologies, using more scrap steel, and reducing demand. Using the MIT Economic Projection and Policy Analysis model, a multi-sector, multi-region model of the world economy, researchers at MIT, the University of Illinois at Urbana-Champaign, and ExxonMobil Technology and Engineering Co. evaluate the decarbonization potential of replacing coal-based production processes with electric arc furnaces (EAF), combined with either scrap steel or “direct reduced iron” (DRI), which is fueled by natural gas with carbon capture and storage (NG CCS DRI-EAF) or by hydrogen (H2 DRI-EAF).

Under a global climate mitigation scenario aligned with the 1.5 C climate goal, these advanced steelmaking technologies could deliver deep decarbonization of the iron and steel sector by 2050, as long as technology costs are low enough to enable large-scale deployment. Higher costs would favor the replacement of coal with electricity and natural gas, greater use of scrap steel, and reduced demand, resulting in a more-than-50-percent reduction in emissions relative to current levels. Lower technology costs would enable massive deployment of NG CCS DRI-EAF or H2 DRI-EAF, reducing emissions by up to 75 percent.

Even without adoption of these advanced technologies, the iron-and-steel sector could significantly reduce its CO2 emissions intensity (how much CO2 is released per unit of production) with existing steelmaking technologies, primarily by replacing coal with gas and electricity (especially if generated from renewable sources), using more scrap steel, and implementing energy efficiency measures.

“The iron and steel industry needs to combine several strategies to substantially reduce its emissions by mid-century, including an increase in recycling, but investing in cost reductions in hydrogen pathways and carbon capture and sequestration will enable even deeper emissions mitigation in the sector,” says study supervising author Sergey Paltsev, deputy director of the MIT Center for Sustainability Science and Strategy (MIT CS3) and a senior research scientist at the MIT Energy Initiative (MITEI).

This study was supported by MIT CS3 and ExxonMobil through its membership in MITEI.