More stories

  • Study shows how households can cut energy costs

    Many people around the globe are living in energy poverty, meaning they spend at least 8 percent of their annual household income on energy. Addressing this problem is not simple, but an experiment by MIT researchers shows that giving people better data about their energy use, plus some coaching on the subject, can lead them to substantially reduce their consumption and costs.

    The experiment, based in Amsterdam, resulted in households cutting their energy expenses in half, on aggregate — a savings big enough to move three-quarters of them out of energy poverty.

    “Our energy coaching project as a whole showed a 75 percent success rate at alleviating energy poverty,” says Joseph Llewellyn, a researcher with MIT’s Senseable City Lab and co-author of a newly published paper detailing the experiment’s results.

    “Energy poverty afflicts families all over the world. With empirical evidence on which policies work, governments could focus their efforts more effectively,” says Fábio Duarte, associate director of MIT’s Senseable City Lab, and another co-author of the paper.

    The paper, “Assessing the impact of energy coaching with smart technology interventions to alleviate energy poverty,” appears today in Nature Scientific Reports.

    The authors are Llewellyn, who is also a researcher at the Amsterdam Institute for Advanced Metropolitan Solutions (AMS) and the KTH Royal Institute of Technology in Stockholm; Titus Venverloo, a research fellow at the MIT Senseable City Lab and AMS; Fábio Duarte, who is also a principal researcher at MIT’s Senseable City Lab; Carlo Ratti, director of the Senseable City Lab; Cecilia Katzeff; Fredrik Johansson; and Daniel Pargman of the KTH Royal Institute of Technology.

    The researchers developed the study after engaging with city officials in Amsterdam. In the Netherlands, about 550,000 households, or 7 percent of the population, are considered to be in energy poverty; in the European Union, that figure is about 50 million. In the U.S., separate research has shown that about three in 10 households report trouble paying energy bills.

    To conduct the experiment, the researchers ran two versions of an energy coaching intervention. In one version, 67 households received one report on their energy usage, along with coaching about how to increase energy efficiency. In the other version, 50 households received those things as well as a smart device giving them real-time updates on their energy consumption. (All households also received some modest energy-savings improvements at the outset, such as additional insulation.)

    Across the two groups, homes typically reduced monthly consumption of electricity by 33 percent and gas by 42 percent. They lowered their bills by 53 percent, on aggregate, and the percentage of income they spent on energy dropped from 10.1 percent to 5.3 percent.

    What were these households doing differently? Some of the biggest behavioral changes included things such as only heating rooms that were in use and unplugging devices not being used. Both of those changes save energy, but their benefits were not always understood by residents before they received energy coaching.

    “The range of energy literacy was quite wide from one home to the next,” Llewellyn says. “And when I went somewhere as an energy coach, it was never to moralize about energy use. I never said, ‘Oh, you’re using way too much.’ It was always working on it with the households, depending on what people need for their homes.”

    Intriguingly, the homes receiving the small devices that displayed real-time energy data only tended to use them for three or four weeks following a coaching visit. After that, people seemed to lose interest in very frequent monitoring of their energy use. And yet, a few weeks of consulting the devices tended to be long enough to get people to change their habits in a lasting way.

    “Our research shows that smart devices need to be accompanied by a close understanding of what drives families to change their behaviors,” Venverloo says.

    As the researchers acknowledge, working with consumers to reduce their energy consumption is just one way to help people escape energy poverty. Other “structural” factors that can help include lower energy prices and more energy-efficient buildings.

    On the latter note, the current paper has given rise to a new experiment Llewellyn is developing with Amsterdam officials, to examine the benefits of retrofitting residential buildings to lower energy costs. In that case, local policymakers are trying to work out how to fund the retrofitting in such a way that landlords do not simply pass those costs on to tenants.

    “We don’t want a household to save money on their energy bills if it also means the rent increases, because then we’ve just displaced expenses from one item to another,” Llewellyn says.

    Households can also invest in products like better insulation themselves, for windows or heating components, although for low-income households, finding the money to pay for such things may not be trivial. That is especially the case, Llewellyn suggests, because energy costs can seem “invisible,” and a lower priority than feeding and clothing a family.

    “It’s a big upfront cost for a household that does not have 100 euros to spend,” Llewellyn says. Compared to paying for other necessities, he notes, “Energy is often the thing that tends to fall last on their list. Energy is always going to be this invisible thing that hides behind the walls, and it’s not easy to change that.”
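    The energy-burden arithmetic behind the study's headline numbers is simple to state precisely: a household is in energy poverty when energy spending exceeds a threshold share of annual income (8 percent here). A minimal sketch, where the income and bill figures are hypothetical illustrations rather than the study's data:

    ```python
    # Energy-burden arithmetic: a household is "energy poor" when energy
    # spending exceeds a threshold share of annual income (8% here).

    ENERGY_POVERTY_THRESHOLD = 0.08  # 8% of annual household income

    def energy_burden(annual_income: float, annual_energy_cost: float) -> float:
        """Fraction of income spent on energy."""
        return annual_energy_cost / annual_income

    def in_energy_poverty(annual_income: float, annual_energy_cost: float) -> bool:
        return energy_burden(annual_income, annual_energy_cost) >= ENERGY_POVERTY_THRESHOLD

    # Illustrative household (hypothetical figures, not the study's data):
    income = 24_000.0                       # euros per year
    bill_before = 2_424.0                   # a 10.1% burden, matching the aggregate
    bill_after = bill_before * (1 - 0.53)   # bills fell 53% on aggregate

    print(f"burden before: {energy_burden(income, bill_before):.1%}")  # 10.1%
    print(in_energy_poverty(income, bill_before))  # True
    print(in_energy_poverty(income, bill_after))   # False
    ```

    Note that for this single made-up household the post-coaching burden lands near 4.7 percent rather than the article's aggregate 5.3 percent; the aggregate combines households with different incomes and bills.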

  • The role of modeling in the energy transition

    Joseph F. DeCarolis, administrator for the U.S. Energy Information Administration (EIA), has one overarching piece of advice for anyone poring over long-term energy projections.

    “Whatever you do, don’t start believing the numbers,” DeCarolis said at the MIT Energy Initiative (MITEI) Fall Colloquium. “There’s a tendency when you sit in front of the computer and you’re watching the model spit out numbers at you … that you’ll really start to believe those numbers with high precision. Don’t fall for it. Always remain skeptical.”

    This event was part of MITEI’s new speaker series, MITEI Presents: Advancing the Energy Transition, which connects the MIT community with the energy experts and leaders who are working on scientific, technological, and policy solutions that are urgently needed to accelerate the energy transition.

    The point of DeCarolis’s talk, titled “Stay humble and prepare for surprises: Lessons for the energy transition,” was not that energy models are unimportant. On the contrary, DeCarolis said, energy models give stakeholders a framework that allows them to consider present-day decisions in the context of potential future scenarios. However, he repeatedly stressed the importance of accounting for uncertainty, and of not treating these projections as “crystal balls.”

    “We can use models to help inform decision strategies,” DeCarolis said. “We know there’s a bunch of future uncertainty. We don’t know what’s going to happen, but we can incorporate that uncertainty into our model and help come up with a path forward.”

    Dialogue, not forecasts

    EIA is the statistical and analytic agency within the U.S. Department of Energy, with a mission to collect, analyze, and disseminate independent and impartial energy information to help stakeholders make better-informed decisions. Although EIA analyzes the impacts of energy policies, the agency does not make or advise on policy itself.

    DeCarolis, who was previously professor and University Faculty Scholar in the Department of Civil, Construction, and Environmental Engineering at North Carolina State University, noted that EIA does not need to seek approval from anyone else in the federal government before publishing its data and reports. “That independence is very important to us, because it means that we can focus on doing our work and providing the best information we possibly can,” he said.

    Among the many reports produced by EIA is the agency’s Annual Energy Outlook (AEO), which projects U.S. energy production, consumption, and prices. Every other year, the agency also produces the AEO Retrospective, which shows the relationship between past projections and actual energy indicators.

    “The first question you might ask is, ‘Should we use these models to produce a forecast?’” DeCarolis said. “The answer for me to that question is: No, we should not do that. When models are used to produce forecasts, the results are generally pretty dismal.”

    DeCarolis pointed to wildly inaccurate past projections about the proliferation of nuclear energy in the United States as an example of the problems inherent in forecasting. However, he noted, there are “still lots of really valuable uses” for energy models. Rather than using them to predict future energy consumption and prices, DeCarolis said, stakeholders should use models to inform their own thinking.

    “[Models] can simply be an aid in helping us think and hypothesize about the future of energy,” DeCarolis said. “They can help us create a dialogue among different stakeholders on complex issues. If we’re thinking about something like the energy transition, and we want to start a dialogue, there has to be some basis for that dialogue. If you have a systematic representation of the energy system that you can advance into the future, we can start to have a debate about the model and what it means. We can also identify key sources of uncertainty and knowledge gaps.”

    Modeling uncertainty

    The key to working with energy models is not to try to eliminate uncertainty, DeCarolis said, but rather to account for it. One way to better understand uncertainty, he noted, is to look at past projections and consider how they ended up differing from real-world results. DeCarolis pointed to two “surprises” over the past several decades: the exponential growth of shale oil and natural gas production (which had the impact of limiting coal’s share of the energy market and therefore reducing carbon emissions), as well as the rapid rise in wind and solar energy. In both cases, market conditions changed far more quickly than energy modelers anticipated, leading to inaccurate projections.

    “For all those reasons, we ended up with [projected] CO2 [carbon dioxide] emissions that were quite high compared to actual,” DeCarolis said. “We’re a statistical agency, so we’re really looking carefully at the data, but it can take some time to identify the signal through the noise.”

    Although EIA does not produce forecasts in the AEO, people have sometimes interpreted the reference case in the agency’s reports as predictions. In an effort to illustrate the unpredictability of future outcomes, in the 2023 edition of the AEO the agency added “cones of uncertainty” to its projection of energy-related carbon dioxide emissions, with ranges of outcomes based on the difference between past projections and actual results. One cone captures 50 percent of historical projection errors, while another represents 95 percent of historical errors.

    “They capture whatever bias there is in our projections,” DeCarolis said of the uncertainty cones. “It’s being captured because we’re comparing actual [emissions] to projections. The weakness of this, though, is: who’s to say that those historical projection errors apply to the future? We don’t know that, but I still think that there’s something useful to be learned from this exercise.”

    The future of energy modeling

    Looking ahead, DeCarolis said, there is a “laundry list of things that keep me up at night as a modeler.” These include the impacts of climate change; how those impacts will affect demand for renewable energy; how quickly industry and government will overcome obstacles to building out clean energy infrastructure and supply chains; technological innovation; and increased energy demand from data centers running compute-intensive workloads.

    “What about enhanced geothermal? Fusion? Space-based solar power?” DeCarolis asked. “Should those be in the model? What sorts of technology breakthroughs are we missing? And then, of course, there are the unknown unknowns — the things that I can’t conceive of to put on this list, but are probably going to happen.”

    In addition to capturing the fullest range of outcomes, DeCarolis said, EIA wants to be flexible, nimble, transparent, and accessible — creating reports that can easily incorporate new model features and produce timely analyses. To that end, the agency has undertaken two new initiatives. First, the 2025 AEO will use a revamped version of the National Energy Modeling System that includes modules for hydrogen production and pricing, carbon management, and hydrocarbon supply. Second, an effort called Project BlueSky aims to develop the agency’s next-generation energy system model, which DeCarolis said will be modular and open source.

    DeCarolis noted that the energy system is both highly complex and rapidly evolving, and he warned that “mental shortcuts” and the fear of being wrong can lead modelers to ignore possible future developments. “We have to remain humble and intellectually honest about what we know,” DeCarolis said. “That way, we can provide decision-makers with an honest assessment of what we think could happen in the future.”
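    The cone construction described above can be approximated from first principles: gather historical projection errors at each horizon, take the interval containing 50 (or 95) percent of them, and widen the current projection by that interval. A rough sketch of the idea, where the error samples are invented for illustration and EIA's actual methodology may differ in detail:

    ```python
    # Sketch of "cones of uncertainty": band a projection using the
    # empirical distribution of past projection errors at each horizon.
    # Error samples below are invented; EIA's construction may differ.

    def _percentile(xs, p):
        """Linear-interpolated percentile, p in [0, 1]."""
        xs = sorted(xs)
        k = (len(xs) - 1) * p
        f = int(k)
        c = min(f + 1, len(xs) - 1)
        return xs[f] + (xs[c] - xs[f]) * (k - f)

    def uncertainty_cone(projection, historical_errors, coverage=0.50):
        """Per horizon step, return (low, high) bounds wide enough to have
        contained `coverage` of historical errors (actual - projected)."""
        lo_p, hi_p = (1 - coverage) / 2, 1 - (1 - coverage) / 2
        return [
            (point + _percentile(errs, lo_p), point + _percentile(errs, hi_p))
            for point, errs in zip(projection, historical_errors)
        ]

    # One projection step (e.g., Mt CO2 in a future year) and the set of
    # past errors observed at the same projection horizon:
    proj = [5000.0]
    errs = [[-400, -250, -120, -60, 0, 40, 90, 160, 240, 380]]
    print(uncertainty_cone(proj, errs, coverage=0.50))  # [(4895.0, 5142.5)]
    ```

    Raising `coverage` to 0.95 widens the band, which is exactly the relationship between the agency's two cones.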

  • How hard is it to prevent recurring blackouts in Puerto Rico?

    Researchers at MIT’s Laboratory for Information and Decision Systems (LIDS) have shown that using decision-making software and dynamic monitoring of weather and energy use can significantly improve resiliency in the face of weather-related outages, and can also help to efficiently integrate renewable energy sources into the grid.

    The researchers point out that the system they suggest might have prevented, or at least lessened, the kind of widespread power outage that Puerto Rico experienced last week, by providing analysis to guide the rerouting of power through different lines and thus limit the spread of the outage.

    The computer platform, which the researchers describe as DyMonDS, for Dynamic Monitoring and Decision Systems, can be used to enhance the existing operating and planning practices used in the electric industry. The platform supports interactive information exchange and decision-making between the grid operators and grid-edge users — all the distributed power sources, storage systems, and software that contribute to the grid. It also supports optimization of available resources and controllable grid equipment as system conditions vary. It further lends itself to implementing cooperative decision-making by different utility- and non-utility-owned electric power grid users, including portfolios of mixed resources, users, and storage. Operating and planning the interactions of the end-to-end high-voltage transmission grid with local distribution grids and microgrids represents another major potential use of this platform.

    This general approach was illustrated using a set of publicly available data on both meteorology and details of electricity production and distribution in Puerto Rico. An extended AC Optimal Power Flow software package developed by SmartGridz Inc. is used for system-level optimization of controllable equipment. This provides real-time guidance for deciding how much power, and through which transmission lines, should be channeled by adjusting plant dispatch and voltage-related set points, and in extreme cases, where to reduce or cut power in order to maintain physically implementable service for as many customers as possible. The team found that the use of such a system can help ensure that the greatest number of critical services maintain power even during a hurricane, and at the same time can lead to a substantial decrease in the need for construction of new power plants thanks to more efficient use of existing resources.

    The findings are described in a paper in the journal Foundations and Trends in Electric Energy Systems, by MIT LIDS researchers Marija Ilic and Laurentiu Anton, along with recent alumna Ramapathi Jaddivada.

    “Using this software,” Ilic says, they show that “even during bad weather, if you predict equipment failures, and by using that information exchange, you can localize the effect of equipment failures and still serve a lot of customers, 50 percent of customers, when otherwise things would black out.”

    Anton says that “the way many grids today are operated is sub-optimal.” As a result, “we showed how much better they could do even under normal conditions, without any failures, by utilizing this software.” The savings resulting from this optimization, under everyday conditions, could be in the tens of percents, they say.

    The way utility systems currently plan, Ilic says, “usually the standard is that they have to build enough capacity and operate in real time so that if one large piece of equipment fails, like a large generator or transmission line, you still serve customers in an uninterrupted way. That’s what’s called N-minus-1.” Under this policy, if one major component of the system fails, they should be able to maintain service for at least 30 minutes. That system allows utilities to plan for how much reserve generating capacity they need to have on hand. That’s expensive, Ilic points out, because it means maintaining this reserve capacity all the time, even under normal operating conditions when it’s not needed.

    In addition, “right now there are no criteria for what I call N-minus-K,” she says. If bad weather causes five pieces of equipment to fail at once, “there is no software to help utilities decide what to schedule” in terms of keeping the most customers, and the most important services such as hospitals and emergency services, provided with power. They showed that even with 50 percent of the infrastructure out of commission, it would still be possible to keep power flowing to a large proportion of customers.

    Their work on analyzing the power situation in Puerto Rico started after the island had been devastated by hurricanes Irma and Maria. Most of the electric generation capacity is in the south, yet the largest loads are in San Juan, in the north, and Mayaguez in the west. When transmission lines get knocked down, a lot of rerouting of power needs to happen quickly.

    With the new systems, “the software finds the optimal adjustments for set points,” Anton says; for example, changing voltages can allow power to be redirected through less-congested lines, or voltages can be increased to lessen power losses.

    The software also helps in long-term planning for the grid. As many fossil-fuel power plants are scheduled to be decommissioned soon in Puerto Rico, as they are in many other places, planning how to replace that power without resorting to greenhouse gas-emitting sources is key to achieving carbon-reduction goals. And by analyzing usage patterns, the software can guide the placement of new renewable power sources where they can most efficiently provide power where and when it’s needed.

    As plants are retired or as components are affected by weather, “We wanted to ensure the dispatchability of power when the load changes,” Anton says, “but also when crucial components are lost, to ensure the robustness at each step of the retirement schedule.”

    One thing they found was that “if you look at how much generating capacity exists, it’s more than the peak load, even after you retire a few fossil plants,” Ilic says. “But it’s hard to deliver.” Strategic planning of new distribution lines could make a big difference.

    Jaddivada, director of innovation at SmartGridz, says that “we evaluated different possible architectures in Puerto Rico, and we showed the ability of this software to ensure uninterrupted electricity service. This is the most important challenge utilities have today. They have to go through a computationally tedious process to make sure the grid functions for any possible outage in the system. And that can be done in a much more efficient way through the software that the company developed.”

    The project was a collaborative effort between the MIT LIDS researchers and others at MIT Lincoln Laboratory and the Pacific Northwest National Laboratory, with the support of SmartGridz software.
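    The N-minus-1 and N-minus-K criteria Ilic describes can be sketched as a brute-force contingency screen: remove each combination of K components in turn and check whether the surviving system still covers the load. The toy capacity-only check below is a sketch with made-up generator names and numbers; real contingency tools, such as the AC Optimal Power Flow software mentioned above, also model line limits, voltages, and network topology, which this deliberately omits:

    ```python
    # Toy N-minus-K screen: for every combination of k failed generators,
    # check whether the surviving capacity still covers system load.
    # Names and numbers are hypothetical; real analysis solves a full
    # network flow problem rather than a capacity sum.
    from itertools import combinations

    def surviving_capacity_ok(capacities, load, k):
        """True if load can still be met after ANY k simultaneous outages."""
        total = sum(capacities.values())
        for failed in combinations(capacities, k):
            if total - sum(capacities[g] for g in failed) < load:
                return False
        return True

    # Mirrors the article's geography: most capacity in the south,
    # largest loads in the north and west (all figures invented, in MW).
    gens = {"south_1": 400.0, "south_2": 350.0, "north_1": 150.0, "west_1": 100.0}
    load = 500.0

    print(surviving_capacity_ok(gens, load, k=1))  # True: worst single loss leaves 600 MW
    print(surviving_capacity_ok(gens, load, k=2))  # False: losing both southern plants leaves 250 MW
    ```

    The exponential number of combinations as k grows is one reason Jaddivada calls exhaustive outage checking "computationally tedious" for utilities.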

  • Unlocking the hidden power of boiling — for energy, space, and beyond

    Most people take boiling water for granted. For Associate Professor Matteo Bucci, uncovering the physics behind boiling has been a decade-long journey filled with unexpected challenges and new insights.

    The seemingly simple phenomenon is extremely hard to study in complex systems like nuclear reactors, and yet it sits at the core of a wide range of important industrial processes. Unlocking its secrets could thus enable advances in efficient energy production, electronics cooling, water desalination, medical diagnostics, and more.

    “Boiling is important for applications way beyond nuclear,” says Bucci, who earned tenure at MIT in July. “Boiling is used in 80 percent of the power plants that produce electricity. My research has implications for space propulsion, energy storage, electronics, and the increasingly important task of cooling computers.”

    Bucci’s lab has developed new experimental techniques to shed light on a wide range of boiling and heat transfer phenomena that have limited energy projects for decades. Chief among those is a problem caused by bubbles forming so quickly they create a band of vapor across a surface that prevents further heat transfer. In 2023, Bucci and collaborators developed a unifying principle governing the problem, known as the boiling crisis, which could enable more efficient nuclear reactors and prevent catastrophic failures.

    For Bucci, each bout of progress brings new possibilities — and new questions to answer.

    “What’s the best paper?” Bucci asks. “The best paper is the next one. I think Alfred Hitchcock used to say it doesn’t matter how good your last movie was. If your next one is poor, people won’t remember it. I always tell my students that our next paper should always be better than the last. It’s a continuous journey of improvement.”

    From engineering to bubbles

    The Italian village where Bucci grew up had a population of about 1,000 during his childhood. He gained mechanical skills by working in his father’s machine shop and by taking apart and reassembling appliances like washing machines and air conditioners to see what was inside. He also gained a passion for cycling, competing in the sport until he attended the University of Pisa for undergraduate and graduate studies.

    In college, Bucci was fascinated with matter and the origins of life, but he also liked building things, so when it came time to pick between physics and engineering, he decided nuclear engineering was a good middle ground.

    “I have a passion for construction and for understanding how things are made,” Bucci says. “Nuclear engineering was a very unlikely but obvious choice. It was unlikely because in Italy, nuclear was already out of the energy landscape, so there were very few of us. At the same time, there were a combination of intellectual and practical challenges, which is what I like.”

    For his PhD, Bucci went to France, where he met his wife, and went on to work at a French national lab. One day his department head asked him to work on a problem in nuclear reactor safety known as transient boiling. To solve it, he wanted to use a method for making measurements pioneered by MIT Professor Jacopo Buongiorno, so he received grant money to become a visiting scientist at MIT in 2013. He’s been studying boiling at MIT ever since.

    Today Bucci’s lab is developing new diagnostic techniques to study boiling and heat transfer, along with new materials and coatings that could make heat transfer more efficient. The work has given researchers an unprecedented view into the conditions inside a nuclear reactor.

    “The diagnostics we’ve developed can collect the equivalent of 20 years of experimental work in a one-day experiment,” Bucci says.

    That data, in turn, led Bucci to a remarkably simple model describing the boiling crisis.

    “The effectiveness of the boiling process on the surface of nuclear reactor cladding determines the efficiency and the safety of the reactor,” Bucci explains. “It’s like a car that you want to accelerate, but there is an upper limit. For a nuclear reactor, that upper limit is dictated by boiling heat transfer, so we are interested in understanding what that upper limit is and how we can overcome it to enhance the reactor performance.”

    Another particularly impactful area of research for Bucci is two-phase immersion cooling, a process wherein hot server parts bring liquid to a boil, then the resulting vapor condenses on a heat exchanger above to create a constant, passive cycle of cooling.

    “It keeps chips cold with minimal waste of energy, significantly reducing the electricity consumption and carbon dioxide emissions of data centers,” Bucci explains. “Data centers emit as much CO2 as the entire aviation industry. By 2040, they will account for over 10 percent of emissions.”

    Supporting students

    Bucci says working with students is the most rewarding part of his job. “They have such great passion and competence. It’s motivating to work with people who have the same passion as you.”

    “My students have no fear to explore new ideas,” Bucci adds. “They almost never stop in front of an obstacle — sometimes to the point where you have to slow them down and put them back on track.”

    In running the Red Lab in the Department of Nuclear Science and Engineering, Bucci tries to give students independence as well as support.

    “We’re not educating students, we’re educating future researchers,” Bucci says. “I think the most important part of our work is to not only provide the tools, but also to give the confidence and the self-starting attitude to fix problems. That can be business problems, problems with experiments, problems with your lab mates.”

    Some of the more unique experiments Bucci’s students do require them to gather measurements while free falling in an airplane to achieve zero gravity.

    “Space research is the big fantasy of all the kids,” says Bucci, who joins students in the experiments about twice a year. “It’s very fun and inspiring research for students. Zero g gives you a new perspective on life.”

    Applying AI

    Bucci is also excited about incorporating artificial intelligence into his field. In 2023, he was a co-recipient of a multi-university research initiative (MURI) project in thermal science dedicated solely to machine learning. In a nod to the promise AI holds in his field, Bucci also recently founded a journal called AI Thermal Fluids to feature AI-driven research advances.

    “Our community doesn’t have a home for people that want to develop machine-learning techniques,” Bucci says. “We wanted to create an avenue for people in computer science and thermal science to work together to make progress. I think we really need to bring computer scientists into our community to speed this process up.”

    Bucci also believes AI can be used to process huge reams of data gathered using the new experimental techniques he’s developed, as well as to model phenomena researchers can’t yet study.

    “It’s possible that AI will give us the opportunity to understand things that cannot be observed, or at least guide us in the dark as we try to find the root causes of many problems,” Bucci says.

  • Helping students bring about decarbonization, from benchtop to global energy marketplace

    MIT students are adept at producing research and innovations at the cutting edge of their fields. But addressing a problem as large as climate change requires understanding the world’s energy landscape, as well as the ways energy technologies evolve over time.Since 2010, the course IDS.521/IDS.065 (Energy Systems for Climate Change Mitigation) has equipped students with the skills they need to evaluate the various energy decarbonization pathways available to the world. The work is designed to help them maximize their impact on the world’s emissions by making better decisions along their respective career paths.“The question guiding my teaching and research is how do we solve big societal challenges with technology, and how can we be more deliberate in developing and supporting technologies to get us there?” says Professor Jessika Trancik, who started the course to help fill a gap in knowledge about the ways technologies evolve and scale over time.Since its inception in 2010, the course has attracted graduate students from across MIT’s five schools. The course has also recently opened to undergraduate students and been adapted to an online course for professionals.Class sessions alternate between lectures and student discussions that lead up to semester-long projects in which groups of students explore specific strategies and technologies for reducing global emissions. This year’s projects span several topics, including how quickly transmission infrastructure is expanding, the relationship between carbon emissions and human development, and how to decarbonize the production of key chemicals.The curriculum is designed to help students identify the most promising ways to mitigate climate change whether they plan to be scientists, engineers, policymakers, investors, urban planners, or just more informed citizens.“We’re coming at this issue from both sides,” explains Trancik, who is part of MIT’s Institute for Data, Systems, and Society. 
“Engineers are used to designing a technology to work as well as possible here and now, but not always thinking over a longer time horizon about a technology evolving and succeeding in the global marketplace. On the flip side, for students at the macro level, often studies in policy and economics of technological change don’t fully account for the physical and engineering constraints of rates of improvement. But all of that information allows you to make better decisions.”Bridging the gapAs a young researcher working on low-carbon polymers and electrode materials for solar cells, Trancik always wondered how the materials she worked on would scale in the real world. They might achieve promising performance benchmarks in the lab, but would they actually make a difference in mitigating climate change? Later, she began focusing increasingly on developing methods for predicting how technologies might evolve.“I’ve always been interested in both the macro and the micro, or even nano, scales,” Trancik says. “I wanted to know how to bridge these new technologies we’re working on with the big picture of where we want to go.”Trancik’ described her technology-grounded approach to decarbonization in a paper that formed the basis for IDS.065. In the paper, she presented a way to evaluate energy technologies against climate-change mitigation goals while focusing on the technology’s evolution.“That was a departure from previous approaches, which said, given these technologies with fixed characteristics and assumptions about their rates of change, how do I choose the best combination?” Trancik explains. “Instead we asked: Given a goal, how do we develop the best technologies to meet that goal? 
That inverts the problem in a way that’s useful to engineers developing these technologies, but also to policymakers and investors that want to use the evolution of technologies as a tool for achieving their objectives.”This past semester, the class took place every Tuesday and Thursday in a classroom on the first floor of the Stata Center. Students regularly led discussions where they reflected on the week’s readings and offered their own insights.“Students always share their takeaways and get to ask open questions of the class,” says Megan Herrington, a PhD candidate in the Department of Chemical Engineering. “It helps you understand the readings on a deeper level because people with different backgrounds get to share their perspectives on the same questions and problems. Everybody comes to class with their own lens, and the class is set up to highlight those differences.”The semester begins with an overview of climate science, the origins of emissions reductions goals, and technology’s role in achieving those goals. Students then learn how to evaluate technologies against decarbonization goals.But technologies aren’t static, and neither is the world. Later lessons help students account for the change of technologies over time, identifying the mechanisms for that change and even forecasting rates of change.Students also learn about the role of government policy. This year, Trancik shared her experience traveling to the COP29 United Nations Climate Change Conference.“It’s not just about technology,” Trancik says. “It’s also about the behaviors that we engage in and the choices we make. But technology plays a major role in determining what set of choices we can make.”From the classroom to the worldStudents in the class say it has given them a new perspective on climate change mitigation.“I have really enjoyed getting to see beyond the research people are doing at the benchtop,” says Herrington. 
“It’s interesting to see how certain materials or technologies that aren’t scalable yet may fit into a larger transformation in energy delivery and consumption. It’s also been interesting to pull back the curtain on energy systems analysis, to understand where the metrics we cite in energy-related research originate, and to anticipate trajectories of emerging technologies.”

Onur Talu, a first-year master’s student in the Technology and Policy Program, says the class has made him more hopeful.

“I came into this fairly pessimistic about the climate,” says Talu, who has worked for clean technology startups in the past. “This class has taught me different ways to look at the problem of climate change mitigation and developing renewable technologies. It’s also helped put into perspective how much we’ve accomplished so far.”

Several student projects from the class over the years have been developed into papers published in peer-reviewed journals. They have also been turned into tools, like carboncounter.com, which plots the emissions and costs of cars and has been featured in The New York Times.

Former class students have also launched startups; Joel Jean SM ’13, PhD ’17, for example, started Swift Solar. Others have drawn on the course material to develop impactful careers in government and academia, such as Patrick Brown PhD ’16 at the National Renewable Energy Laboratory and Leah Stokes SM ’15, PhD ’15 at the University of California at Santa Barbara.

Overall, students say the course helps them take a more informed approach to applying their skills toward addressing climate change.

“It’s not enough to just know how bad climate change could be,” says Yu Tong, a first-year master’s student in civil and environmental engineering. “It’s also important to understand how technology can work to mitigate climate change, from both a technological and a market perspective. It’s about employing technology to solve these issues rather than just working in a vacuum.”
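One standard way to forecast the rates of technological change the class covers is the experience curve (Wright’s law), under which a technology’s unit cost falls by a roughly constant fraction with each doubling of cumulative production. The sketch below is purely illustrative — the function name and all numbers are hypothetical, not taken from the class materials:

```python
import math

def wrights_law_cost(initial_cost, initial_production, cumulative_production, learning_rate):
    """Project unit cost under Wright's law: cost falls by `learning_rate`
    (a fraction, e.g. 0.20 for 20%) with each doubling of cumulative production."""
    doublings = math.log2(cumulative_production / initial_production)
    return initial_cost * (1.0 - learning_rate) ** doublings

# Hypothetical example: a module costing $1.00/W at 1 GW of cumulative
# production, with a 20% learning rate, projected out to 1,000 GW.
cost = wrights_law_cost(1.00, 1, 1000, 0.20)
print(f"${cost:.3f}/W")
```

With these made-up inputs, roughly ten doublings of production drive the projected cost down by about an order of magnitude, which is why small differences in the learning rate matter so much for long-horizon technology evaluation.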

  • in

    MIT spinout Commonwealth Fusion Systems unveils plans for the world’s first fusion power plant

America is one step closer to tapping into a new and potentially limitless clean energy source today, with the announcement from MIT spinout Commonwealth Fusion Systems (CFS) that it plans to build the world’s first grid-scale fusion power plant in Chesterfield County, Virginia.

The announcement is the latest milestone for the company, which has made groundbreaking progress toward harnessing fusion — the reaction that powers the sun — since its founders first conceived of their approach in an MIT classroom in 2012. CFS is now commercializing a suite of advanced technologies developed in MIT research labs.

“This moment exemplifies the power of MIT’s mission, which is to create knowledge that serves the nation and the world, whether via the classroom, the lab, or out in communities,” MIT Vice President for Research Ian Waitz says. “From student coursework 12 years ago to today’s announcement of the siting in Virginia of the world’s first fusion power plant, progress has been amazingly rapid. At the same time, we owe this progress to over 65 years of sustained investment by the U.S. federal government in basic science and energy research.”

The new fusion power plant, named ARC, is expected to come online in the early 2030s and generate about 400 megawatts of clean, carbon-free electricity — enough energy to power large industrial sites or about 150,000 homes.

The plant will be built at the James River Industrial Park outside of Richmond through a nonfinancial collaboration with Dominion Energy Virginia, which will provide development and technical expertise along with leasing rights for the site. CFS will independently finance, build, own, and operate the power plant.

The plant will support Virginia’s economic and clean energy goals by generating what is expected to be billions of dollars in economic development and hundreds of jobs during its construction and long-term operation.

More broadly, ARC will position the U.S.
to lead the world in harnessing a new form of safe and reliable energy that could prove critical for economic prosperity and national security, including for meeting increasing electricity demands driven by needs like artificial intelligence.

“This will be a watershed moment for fusion,” says CFS co-founder Dennis Whyte, the Hitachi America Professor of Engineering at MIT. “It sets the pace in the race toward commercial fusion power plants. The ambition is to build thousands of these power plants and to change the world.”

Fusion can generate energy from abundant fuels like hydrogen and lithium isotopes, which can be sourced from seawater, and leave behind no emissions or toxic waste. However, harnessing fusion in a way that produces more power than it takes in has proven difficult because of the high temperatures needed to create and maintain the fusion reaction. Over the course of decades, scientists and engineers have worked to make the dream of fusion power plants a reality.

In 2012, while teaching the MIT class 22.63 (Principles of Fusion Engineering), Whyte challenged a group of graduate students to design a fusion device that would use a new kind of superconducting magnet to confine the plasma used in the reaction. It turned out the magnets enabled a more compact and economical reactor design. When Whyte reviewed his students’ work, he realized that could mean a new development path for fusion.

Since then, a huge amount of capital and expertise has rushed into the once-fledgling fusion industry. Today there are dozens of private fusion companies around the world racing to develop the first net-energy fusion power plants, many utilizing the new superconducting magnets. CFS, which Whyte founded with several students from his class, has attracted more than $2 billion in funding.

“It all started with that class, where our ideas kept evolving as we challenged the standard assumptions that came with fusion,” Whyte says.
“We had this new superconducting technology, so much of the common wisdom was no longer valid. It was a perfect forum for students, who can challenge the status quo.”

Since the company’s founding in 2017, it has collaborated with researchers in MIT’s Plasma Science and Fusion Center (PSFC) on a range of initiatives, from validating the underlying plasma physics for the first demonstration machine to breaking records with a new kind of magnet to be used in commercial fusion power plants. Each piece of progress moves the U.S. closer to harnessing a revolutionary new energy source.

CFS is currently completing development of its fusion demonstration machine, SPARC, at its headquarters in Devens, Massachusetts. SPARC is expected to produce its first plasma in 2026 and net fusion energy shortly after, demonstrating for the first time a commercially relevant design that will produce more power than it consumes. SPARC will pave the way for ARC, which is expected to deliver power to the grid in the early 2030s.

“There’s more challenging engineering and science to be done in this field, and we’re very enthusiastic about the progress that CFS and the researchers on our campus are making on those problems,” Waitz says. “We’re in a ‘hockey stick’ moment in fusion energy, where things are moving incredibly quickly now. On the other hand, we can’t forget about the much longer part of that hockey stick, the sustained support for very complex, fundamental research that underlies great innovations. If we’re going to continue to lead the world in these cutting-edge technologies, continued investment in those areas will be crucial.”

  • in

    New climate chemistry model finds “non-negligible” impacts of potential hydrogen fuel leakage

As the world looks for ways to stop climate change, much discussion focuses on using hydrogen instead of fossil fuels, which emit climate-warming greenhouse gases (GHGs) when they’re burned. The idea is appealing. Burning hydrogen doesn’t emit GHGs to the atmosphere, and hydrogen is well-suited for a variety of uses, notably as a replacement for natural gas in industrial processes, power generation, and home heating.

But while burning hydrogen won’t emit GHGs, any hydrogen that’s leaked from pipelines or from storage or fueling facilities can indirectly cause climate change by affecting other compounds that are GHGs, including tropospheric ozone and methane, with methane impacts being the dominant effect. A much-cited 2022 modeling study analyzing hydrogen’s effects on chemical compounds in the atmosphere concluded that these climate impacts could be considerable. With funding from the MIT Energy Initiative’s Future Energy Systems Center, a team of MIT researchers took a more detailed look at the specific chemistry that determines the climate risks of using hydrogen as a fuel if it leaks.

The researchers developed a model that tracks many more of the chemical reactions that may be affected by hydrogen and includes interactions among chemicals. Their open-access results, published Oct. 28 in Frontiers in Energy Research, showed that while the impact of leaked hydrogen on the climate wouldn’t be as large as the 2022 study predicted — it would be about a third of the impact of any natural gas that escapes today — leaked hydrogen will impact the climate. Leak prevention should therefore be a top priority as the hydrogen infrastructure is built, the researchers state.

Hydrogen’s impact on the “detergent” that cleans our atmosphere

Global three-dimensional climate-chemistry models using a large number of chemical reactions have also been used to evaluate hydrogen’s potential climate impacts, but results vary from one model to another, motivating the MIT study to analyze the underlying chemistry.
Most studies of the climate effects of using hydrogen consider only the GHGs that are emitted during the production of the hydrogen fuel. Different production approaches yield “blue hydrogen” or “green hydrogen,” labels that relate to the GHGs emitted. Regardless of the process used to make the hydrogen, though, the fuel itself can threaten the climate. For widespread use, hydrogen will need to be transported, distributed, and stored — in short, there will be many opportunities for leakage.

The question is: What happens to that leaked hydrogen when it reaches the atmosphere?

The 2022 study predicting large climate impacts from leaked hydrogen was based on reactions between pairs of just four chemical compounds in the atmosphere. The results showed that the hydrogen would deplete a chemical species that atmospheric chemists call the “detergent of the atmosphere,” explains Candice Chen, a PhD candidate in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “It goes around zapping greenhouse gases, pollutants, all sorts of bad things in the atmosphere. So it’s cleaning our air.” Best of all, that detergent — the hydroxyl radical, abbreviated as OH — removes methane, an extremely potent GHG in the atmosphere. OH thus plays an important role in slowing the rate at which global temperatures rise. But any hydrogen leaked to the atmosphere would reduce the amount of OH available to clean up methane, so the concentration of methane would increase.

However, chemical reactions among compounds in the atmosphere are notoriously complicated. While the 2022 study used a “four-equation model,” Chen and her colleagues — Susan Solomon, the Lee and Geraldine Martin Professor of Environmental Studies and Chemistry; and Kane Stone, a research scientist in EAPS — developed a model that includes 66 chemical reactions.
Analyses using their 66-equation model showed that the four-equation system didn’t capture a critical feedback involving OH — a feedback that acts to protect the methane-removal process.

Here’s how that feedback works: As the hydrogen decreases the concentration of OH, the cleanup of methane slows down, so the methane concentration increases. However, that methane undergoes chemical reactions that can produce new OH radicals. “So the methane that’s being produced can make more of the OH detergent,” says Chen. “There’s a small countering effect. Indirectly, the methane helps produce the thing that’s getting rid of it.” And, says Chen, that’s a key difference between their 66-equation model and the four-equation one. “The simple model uses a constant value for the production of OH, so it misses that key OH-production feedback,” she says.

To explore the importance of including that feedback effect, the MIT researchers performed the following analysis: They assumed that a single pulse of hydrogen was injected into the atmosphere and predicted the change in methane concentration over the next 100 years, first using the four-equation model and then using the 66-equation model. With the four-equation system, the additional methane concentration peaked at nearly 2 parts per billion (ppb); with the 66-equation system, it peaked at just over 1 ppb.

Because the four-equation analysis assumes only that the injected hydrogen destroys the OH, the methane concentration increases unchecked for the first 10 years or so. In contrast, the 66-equation analysis goes one step further: the methane concentration does increase, but as the system re-equilibrates, more OH forms and removes methane. By not accounting for that feedback, the four-equation analysis overestimates the peak increase in methane due to the hydrogen pulse by about 85 percent.
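The pulse experiment and the feedback it probes can be sketched with a toy box model. The code below is not the researchers’ four- or 66-equation model: the species, rate constants, sources, and pulse size are all invented for illustration. It contrasts a run with a fixed OH source (analogous to the four-equation picture) against one in which extra methane regenerates OH:

```python
# Toy box-model sketch of the OH-methane feedback. All rate constants,
# sources, and the H2 pulse size are hypothetical illustrative numbers.

def peak_excess_methane(oh_feedback, years=100.0, dt=0.001):
    ch4, oh, h2 = 1.0, 1.0, 0.1           # dimensionless; h2 = leaked pulse
    k_ch4, k_h2, k_other = 1.0, 0.5, 5.0  # OH loss-rate coefficients (1/yr)
    s_ch4 = k_ch4 * 1.0 * 1.0             # CH4 source balancing baseline loss
    alpha = 2.0 if oh_feedback else 0.0   # CH4-dependent OH production
    p0 = (k_ch4 + k_other) * 1.0 - alpha  # constant OH production (equilibrium)
    peak = 0.0
    for _ in range(int(years / dt)):      # forward-Euler integration
        loss_ch4 = k_ch4 * oh * ch4
        loss_h2 = k_h2 * oh * h2
        ch4 += (s_ch4 - loss_ch4) * dt
        oh += (p0 + alpha * ch4 - (loss_ch4 + loss_h2 + k_other * oh)) * dt
        h2 += -loss_h2 * dt
        peak = max(peak, ch4 - 1.0)       # track excess over baseline
    return peak

# The fixed-OH-source run should overshoot the run in which extra methane
# regenerates OH, mirroring the four- vs 66-equation contrast.
print(peak_excess_methane(False), peak_excess_methane(True))
```

The qualitative behavior matches the article’s description: letting excess methane feed OH production damps the methane excursion, so a model that holds OH production constant overstates the peak.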
Spread over time, the simple model doubles the amount of methane that forms in response to the hydrogen pulse.

Chen cautions that the point of their work is not to present their result as “a solid estimate” of the impact of hydrogen. Their analysis is based on a simple “box” model that represents global average conditions and assumes that all the chemical species present are well mixed. Thus, the species can vary over time — that is, they can be formed and destroyed — but any species that are present are always perfectly mixed. As a result, a box model does not account for the impact of, say, wind on the distribution of species. “The point we’re trying to make is that you can go too simple,” says Chen. “If you’re going simpler than what we’re representing, you will get further from the right answer.” She goes on to note, “The utility of a relatively simple model like ours is that all of the knobs and levers are very clear. That means you can explore the system and see what affects a value of interest.”

Leaked hydrogen versus leaked natural gas: A climate comparison

Burning natural gas produces fewer GHG emissions than does burning coal or oil; but as with hydrogen, any natural gas that’s leaked from wells, pipelines, and processing facilities can have climate impacts, negating some of the perceived benefits of using natural gas in place of other fossil fuels. After all, natural gas consists largely of methane, the highly potent GHG in the atmosphere that’s cleaned up by the OH detergent. Given its potency, even small leaks of methane can have a large climate impact.

So when thinking about replacing natural gas fuel — essentially methane — with hydrogen fuel, it’s important to consider how the climate impacts of the two fuels compare if and when they’re leaked. The usual way to compare the climate impacts of two chemicals is a measure called the global warming potential, or GWP.
The GWP combines two measures: the radiative forcing of a gas — that is, its heat-trapping ability — and its lifetime in the atmosphere. Since the lifetimes of gases differ widely, the convention for comparing the climate impacts of two gases is to relate the GWP of each one to the GWP of carbon dioxide. But hydrogen and methane leakage both cause increases in methane, and that methane decays according to its own lifetime. Chen and her colleagues therefore realized that an unconventional procedure would work: they could compare the impacts of the two leaked gases directly. What they found was that the climate impact of hydrogen is about three times less than that of methane, on a per-mass basis. So switching from natural gas to hydrogen would not only eliminate combustion emissions but also potentially reduce the climate effects, depending on how much leaks.

Key takeaways

In summary, Chen highlights some of what she views as the key findings of the study. First on her list is the following: “We show that a really simple four-equation system is not what should be used to project out the atmospheric response to more hydrogen leakages in the future.” The researchers believe that their 66-equation model is a good compromise for the number of chemical reactions to include. It generates estimates for the GWP of methane “pretty much in line with the lower end of the numbers that most other groups are getting using much more sophisticated climate chemistry models,” says Chen. And it’s sufficiently transparent to use in exploring various options for protecting the climate. Indeed, the MIT researchers plan to use their model to examine scenarios that involve replacing other fossil fuels with hydrogen, to estimate the climate benefits of making the switch in coming decades.

The study also demonstrates a valuable new way to compare the greenhouse effects of two gases.
As long as their effects exist on similar time scales, a direct comparison is possible — and preferable to comparing each with carbon dioxide, which is extremely long-lived in the atmosphere. In this work, the direct comparison generates a simple look at the relative climate impacts of leaked hydrogen and leaked methane — valuable information to take into account when considering switching from natural gas to hydrogen.

Finally, the researchers offer practical guidance for infrastructure development and use for both hydrogen and natural gas. Their analyses determine that hydrogen fuel itself has a “non-negligible” GWP, as does natural gas, which is mostly methane. Therefore, minimizing leakage of both fuels will be necessary to achieve net-zero carbon emissions by 2050, the goal set by both the European Commission and the U.S. Department of State. Their paper concludes, “If used nearly leak-free, hydrogen is an excellent option. Otherwise, hydrogen should only be a temporary step in the energy transition, or it must be used in tandem with carbon-removal steps [elsewhere] to counter its warming effects.”
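For reference, the conventional GWP bookkeeping described above integrates a pulse’s radiative forcing over a chosen time horizon and divides by the same integral for a reference gas. The sketch below uses made-up forcing efficiencies and lifetimes, and treats each gas’s decay as a single exponential (real GWP calculations use multi-exponential response functions, especially for CO2):

```python
import math

def integrated_forcing(efficiency, lifetime, horizon):
    """Time-integrated radiative forcing of a unit pulse that decays
    exponentially with the given atmospheric lifetime (years)."""
    return efficiency * lifetime * (1.0 - math.exp(-horizon / lifetime))

def gwp(eff_gas, life_gas, eff_ref, life_ref, horizon):
    """GWP of a gas relative to a chosen reference gas over a time horizon."""
    return (integrated_forcing(eff_gas, life_gas, horizon)
            / integrated_forcing(eff_ref, life_ref, horizon))

# Hypothetical numbers purely to show the mechanics: a short-lived,
# strongly forcing gas has a large 20-year GWP that shrinks at 100 years.
print(gwp(100.0, 12.0, 1.0, 500.0, 20))
print(gwp(100.0, 12.0, 1.0, 500.0, 100))
```

Because the reference gas is a parameter rather than hard-coded CO2, passing methane’s properties as the reference gives the kind of direct gas-to-gas comparison the study makes between leaked hydrogen and leaked methane.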

  • in

    Transforming fusion from a scientific curiosity into a powerful clean energy source

If you’re looking for hard problems, building a nuclear fusion power plant is a pretty good place to start. Fusion — the process that powers the sun — has proven difficult to recreate here on Earth despite decades of research.

“There’s something very attractive to me about the magnitude of the fusion challenge,” Hartwig says. “It’s probably true of a lot of people at MIT. I’m driven to work on very hard problems. There’s something intrinsically satisfying about that battle. It’s part of the reason I’ve stayed in this field. We have to cross multiple frontiers of physics and engineering if we’re going to get fusion to work.”

The problem got harder when, in Hartwig’s last year in graduate school, the Department of Energy announced plans to terminate funding for the Alcator C-Mod tokamak, a major fusion experiment in MIT’s Plasma Science and Fusion Center that Hartwig needed to complete his degree. Hartwig was able to finish his PhD, and the scare didn’t dissuade him from the field. In fact, he took an associate professor position at MIT in 2017 to keep working on fusion.

“It was a pretty bleak time to take a faculty position in fusion energy, but I am a person who loves to find a vacuum,” says Hartwig, who is a newly tenured associate professor at MIT. “I adore a vacuum because there’s enormous opportunity in chaos.”

Hartwig did have one very good reason for hope. In 2012, he had taken a class taught by Professor Dennis Whyte that challenged students to design and assess the economics of a nuclear fusion power plant incorporating a new kind of high-temperature superconducting magnet. Hartwig says the magnets enable fusion reactors to be much smaller, cheaper, and faster.

Whyte, Hartwig, and a few other members of the class started working nights and weekends to prove the reactors were feasible.
In 2017, the group founded Commonwealth Fusion Systems (CFS) to build the world’s first commercial-scale fusion power plants.

Over the next four years, Hartwig led a research project at MIT with CFS that further developed the magnet technology and scaled it up to create a 20-tesla superconducting magnet — a suitable size for a nuclear fusion power plant.

The magnet, and subsequent tests of its performance, represented a turning point for the industry. Commonwealth Fusion Systems has since attracted more than $2 billion in investments to build its first reactors, while the fusion industry overall has exceeded $8 billion in private investment.

The old joke in fusion is that the technology is always 30 years away. But fewer people are laughing these days.

“The perspective in 2024 looks quite a bit different than it did in 2016, and a huge part of that is tied to the institutional capability of a place like MIT and the willingness of people here to accomplish big things,” Hartwig says.

A path to the stars

As a child growing up in St. Louis, Hartwig was interested in sports and playing outside with friends but had little interest in physics. When he went to Boston University as an undergraduate, he studied biomedical engineering simply because his older brother had done it, so he thought he could get a job. But as he was introduced to tools for structural experiments and analysis, he found himself more interested in how the tools worked than in what they could do.

“That led me to physics, and physics ended up leading me to nuclear science, where I’m basically still doing applied physics,” Hartwig explains.

Joining the field late in his undergraduate studies, Hartwig worked hard to get his physics degree on time. After graduation, he was burnt out, so he took two years off and raced his bicycle competitively while working in a bike shop.

“There’s so much pressure on people in science and engineering to go straight through,” Hartwig says.
“People say if you take time off, you won’t be able to get into graduate school, you won’t be able to get recommendation letters. I always tell my students, ‘It depends on the person.’ Everybody’s different, but it was a great period for me, and it really set me up to enter graduate school with a more mature mindset and to be more focused.”

Hartwig returned to academia as a PhD student in MIT’s Department of Nuclear Science and Engineering in 2007. When his thesis advisor, Dennis Whyte, announced a course focused on designing nuclear fusion power plants, it caught Hartwig’s eye. The final projects showed a surprisingly promising path forward for a fusion field that had been stagnant for decades. The rest was history.

“We started CFS with the idea that it would partner deeply with MIT and MIT’s Plasma Science and Fusion Center to leverage the infrastructure, expertise, people, and capabilities that we have at MIT,” Hartwig says. “We had to start the company with the idea that it would be deeply partnered with MIT in an innovative way that hadn’t really been done before.”

Guided by impact

Hartwig says the Department of Nuclear Science and Engineering, and the Plasma Science and Fusion Center in particular, have seen a huge influx of graduate student applications in recent years.

“There’s so much demand, because people are excited again about the possibilities,” Hartwig says. “Instead of having fusion and a machine built in one or two generations, we’ll hopefully be learning how these things work in under a decade.”

Hartwig’s research group is still testing CFS’ new magnets, but it is also partnering with other fusion companies in an effort to advance the field more broadly.

Overall, when Hartwig looks back at his career, the thing he is most proud of is switching specialties every six years or so, from building equipment for his PhD to conducting fundamental experiments to designing reactors to building magnets.

“It’s not that traditional in academia,” Hartwig says.
“Where I’ve found success is coming into something new, bringing a naivety but also realism to a new field, and offering a different toolkit, a different approach, or a different idea about what can be done.”

Now Hartwig is on to his next act: developing new ways to study materials for use in fusion and fission reactors.

“I’m already interested in moving on to the next thing, the next field where I’m not a trained expert,” Hartwig says. “It’s about identifying where there’s stagnation in fusion and in technology, where innovation is not happening where we desperately need it, and bringing new ideas to that.”