More stories

  • What choices does the world need to make to keep global warming below 2 C?

    When the 2015 Paris Agreement set a long-term goal of keeping global warming “well below 2 degrees Celsius, compared to pre-industrial levels” to avoid the worst impacts of climate change, it did not specify how its nearly 200 signatory nations could collectively achieve that goal. Each nation was left to its own devices to reduce greenhouse gas emissions in alignment with the 2 C target. Now a new modeling strategy developed at the MIT Joint Program on the Science and Policy of Global Change, which explores hundreds of potential future development pathways, provides new insights on the energy and technology choices needed for the world to meet that target.

    Described in a study appearing in the journal Earth’s Future, the new strategy combines two well-known computer modeling techniques to scope out the energy and technology choices needed over the coming decades to reduce emissions sufficiently to achieve the Paris goal.

    The first technique, Monte Carlo analysis, quantifies uncertainty levels for dozens of energy and economic indicators including fossil fuel availability, advanced energy technology costs, and population and economic growth; feeds that information into a multi-region, multi-economic-sector model of the world economy that captures the cross-sectoral impacts of energy transitions; and runs that model hundreds of times to estimate the likelihood of different outcomes. The MIT study focuses on projections through the year 2100 of economic growth and emissions for different sectors of the global economy, as well as energy and technology use.
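    The Monte Carlo step can be sketched as follows. Everything here is an illustrative stand-in, not the Joint Program's actual model: the toy emissions function, the two uncertain inputs, and their assumed distributions are invented to show the pattern of sampling inputs, running the model repeatedly, and summarizing the spread of outcomes.

```python
import random
import statistics

def toy_emissions_model(gdp_growth, clean_cost_index):
    """Hypothetical stand-in for the multi-region, multi-sector model:
    faster growth raises emissions; cheaper clean energy lowers them."""
    baseline = 40.0  # Gt CO2-eq per year, illustrative
    return baseline * (1 + gdp_growth) * clean_cost_index

random.seed(0)
runs = []
for _ in range(1000):
    # Sample each uncertain input from an assumed distribution.
    gdp_growth = random.gauss(0.025, 0.01)       # mean 2.5%/yr growth
    clean_cost_index = random.uniform(0.5, 1.0)  # future cost vs. today
    runs.append(toy_emissions_model(gdp_growth, clean_cost_index))

print(f"median projected emissions:  {statistics.median(runs):.1f} Gt")
print(f"90th-percentile projection:  {sorted(runs)[900]:.1f} Gt")
```

    Each of the hundreds of runs is one internally consistent future; the distribution over runs, not any single run, is the result.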

    The second technique, scenario discovery, uses machine learning tools to screen databases of model simulations in order to identify outcomes of interest and their conditions for occurring. The MIT study applies these tools in a unique way by combining them with the Monte Carlo analysis to explore how different outcomes are related to one another (e.g., do low-emission outcomes necessarily involve large shares of renewable electricity?). This approach can also identify individual scenarios, out of the hundreds explored, that result in specific combinations of outcomes of interest (e.g., scenarios with low emissions, high GDP growth, and limited impact on electricity prices), and also provide insight into the conditions needed for that combination of outcomes.
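    Scenario discovery can be sketched with an off-the-shelf decision tree standing in for the study's machine learning tools. The database of runs, the two inputs, the toy emissions relationship, and the "low emission" cutoff below are all hypothetical; the point is the workflow of labeling outcomes of interest and learning which input conditions separate them.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Hypothetical database of 500 model runs: two sampled inputs and a
# toy emissions outcome (the relationship is invented for illustration).
renewable_share = rng.uniform(0.1, 0.9, 500)
gas_price = rng.uniform(2.0, 10.0, 500)
emissions = 50 * (1 - renewable_share) + 2 * (10 - gas_price)

# Outcome of interest: the low-emission runs.
low_emission = (emissions < 20).astype(int)

# Scenario discovery: learn which input conditions separate runs that
# achieve the outcome from runs that do not.
features = np.column_stack([renewable_share, gas_price])
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(
    features, low_emission)
print(export_text(tree, feature_names=["renewable_share", "gas_price"]))
```

    The printed rules (e.g., thresholds on renewable share) are the "conditions for occurring" the technique extracts; in the real study the same idea is applied to the full Monte Carlo database.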

    Using this unique approach, the MIT Joint Program researchers find several possible patterns of energy and technology development under a specified long-term climate target or economic outcome.

    “This approach shows that there are many pathways to a successful energy transition that can be a win-win for the environment and economy,” says Jennifer Morris, an MIT Joint Program research scientist and the study’s lead author. “Toward that end, it can be used to guide decision-makers in government and industry to make sound energy and technology choices and avoid biases in perceptions of what ‘needs’ to happen to achieve certain outcomes.”

    For example, while achieving the 2 C goal, the global level of combined wind and solar electricity generation by 2050 could be less than three times or more than 12 times the current level (which is just over 2,000 terawatt hours). These are very different energy pathways, but both can be consistent with the 2 C goal. Similarly, there are many different energy mixes that can be consistent with maintaining high GDP growth in the United States while also achieving the 2 C goal, with different possible roles for renewables, natural gas, carbon capture and storage, and bioenergy. The study finds renewables to be the most robust electricity investment option, with sizable growth projected under each of the long-term temperature targets explored.
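    The quoted range implies very different build-out rates. A quick back-of-envelope conversion (assuming growth runs from roughly 2022 to 2050):

```python
# Annualized growth implied by the 2050 range quoted above:
# roughly 3x to 12x today's ~2,000 TWh of wind + solar generation.
current_twh = 2000
years = 2050 - 2022  # 28 years, assuming growth starts now

for multiple in (3, 12):
    annual_rate = (multiple ** (1 / years) - 1) * 100
    print(f"{multiple}x by 2050 -> {multiple * current_twh:,} TWh, "
          f"~{annual_rate:.1f}% growth per year")
```

    That is, the 2 C goal is consistent with anything from roughly 4 percent to roughly 9 percent annual growth in wind and solar output, depending on what happens elsewhere in the energy system.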

    The researchers also find that long-term climate targets have little impact on economic output for most economic sectors through 2050, but do require each sector to significantly accelerate reduction of its greenhouse gas emissions intensity (emissions per unit of economic output) so as to reach near-zero levels by midcentury.
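    The distinction between output and intensity can be made concrete with hypothetical numbers: sustained output growth is compatible with falling emissions as long as intensity falls faster.

```python
# Hypothetical sector trajectory: output keeps growing (~3%/yr) while
# emissions intensity halves each decade, so total emissions still fall.
rows = []
output, intensity = 100.0, 1.0   # output index; t CO2-eq per unit
for year in (2025, 2035, 2045):
    rows.append((year, output, intensity, output * intensity))
    output *= 1.03 ** 10   # a decade of ~3%/yr growth
    intensity *= 0.5       # intensity halves each decade

for year, out, inten, em in rows:
    print(f"{year}: output {out:6.1f}  intensity {inten:.2f}  emissions {em:6.1f}")
```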

    “Given the range of development pathways that can be consistent with meeting a 2 degrees C goal, policies that target only specific sectors or technologies can unnecessarily narrow the solution space, leading to higher costs,” says former MIT Joint Program Co-Director John Reilly, a co-author of the study. “Our findings suggest that policies designed to encourage a portfolio of technologies and sectoral actions can be a wise strategy that hedges against risks.”

    The research was supported by the U.S. Department of Energy Office of Science.

  • At Climate Grand Challenges showcase event, an exploration of how to accelerate breakthrough solutions

    On the eve of Earth Day, more than 300 faculty, researchers, students, government officials, and industry leaders gathered in the Samberg Conference Center, along with thousands more who tuned in online, to celebrate MIT’s first-ever Climate Grand Challenges and the five most promising concepts to emerge from the two-year competition.

    The event began with a climate policy conversation between MIT President L. Rafael Reif and Special Presidential Envoy for Climate John Kerry, followed by presentations from each of the winning flagship teams, and concluded with an expert panel that explored pathways for moving from ideas to impact at scale as quickly as possible.

    “In 2020, when we launched the Climate Grand Challenges, we wanted to focus the daring creativity and pioneering expertise of the MIT community on the urgent problem of climate change,” said President Reif in kicking off the event. “Together these flagship projects will define a transformative new research agenda at MIT, one that has the potential to make meaningful contributions to the global climate response.”

    Reif and Kerry discussed multiple aspects of the climate crisis, including mitigation, adaptation, and the policies and strategies that can help the world avert the worst consequences of climate change and make the United States a leader again in bringing technology into commercial use. Referring to the accelerated wartime research effort that helped turn the tide in World War II, which included work conducted at MIT, Kerry said, “We need about five Manhattan Projects, frankly.”

    “People are now sensing a much greater urgency to finding solutions — new technology — and taking to scale some of the old technologies,” Kerry said. “There are things that are happening that I think are exciting, but the problem is it’s not happening fast enough.”

    Strategies for taking technology from the lab to the marketplace were the basis for the final portion of the event. The panel was moderated by Alicia Barton, president and CEO of FirstLight Power, and included Manish Bapna, president and CEO of the Natural Resources Defense Council; Jack Little, CEO and co-founder of MathWorks; Arati Prabhakar, president of Actuate and former head of the Defense Advanced Research Projects Agency; and Katie Rae, president and managing director of The Engine. The discussion touched upon the importance of marshaling the necessary resources and building the cross-sector partnerships required to scale the technologies being developed by the flagship teams and to deliver them to the world in time to make a difference. 

    “MIT doesn’t sit on its hands ever, and innovation is central to its founding,” said Rae. “The students coming out of MIT at every level, along with the professors, have been committed to these challenges for a long time and therefore will have a big impact. These flagships have always been in process, but now we have an extraordinary moment to commercialize these projects.”

    The panelists weighed in on how to change the mindset around finance, policy, business, and community adoption to scale massive shifts in energy generation, transportation, and other major carbon-emitting industries. They stressed the importance of policies that address the economic, equity, and public health impacts of climate change and of reimagining supply chains and manufacturing to grow and distribute these technologies quickly and affordably. 

    “We are embarking on five adventures, but we do not know yet, cannot know yet, where these projects will take us,” said Maria Zuber, MIT’s vice president for research. “These are powerful and promising ideas. But each one will require focused effort, creative and interdisciplinary teamwork, and sustained commitment and support if they are to become part of the climate and energy revolution that the world urgently needs. This work begins now.” 

    Zuber called for investment from philanthropists and financiers, and urged companies, governments, and others to join this all-of-humanity effort. Associate Provost for International Activities Richard Lester echoed this message in closing the event. 

    “Every one of us needs to put our shoulder to the wheel at the points where our leverage is maximized — where we can do what we’re best at,” Lester said. “For MIT, Climate Grand Challenges is one of those maximum leverage points.”

  • Developing electricity-powered, low-emissions alternatives to carbon-intensive industrial processes

    On April 11, 2022, MIT announced five multiyear flagship projects in the first-ever Climate Grand Challenges, a new initiative to tackle complex climate problems and deliver breakthrough solutions to the world as quickly as possible. This is the second article in a five-part series highlighting the most promising concepts to emerge from the competition, and the interdisciplinary research teams behind them.

    One of the biggest leaps that humankind could take to drastically lower greenhouse gas emissions globally would be the complete decarbonization of industry. But without finding low-cost, environmentally friendly substitutes for industrial materials, the traditional production of steel, cement, ammonia, and ethylene will continue pumping out billions of tons of carbon annually; these sectors alone are responsible for at least one third of society’s global greenhouse gas emissions. 

    A major problem is that industrial manufacturers, whose success depends on reliable, cost-efficient, and large-scale production methods, are too heavily invested in processes that have historically been powered by fossil fuels to quickly switch to new alternatives. It’s a machine that kicked on more than 100 years ago, and which MIT electrochemical engineer Yet-Ming Chiang says we can’t shut off without major disruptions to the world’s massive supply chain of these materials. What’s needed, Chiang says, is a broader, collaborative clean energy effort that takes “targeted fundamental research, all the way through to pilot demonstrations that greatly lowers the risk for adoption of new technology by industry.”

    This would be a new approach to decarbonization of industrial materials production that relies on largely unexplored but cleaner electrochemical processes. New production methods could be optimized and integrated into the industrial machine to make it run on low-cost, renewable electricity in place of fossil fuels. 

    Recognizing this, Chiang, the Kyocera Professor in the Department of Materials Science and Engineering, teamed with research collaborator Bilge Yildiz, the Breene M. Kerr Professor of Nuclear Science and Engineering and professor of materials science and engineering, with key input from Karthish Manthiram, visiting professor in the Department of Chemical Engineering, to submit a project proposal to the MIT Climate Grand Challenges. Their plan: to create an innovation hub on campus that would bring together MIT researchers individually investigating decarbonization of steel, cement, ammonia, and ethylene under one roof, combining research equipment and directly collaborating on new methods to produce these four key materials.

    Many researchers across MIT have already signed on to join the effort, including Antoine Allanore, associate professor of metallurgy, who specializes in the development of sustainable materials and manufacturing processes, and Elsa Olivetti, the Esther and Harold E. Edgerton Associate Professor in the Department of Materials Science and Engineering, who is an expert in materials economics and sustainability. Other MIT faculty currently involved include Fikile Brushett, Betar Gallant, Ahmed Ghoniem, William Green, Jeffrey Grossman, Ju Li, Yuriy Román-Leshkov, Yang Shao-Horn, Robert Stoner, Yogesh Surendranath, Timothy Swager, and Kripa Varanasi.

    “The team we brought together has the expertise needed to tackle these challenges, including electrochemistry — using electricity to decarbonize these chemical processes — and materials science and engineering, process design and scale-up, technoeconomic analysis, and system integration, which is all needed for this to go out from our labs to the field,” says Yildiz.

    Selected from a field of more than 100 proposals, their Center for Electrification and Decarbonization of Industry (CEDI) will be the first such institute worldwide dedicated to testing and scaling the most innovative and promising technologies in sustainable chemicals and materials. CEDI will work to facilitate rapid translation of lab discoveries into affordable, scalable industry solutions, with potential to offset as much as 15 percent of greenhouse gas emissions. The team estimates that some CEDI projects already underway could be commercialized within three years.

    “The real timeline is as soon as possible,” says Chiang.

    To achieve CEDI’s ambitious goals, a physical location is key, staffed with permanent faculty, as well as undergraduates, graduate students, and postdocs. Yildiz says the center’s success will depend on engaging student researchers to carry forward with research addressing the biggest ongoing challenges to decarbonization of industry.

    “We are training young scientists, students, on the learned urgency of the problem,” says Yildiz. “We empower them with the skills needed, and even if an individual project does not find the implementation in the field right away, at least, we would have trained the next generation that will continue to go after them in the field.”

    Chiang’s background in electrochemistry showed him how the efficiency of cement production could benefit from adopting clean electricity sources, and Yildiz’s work on ethylene, the source of plastic and one of industry’s most valued chemicals, has revealed overlooked cost benefits to switching to electrochemical processes with less expensive starting materials. With industry partners, they hope to continue these lines of fundamental research along with Allanore, who is focused on electrifying steel production, and Manthiram, who is developing new processes for ammonia. Olivetti will focus on understanding risks and barriers to implementation. This multilateral approach aims to speed up the timeline to industry adoption of new technologies at the scale needed for global impact.

    “One of the points of emphasis in this whole center is going to be applying technoeconomic analysis of what it takes to be successful at a technical and economic level, as early in the process as possible,” says Chiang.

    The impact of large-scale industry adoption of clean energy sources in these four key areas that CEDI plans to target first would be profound, as these sectors are currently responsible for 7.5 billion tons of emissions annually. There is the potential for even greater impact on emissions as new knowledge is applied to other industrial products beyond the initial four targets of steel, cement, ammonia, and ethylene. Meanwhile, the center will stand as a hub to attract new industry, government stakeholders, and research partners to collaborate on urgently needed solutions, both newly arising and long overdue.

    When Chiang and Yildiz first met to discuss ideas for MIT Climate Grand Challenges, they decided they wanted to build a climate research center that functioned unlike any other to help pivot large industry toward decarbonization. Beyond considering how new solutions will impact industry’s bottom line, CEDI will also investigate unique synergies that could arise from the electrification of industry, like processes that would create new byproducts that could be the feedstock to other industry processes, reducing waste and increasing efficiencies in the larger system. And because industry is so good at scaling, those added benefits would be widespread, finally replacing century-old technologies with critical updates designed to improve production and markedly reduce industry’s carbon footprint sooner rather than later.

    “Everything we do, we’re going to try to do with urgency,” Chiang says. “The fundamental research will be done with urgency, and the transition to commercialization, we’re going to do with urgency.”

  • A new heat engine with no moving parts is as efficient as a steam turbine

    Engineers at MIT and the National Renewable Energy Laboratory (NREL) have designed a heat engine with no moving parts. Their new demonstrations show that it converts heat to electricity with over 40 percent efficiency — a performance better than that of traditional steam turbines.

    The heat engine is a thermophotovoltaic (TPV) cell, similar to a solar panel’s photovoltaic cells, that passively captures high-energy photons from a white-hot heat source and converts them into electricity. The team’s design can generate electricity from a heat source of between 1,900 and 2,400 degrees Celsius, or up to about 4,300 degrees Fahrenheit.

    The researchers plan to incorporate the TPV cell into a grid-scale thermal battery. The system would absorb excess energy from renewable sources such as the sun and store that energy in heavily insulated banks of hot graphite. When the energy is needed, such as on overcast days, TPV cells would convert the heat into electricity, and dispatch the energy to a power grid.
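    The scale of such a graphite store can be roughed out from first principles. The one-tonne mass and the specific-heat value below are assumptions for illustration (graphite's specific heat rises with temperature; a round mid-range value is used); only the roughly 40 percent conversion efficiency and the 1,900 to 2,400 degrees Celsius window come from the article.

```python
# Rough energy accounting for the thermal battery concept above.
mass_kg = 1000.0        # one tonne of graphite, assumed
c_p = 1500.0            # J/(kg.K), assumed mid-range specific heat
delta_t = 2400 - 1900   # K, discharge across the cell's quoted window

heat_joules = mass_kg * c_p * delta_t
heat_kwh = heat_joules / 3.6e6          # J -> kWh
electric_kwh = heat_kwh * 0.40          # TPV conversion at ~40%

print(f"stored heat:   {heat_kwh:.0f} kWh (thermal) per tonne")
print(f"dispatchable:  {electric_kwh:.0f} kWh (electric) per tonne")
```

    Roughly 80 kWh of electricity per tonne under these assumptions, which is why the envisioned systems use "huge banks" of graphite rather than small units.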

    With the new TPV cell, the team has now successfully demonstrated the main parts of the system in separate, small-scale experiments. They are working to integrate the parts to demonstrate a fully operational system. From there, they hope to scale up the system to replace fossil-fuel-driven power plants and enable a fully decarbonized power grid, supplied entirely by renewable energy.

    “Thermophotovoltaic cells were the last key step toward demonstrating that thermal batteries are a viable concept,” says Asegun Henry, the Robert N. Noyce Career Development Professor in MIT’s Department of Mechanical Engineering. “This is an absolutely critical step on the path to proliferate renewable energy and get to a fully decarbonized grid.”

    Henry and his collaborators have published their results today in the journal Nature. Co-authors at MIT include Alina LaPotin, Kevin Schulte, Kyle Buznitsky, Colin Kelsall, Andrew Rohskopf, and Evelyn Wang, the Ford Professor of Engineering and head of the Department of Mechanical Engineering, along with collaborators at NREL in Golden, Colorado.

    Jumping the gap

    More than 90 percent of the world’s electricity comes from sources of heat such as coal, natural gas, nuclear energy, and concentrated solar energy. For a century, steam turbines have been the industrial standard for converting such heat sources into electricity.

    On average, steam turbines reliably convert about 35 percent of a heat source into electricity, with about 60 percent representing the highest efficiency of any heat engine to date. But the machinery depends on moving parts that are temperature-limited. Heat sources higher than 2,000 degrees Celsius, such as Henry’s proposed thermal battery system, would be too hot for turbines.
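    The appeal of hotter sources follows from the Carnot bound, eta = 1 - Tc/Th, which no heat engine can exceed: the ceiling rises with source temperature. A quick illustration, assuming an ambient sink of about 300 K:

```python
# Carnot upper bound for the temperatures discussed above.
t_cold = 300.0  # K, assumed ambient sink

for t_hot_c in (600, 1900, 2400):   # source temperature, degrees C
    t_hot = t_hot_c + 273.15        # convert to kelvin
    eta = 1 - t_cold / t_hot
    print(f"{t_hot_c} C source -> Carnot limit {eta:.0%}")
```

    A 40 percent cell at a 2,400 C source is thus well below its thermodynamic ceiling, leaving room for further gains.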

    In recent years, scientists have looked into solid-state alternatives — heat engines with no moving parts that could potentially work efficiently at higher temperatures.

    “One of the advantages of solid-state energy converters is that they can operate at higher temperatures with lower maintenance costs because they have no moving parts,” Henry says. “They just sit there and reliably generate electricity.”

    Thermophotovoltaic cells offered one exploratory route toward solid-state heat engines. Much like solar cells, TPV cells could be made from semiconducting materials with a particular bandgap — the gap between a material’s valence band and its conduction band. If a photon with a high enough energy is absorbed by the material, it can kick an electron across the bandgap, where the electron can then conduct, and thereby generate electricity — doing so without moving rotors or blades.
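    The bandgap condition can be made concrete with Wien's displacement law, which locates the peak of a hot source's emission spectrum. The source temperature is picked from the article's quoted range, but both bandgap values below are illustrative assumptions, not the paper's materials.

```python
# A photon "jumps the gap" only if its energy h*c/lambda exceeds
# the cell's bandgap. Illustrative numbers.
H_C_EV_NM = 1239.84          # h*c in eV.nm
WIEN_B = 2.898e6             # Wien's displacement constant, nm.K

t_source = 2200 + 273.15     # a white-hot source in the quoted range, K
peak_nm = WIEN_B / t_source  # wavelength where emission peaks
peak_ev = H_C_EV_NM / peak_nm

print(f"emission peak: {peak_nm:.0f} nm ({peak_ev:.2f} eV)")
for bandgap_ev in (0.74, 1.4):   # assumed low- vs. higher-bandgap material
    absorbed = "yes" if peak_ev > bandgap_ev else "no"
    print(f"  {bandgap_ev} eV gap absorbs the peak photon: {absorbed}")
```

    The printout shows why higher-bandgap cells harvest only the high-energy tail of the spectrum, and hence why the design below pairs them with a second junction and a mirror to recycle the rest.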

    To date, most TPV cells have only reached efficiencies of around 20 percent, with the record at 32 percent, as they have been made of relatively low-bandgap materials that convert lower-temperature, low-energy photons, and therefore convert energy less efficiently.

    Catching light

    In their new TPV design, Henry and his colleagues looked to capture higher-energy photons from a higher-temperature heat source, thereby converting energy more efficiently. The team’s new cell does so with higher-bandgap materials and multiple junctions, or material layers, compared with existing TPV designs.

    The cell is fabricated from three main regions: a high-bandgap alloy, which sits over a slightly lower-bandgap alloy, underneath which is a mirror-like layer of gold. The first layer captures a heat source’s highest-energy photons and converts them into electricity, while lower-energy photons that pass through the first layer are captured by the second and converted to add to the generated voltage. Any photons that pass through this second layer are then reflected by the mirror, back to the heat source, rather than being absorbed as wasted heat.
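    The three-region logic amounts to sorting photons by energy. A minimal sketch, in which the two bandgap values are hypothetical placeholders rather than the paper's actual alloys:

```python
# Sorting photons by energy across the cell's three regions.
GAP_TOP_EV = 1.4      # assumed higher-bandgap top junction
GAP_BOTTOM_EV = 1.2   # assumed slightly lower-bandgap second junction

def route_photon(energy_ev):
    if energy_ev >= GAP_TOP_EV:
        return "top junction (highest-energy photons -> electricity)"
    if energy_ev >= GAP_BOTTOM_EV:
        return "bottom junction (adds to the generated voltage)"
    return "gold mirror (reflected back to the heat source, not wasted)"

for e in (1.6, 1.3, 0.9):
    print(f"{e:.1f} eV photon -> {route_photon(e)}")
```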

    The team tested the cell’s efficiency by placing it over a heat flux sensor — a device that directly measures the heat absorbed from the cell. They exposed the cell to a high-temperature lamp and concentrated the light onto the cell. They then varied the bulb’s intensity, or temperature, and observed how the cell’s power efficiency — the amount of power it produced, compared with the heat it absorbed — changed with temperature. Over a range of 1,900 to 2,400 degrees Celsius, the new TPV cell maintained an efficiency of around 40 percent.

    “We can get a high efficiency over a broad range of temperatures relevant for thermal batteries,” Henry says.

    The cell in the experiments is about a square centimeter. For a grid-scale thermal battery system, Henry envisions the TPV cells would have to scale up to about 10,000 square feet (about a quarter of a football field), and would operate in climate-controlled warehouses to draw power from huge banks of stored solar energy. He points out that an infrastructure exists for making large-scale photovoltaic cells, which could also be adapted to manufacture TPVs.

    “There’s definitely a huge net positive here in terms of sustainability,” Henry says. “The technology is safe, environmentally benign in its life cycle, and can have a tremendous impact on abating carbon dioxide emissions from electricity production.”

    This research was supported, in part, by the U.S. Department of Energy.

  • Engineers enlist AI to help scale up advanced solar cell manufacturing

    Perovskites are a family of materials that are currently the leading contender to potentially replace today’s silicon-based solar photovoltaics. They hold the promise of panels that are far thinner and lighter, that could be made with ultra-high throughput at room temperature instead of at hundreds of degrees, and that are cheaper and easier to transport and install. But bringing these materials from controlled laboratory experiments into a product that can be manufactured competitively has been a long struggle.

    Manufacturing perovskite-based solar cells involves optimizing at least a dozen or so variables at once, even within one particular manufacturing approach among many possibilities. But a new system based on a novel approach to machine learning could speed up the development of optimized production methods and help make the next generation of solar power a reality.

    The system, developed by researchers at MIT and Stanford University over the last few years, makes it possible to integrate data from prior experiments, and information based on personal observations by experienced workers, into the machine learning process. This makes the outcomes more accurate and has already led to the manufacturing of perovskite cells with an energy conversion efficiency of 18.5 percent, a competitive level for today’s market.

    The research is reported today in the journal Joule, in a paper by MIT professor of mechanical engineering Tonio Buonassisi, Stanford professor of materials science and engineering Reinhold Dauskardt, recent MIT research assistant Zhe Liu, Stanford doctoral graduate Nicholas Rolston, and three others.

    Perovskites are a group of layered crystalline compounds defined by the configuration of the atoms in their crystal lattice. There are thousands of such possible compounds and many different ways of making them. While most lab-scale development of perovskite materials uses a spin-coating technique, that’s not practical for larger-scale manufacturing, so companies and labs around the world have been searching for ways of translating these lab materials into a practical, manufacturable product.

    “There’s always a big challenge when you’re trying to take a lab-scale process and then transfer it to something like a startup or a manufacturing line,” says Rolston, who is now an assistant professor at Arizona State University. The team looked at a process that they felt had the greatest potential, a method called rapid spray plasma processing, or RSPP.

    The manufacturing process would involve a moving roll-to-roll surface, or series of sheets, on which the precursor solutions for the perovskite compound would be sprayed or ink-jetted as the sheet rolled by. The material would then move on to a curing stage, providing a rapid and continuous output “with throughputs that are higher than for any other photovoltaic technology,” Rolston says.

    “The real breakthrough with this platform is that it would allow us to scale in a way that no other material has allowed us to do,” he adds. “Even materials like silicon require a much longer timeframe because of the processing that’s done. Whereas you can think of [this approach as more] like spray painting.”

    Within that process, at least a dozen variables may affect the outcome, some of them more controllable than others. These include the composition of the starting materials, the temperature, the humidity, the speed of the processing path, the distance of the nozzle used to spray the material onto a substrate, and the methods of curing the material. Many of these factors can interact with each other, and if the process is in open air, then humidity, for example, may be uncontrolled. Evaluating all possible combinations of these variables through experimentation is impossible, so machine learning was needed to help guide the experimental process.

    But while most machine-learning systems use raw data such as measurements of the electrical and other properties of test samples, they don’t typically incorporate human experience such as qualitative observations made by the experimenters of the visual and other properties of the test samples, or information from other experiments reported by other researchers. So, the team found a way to incorporate such outside information into the machine learning model, using a probability factor based on a mathematical technique called Bayesian Optimization.
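    A minimal sketch of Bayesian optimization in this spirit: a Gaussian-process surrogate plus an expected-improvement rule, applied to a single toy process knob. The "experiment" function, kernel settings, and seed points are all invented for illustration; the seed points stand in for the prior data and expert observations that the real system folds in.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def process_efficiency(temp):
    """Hypothetical stand-in for one real spray-processing experiment:
    measured cell efficiency (%) as a function of a single knob."""
    return 18 * np.exp(-((temp - 120) / 40) ** 2) + rng.normal(0, 0.1)

def kernel(a, b, length=30.0, var=25.0):
    # Squared-exponential covariance between process settings.
    return var * np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

# Seed with a few "prior" experiments -- the hook where earlier data
# or expert observations get folded in.
x = np.array([60.0, 180.0])
y = np.array([process_efficiency(t) for t in x])

candidates = np.linspace(50.0, 250.0, 201)
for _ in range(8):
    # Gaussian-process posterior mean/std at every candidate setting
    # (0.01 = assumed observation-noise variance).
    K_inv = np.linalg.inv(kernel(x, x) + 0.01 * np.eye(len(x)))
    Ks = kernel(candidates, x)
    mu = Ks @ K_inv @ y
    std = np.sqrt(np.clip(25.0 - np.einsum("ij,jk,ik->i", Ks, K_inv, Ks),
                          1e-9, None))

    # Expected improvement: trade off exploring uncertain settings
    # against exploiting settings predicted to be good.
    z = (mu - y.max()) / std
    ei = (mu - y.max()) * norm.cdf(z) + std * norm.pdf(z)

    # Run the next "experiment" at the most promising setting.
    x_next = candidates[np.argmax(ei)]
    x = np.append(x, x_next)
    y = np.append(y, process_efficiency(x_next))

print(f"best setting found: {x[np.argmax(y)]:.0f} C -> {y.max():.1f}% efficiency")
```

    The real system optimizes a dozen knobs at once and weights prior information probabilistically, but the loop is the same: fit a surrogate, pick the most informative next experiment, run it, and repeat.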

    Using the system, he says, “having a model that comes from experimental data, we can find out trends that we weren’t able to see before.” For example, they initially had trouble adjusting for uncontrolled variations in humidity in their ambient setting. But the model showed them “that we could overcome our humidity challenges by changing the temperature, for instance, and by changing some of the other knobs.”

    The system now allows experimenters to much more rapidly guide their process in order to optimize it for a given set of conditions or required outcomes. In their experiments, the team focused on optimizing the power output, but the system could also be used to simultaneously incorporate other criteria, such as cost and durability — something members of the team are continuing to work on, Buonassisi says.

    The researchers were encouraged by the Department of Energy, which sponsored the work, to commercialize the technology, and they’re currently focusing on tech transfer to existing perovskite manufacturers. “We are reaching out to companies now,” Buonassisi says, and the code they developed has been made freely available through an open-source server. “It’s now on GitHub, anyone can download it, anyone can run it,” he says. “We’re happy to help companies get started in using our code.”

    Already, several companies are gearing up to produce perovskite-based solar panels, even though they are still working out the details of how to produce them, says Liu, who is now at Northwestern Polytechnical University in Xi’an, China. He says companies there are not yet doing large-scale manufacturing, but instead starting with smaller, high-value applications such as building-integrated solar tiles where appearance is important. Three of these companies “are on track or are being pushed by investors to manufacture 1-meter by 2-meter rectangular modules [comparable to today’s most common solar panels], within two years,” he says.

    “The problem is, they don’t have a consensus on what manufacturing technology to use,” Liu says. The RSPP method, developed at Stanford, “still has a good chance” to be competitive, he says. And the machine learning system the team developed could prove to be important in guiding the optimization of whatever process ends up being used.

    “The primary goal was to accelerate the process, so it required less time, less experiments, and less human hours to develop something that is usable right away, for free, for industry,” he says.

    “Existing work on machine-learning-driven perovskite PV fabrication largely focuses on spin-coating, a lab-scale technique,” says Ted Sargent, University Professor at the University of Toronto, who was not associated with this work, which he says demonstrates “a workflow that is readily adapted to the deposition techniques that dominate the thin-film industry. Only a handful of groups have the simultaneous expertise in engineering and computation to drive such advances.” Sargent adds that this approach “could be an exciting advance for the manufacture of a broader family of materials” including LEDs, other PV technologies, and graphene, “in short, any industry that uses some form of vapor or vacuum deposition.” 

    The team also included Austin Flick and Thomas Colburn at Stanford and Zekun Ren at the Singapore-MIT Alliance for Research and Technology (SMART). In addition to the Department of Energy, the work was supported by a fellowship from the MIT Energy Initiative, the Graduate Research Fellowship Program from the National Science Foundation, and the SMART program.

  • MIT announces five flagship projects in first-ever Climate Grand Challenges competition

    MIT today announced the five flagship projects selected in its first-ever Climate Grand Challenges competition. These multiyear projects will define a dynamic research agenda focused on unraveling some of the toughest unsolved climate problems and bringing high-impact, science-based solutions to the world on an accelerated basis.

    Representing the most promising concepts to emerge from the two-year competition, the five flagship projects will receive additional funding and resources from MIT and others to develop their ideas and swiftly transform them into practical solutions at scale.

    “Climate Grand Challenges represents a whole-of-MIT drive to develop game-changing advances to confront the escalating climate crisis, in time to make a difference,” says MIT President L. Rafael Reif. “We are inspired by the creativity and boldness of the flagship ideas and by their potential to make a significant contribution to the global climate response. But given the planet-wide scale of the challenge, success depends on partnership. We are eager to work with visionary leaders in every sector to accelerate this impact-oriented research, implement serious solutions at scale, and inspire others to join us in confronting this urgent challenge for humankind.”

    Brief descriptions of the five Climate Grand Challenges flagship projects are provided below.

    Bringing Computation to the Climate Challenge

    This project leverages advances in artificial intelligence, machine learning, and data sciences to improve the accuracy of climate models and make them more useful to a variety of stakeholders — from communities to industry. The team is developing a digital twin of the Earth that harnesses more data than ever before to reduce and quantify uncertainties in climate projections.

    Research leads: Raffaele Ferrari, the Cecil and Ida Green Professor of Oceanography in the Department of Earth, Atmospheric and Planetary Sciences, and director of the Program in Atmospheres, Oceans, and Climate; and Noelle Eckley Selin, director of the Technology and Policy Program and professor with a joint appointment in the Institute for Data, Systems, and Society and the Department of Earth, Atmospheric and Planetary Sciences

    Center for Electrification and Decarbonization of Industry

    This project seeks to reinvent and electrify the processes and materials behind hard-to-decarbonize industries like steel, cement, ammonia, and ethylene production. A new innovation hub will perform targeted fundamental research and engineering with urgency, pushing the technological envelope on electricity-driven chemical transformations.

    Research leads: Yet-Ming Chiang, the Kyocera Professor of Materials Science and Engineering, and Bilge Yıldız, the Breene M. Kerr Professor in the Department of Nuclear Science and Engineering and professor in the Department of Materials Science and Engineering

    Preparing for a new world of weather and climate extremes

    This project addresses key gaps in knowledge about intensifying extreme events such as floods, hurricanes, and heat waves, and quantifies their long-term risk in a changing climate. The team is developing a scalable climate-change adaptation toolkit to help vulnerable communities and low-carbon energy providers prepare for these extreme weather events.

    Research leads: Kerry Emanuel, the Cecil and Ida Green Professor of Atmospheric Science in the Department of Earth, Atmospheric and Planetary Sciences and co-director of the MIT Lorenz Center; Miho Mazereeuw, associate professor of architecture and urbanism in the Department of Architecture and director of the Urban Risk Lab; and Paul O’Gorman, professor in the Program in Atmospheres, Oceans, and Climate in the Department of Earth, Atmospheric and Planetary Sciences

    The Climate Resilience Early Warning System

    The CREWSnet project seeks to reinvent climate change adaptation with a novel forecasting system that empowers underserved communities to interpret local climate risk, proactively plan for their futures incorporating resilience strategies, and minimize losses. CREWSnet will initially be demonstrated in southwestern Bangladesh, serving as a model for similarly threatened regions around the world.

    Research leads: John Aldridge, assistant leader of the Humanitarian Assistance and Disaster Relief Systems Group at MIT Lincoln Laboratory, and Elfatih Eltahir, the H.M. King Bhumibol Professor of Hydrology and Climate in the Department of Civil and Environmental Engineering

    Revolutionizing agriculture with low-emissions, resilient crops

    This project works to revolutionize the agricultural sector with climate-resilient crops and fertilizers that have the ability to dramatically reduce greenhouse gas emissions from food production.

    Research lead: Christopher Voigt, the Daniel I.C. Wang Professor in the Department of Biological Engineering

    “As one of the world’s leading institutions of research and innovation, it is incumbent upon MIT to draw on our depth of knowledge, ingenuity, and ambition to tackle the hard climate problems now confronting the world,” says Richard Lester, MIT associate provost for international activities. “Together with collaborators across industry, finance, community, and government, the Climate Grand Challenges teams are looking to develop and implement high-impact, path-breaking climate solutions rapidly and at a grand scale.”

    The initial call for ideas in 2020 yielded nearly 100 letters of interest from almost 400 faculty members and senior researchers, representing 90 percent of MIT departments. After an extensive evaluation, 27 finalist teams received a total of $2.7 million to develop comprehensive research and innovation plans across four broad research themes.

    To select the winning projects, research plans were reviewed by panels of international experts representing relevant scientific and technical domains as well as experts in processes and policies for innovation and scalability.

    “In response to climate change, the world really needs to do two things quickly: deploy the solutions we already have much more widely, and develop new solutions that are urgently needed to tackle this intensifying threat,” says Maria Zuber, MIT vice president for research. “These five flagship projects exemplify MIT’s strong determination to bring its knowledge and expertise to bear in generating new ideas and solutions that will help solve the climate problem.”

    “The Climate Grand Challenges flagship projects set a new standard for inclusive climate solutions that can be adapted and implemented across the globe,” says MIT Chancellor Melissa Nobles. “This competition propels the entire MIT research community — faculty, students, postdocs, and staff — to act with urgency around a worsening climate crisis, and I look forward to seeing the difference these projects can make.”

    “MIT’s efforts on climate research amid the climate crisis was a primary reason that I chose to attend MIT, and remains a reason that I view the Institute favorably. MIT has a clear opportunity to be a thought leader in the climate space in our own MIT way, which is why CGC fits in so well,” says senior Megan Xu, who served on the Climate Grand Challenges student committee and is studying ways to make the food system more sustainable.

    The Climate Grand Challenges competition is a key initiative of “Fast Forward: MIT’s Climate Action Plan for the Decade,” which the Institute published in May 2021. Fast Forward outlines MIT’s comprehensive plan for helping the world address the climate crisis. It consists of five broad areas of action: sparking innovation, educating future generations, informing and leveraging government action, reducing MIT’s own climate impact, and uniting and coordinating all of MIT’s climate efforts.

  • in

    New England renewables + Canadian hydropower

    The urgent need to cut carbon emissions has prompted a growing number of U.S. states to commit to achieving 100 percent clean electricity by 2040 or 2050. But figuring out how to meet those commitments and still have a reliable and affordable power system is a challenge. Wind and solar installations will form the backbone of a carbon-free power system, but what technologies can meet electricity demand when those intermittent renewable sources are not adequate?

    In general, the options being discussed include nuclear power, natural gas with carbon capture and storage (CCS), and energy storage technologies such as new and improved batteries and chemical storage in the form of hydrogen. But in the northeastern United States, there is one more possibility being proposed: electricity imported from hydropower plants in the neighboring Canadian province of Quebec.

    The proposition makes sense. Those plants can produce as much electricity as about 40 large nuclear power plants, and some power generated in Quebec already comes to the Northeast. So, there could be abundant additional supply to fill any shortfall when New England’s intermittent renewables underproduce. However, U.S. wind and solar investors view Canadian hydropower as a competitor and argue that reliance on foreign supply discourages further U.S. investment.

    Two years ago, three researchers affiliated with the MIT Center for Energy and Environmental Policy Research (CEEPR) — Emil Dimanchev SM ’18, now a PhD candidate at the Norwegian University of Science and Technology; Joshua Hodge, CEEPR’s executive director; and John Parsons, a senior lecturer in the MIT Sloan School of Management — began wondering whether viewing Canadian hydro as another source of electricity might be too narrow. “Hydropower is a more-than-hundred-year-old technology, and plants are already built up north,” says Dimanchev. “We might not need to build something new. We might just need to use those plants differently or to a greater extent.”

    So the researchers decided to examine the potential role and economic value of Quebec’s hydropower resource in a future low-carbon system in New England. Their goal was to help inform policymakers, utility decision-makers, and others about how best to incorporate Canadian hydropower into their plans and to determine how much time and money New England should spend to integrate more hydropower into its system. What they found out was surprising, even to them.

    The analytical methods

    To explore possible roles for Canadian hydropower to play in New England’s power system, the MIT researchers first needed to predict how the regional power system might look in 2050 — both the resources in place and how they would be operated, given any policy constraints. To perform that analysis, they used GenX, a modeling tool originally developed by Jesse Jenkins SM ’14, PhD ’18 and Nestor Sepulveda SM ’16, PhD ’20 while they were researchers at the MIT Energy Initiative (MITEI).

    The GenX model is designed to support decision-making related to power system investment and real-time operation and to examine the impacts of possible policy initiatives on those decisions. Given information on current and future technologies — different kinds of power plants, energy storage technologies, and so on — GenX calculates the combination of equipment and operating conditions that can meet a defined future demand at the lowest cost. The GenX modeling tool can also incorporate specified policy constraints, such as limits on carbon emissions.
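
    The kind of least-cost choice GenX makes can be illustrated with a toy version of the same idea: a linear program that picks generating capacity and hourly dispatch to meet demand at minimum cost under an emissions cap. Everything in this sketch (two technologies, a three-hour "year," all costs and limits) is invented for illustration; the real GenX model covers many regions, technologies, storage, and policy constraints.

```python
# Toy least-cost capacity-planning LP in the spirit of GenX (illustrative only).
# Decision variables: [cap_gas, cap_wind, g_gas(3 hours), g_wind(3 hours)].
from scipy.optimize import linprog

hours = 3
demand = [50.0, 80.0, 60.0]        # MW of demand in each hour (invented)
wind_avail = [1.0, 0.2, 0.6]       # wind capacity factor per hour (invented)
fixed = [30.0, 50.0]               # annualized $/MW of gas, wind capacity
var = [40.0, 0.0]                  # $/MWh of gas, wind generation

# Objective: capacity costs plus per-hour generation costs
c = fixed + [var[0]] * hours + [var[1]] * hours

A_ub, b_ub = [], []
for t in range(hours):
    # Generation cannot exceed installed (availability-weighted) capacity:
    row = [0.0] * (2 + 2 * hours)
    row[0], row[2 + t] = -1.0, 1.0                     # g_gas[t] <= cap_gas
    A_ub.append(row); b_ub.append(0.0)
    row = [0.0] * (2 + 2 * hours)
    row[1], row[2 + hours + t] = -wind_avail[t], 1.0   # g_wind[t] <= cf*cap_wind
    A_ub.append(row); b_ub.append(0.0)

# Emissions cap: total gas generation (the only emitter here) <= 100 MWh
row = [0.0] * (2 + 2 * hours)
for t in range(hours):
    row[2 + t] = 1.0
A_ub.append(row); b_ub.append(100.0)

# Hourly demand balance: g_gas[t] + g_wind[t] == demand[t]
A_eq, b_eq = [], []
for t in range(hours):
    row = [0.0] * (2 + 2 * hours)
    row[2 + t] = row[2 + hours + t] = 1.0
    A_eq.append(row); b_eq.append(demand[t])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
print(f"gas capacity: {res.x[0]:.1f} MW, wind capacity: {res.x[1]:.1f} MW")
```

    Tightening the emissions cap in a model like this is exactly what forces more renewable capacity (and, at larger scale, more storage and trading) into the optimal mix.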

    For their study, Dimanchev, Hodge, and Parsons set parameters in the GenX model using data and assumptions derived from a variety of sources to build a representation of the interconnected power systems in New England, New York, and Quebec. (They included New York to account for that state’s existing demand on the Canadian hydro resources.) For data on the available hydropower, they turned to Hydro-Québec, the public utility that owns and operates most of the hydropower plants in Quebec.

    It’s standard in such analyses to include real-world engineering constraints on equipment, such as how quickly certain power plants can be ramped up and down. With help from Hydro-Québec, the researchers also put hour-to-hour operating constraints on the hydropower resource.

    Most of Hydro-Québec’s plants are “reservoir hydropower” systems. In them, when power isn’t needed, the flow on a river is restrained by a dam downstream of a reservoir, and the reservoir fills up. When power is needed, the dam is opened, and the water in the reservoir runs through downstream pipes, turning turbines and generating electricity. Proper management of such a system requires adhering to certain operating constraints. For example, to prevent flooding, reservoirs must not be allowed to overfill — especially prior to spring snowmelt. And generation can’t be increased too quickly because a sudden flood of water could erode the river edges or disrupt fishing or water quality.
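
    The operating rules described above amount to bookkeeping on the reservoir volume plus limits on how fast releases may change. A minimal sketch of such constraint checking, with every number invented for illustration (actual Hydro-Québec limits are not public):

```python
# Illustrative reservoir-hydro operating-constraint check (all limits invented).
def simulate_reservoir(inflow, release, v0=500.0, v_max=1000.0,
                       r_max=80.0, ramp=20.0):
    """Return hourly reservoir volumes, raising if an operating rule is broken."""
    volumes, prev_r, v = [], 0.0, v0
    for q_in, r in zip(inflow, release):
        if r > r_max:
            raise ValueError("release exceeds turbine capacity")
        if abs(r - prev_r) > ramp:
            raise ValueError("release ramped too quickly (erosion/fishing risk)")
        v = v + q_in - r              # water balance for this hour
        if v > v_max:
            raise ValueError("reservoir would overfill (flood risk)")
        if v < 0.0:
            raise ValueError("reservoir cannot be drawn below empty")
        volumes.append(v)
        prev_r = r
    return volumes
```

    For example, a steady inflow with gradually increasing releases passes every rule, while a release that jumps abruptly from one hour to the next trips the ramp limit, mirroring the erosion and water-quality concerns noted above.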

    Based on projections from the National Renewable Energy Laboratory and elsewhere, the researchers specified electricity demand for every hour of the year 2050, and the model calculated the cost-optimal mix of technologies and system operating regime that would satisfy that hourly demand, including the dispatch of the Hydro-Québec hydropower system. In addition, the model determined how electricity would be traded among New England, New York, and Quebec.

    Effects of decarbonization limits on technology mix and electricity trading

    To examine the impact of the emissions-reduction mandates in the New England states, the researchers ran the model assuming reductions in carbon emissions between 80 percent and 100 percent relative to 1990 levels. The results of those runs show that, as emissions limits get more stringent, New England uses more wind and solar and extends the lifetime of its existing nuclear plants. To balance the intermittency of the renewables, the region uses natural gas plants, demand-side management, battery storage (modeled as lithium-ion batteries), and trading with Quebec’s hydropower-based system. Meanwhile, the optimal mix in Quebec is mostly composed of existing hydro generation. Some solar is added, but new reservoirs are built only if renewable costs are assumed to be very high.

    The most significant — and perhaps surprising — outcome is that in all the scenarios, the hydropower-based system of Quebec is not only an exporter but also an importer of electricity, with the direction of flow on the Quebec-New England transmission lines changing over time.

    Historically, energy has always flowed from Quebec to New England. The model results for 2018 show electricity flowing from north to south, with the quantity capped by the current transmission capacity limit of 2,225 megawatts (MW).

    An analysis for 2050, assuming that New England decarbonizes 90 percent and the capacity of the transmission lines remains the same, finds electricity flows going both ways. Flows from north to south still dominate. But for nearly 3,500 of the 8,760 hours of the year, electricity flows in the opposite direction — from New England to Quebec. And for more than 2,200 of those hours, the flow going north is at the maximum the transmission lines can carry.

    The direction of flow is motivated by economics. When renewable generation is abundant in New England, prices are low, and it’s cheaper for Quebec to import electricity from New England and conserve water in its reservoirs. Conversely, when New England’s renewables are scarce and prices are high, New England imports hydro-generated electricity from Quebec.

    So rather than delivering electricity, Canadian hydro provides a means of storing the electricity generated by the intermittent renewables in New England.

    “We see this in our modeling because when we tell the model to meet electricity demand using these resources, the model decides that it is cost-optimal to use the reservoirs to store energy rather than anything else,” says Dimanchev. “We should be sending the energy back and forth, so the reservoirs in Quebec are in essence a battery that we use to store some of the electricity produced by our intermittent renewables and discharge it when we need it.”
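
    The back-and-forth Dimanchev describes reduces to a simple price-arbitrage rule: Quebec stores water when New England power is cheap and releases it when power is dear. In this sketch only the 2,225 MW transmission limit comes from the article; the prices and threshold are invented.

```python
# Price-arbitrage sketch of the "reservoir as battery" idea (illustrative only).
TX_CAP = 2225.0  # MW, current Quebec-New England transmission capacity

def schedule_flows(ne_prices, qc_price=30.0):
    """Positive = export from Quebec to New England; negative = import."""
    flows = []
    for p in ne_prices:
        if p > qc_price:          # NE renewables scarce: Quebec discharges
            flows.append(TX_CAP)
        elif p < qc_price:        # NE renewables abundant: Quebec stores water
            flows.append(-TX_CAP)
        else:                     # indifferent: no trade
            flows.append(0.0)
    return flows
```

    Raising `TX_CAP` is the modeling analogue of expanding the transmission lines: it increases the rate at which this "battery" can be charged and discharged.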

    Given that outcome, the researchers decided to explore the impact of expanding the transmission capacity between New England and Quebec. Building transmission lines is always contentious, but what would be the impact if it could be done?

    Their model results show that when transmission capacity is increased from 2,225 MW to 6,225 MW, flows in both directions are greater, and in both cases the flow is at the new maximum for more than 1,000 hours.

    Results of the analysis thus confirm that the economic response to expanded transmission capacity is more two-way trading. To continue the battery analogy, more transmission capacity to and from Quebec effectively increases the rate at which the battery can be charged and discharged.

    Effects of two-way trading on the energy mix

    What impact would the advent of two-way trading have on the mix of energy-generating sources in New England and Quebec in 2050?

    Assuming current transmission capacity, in New England, the change from one-way to two-way trading increases both wind and solar power generation and to a lesser extent nuclear; it also decreases the use of natural gas with CCS. The hydro reservoirs in Canada can provide long-duration storage — over weeks, months, and even seasons — so there is less need for natural gas with CCS to cover any gaps in supply. The level of imports is slightly lower, but now there are also exports. Meanwhile, in Quebec, two-way trading reduces solar power generation, and the use of wind disappears. Exports are roughly the same, but now there are imports as well. Thus, two-way trading reallocates renewables from Quebec to New England, where it’s more economical to install and operate solar and wind systems.

    Another analysis examined the impact on the energy mix of assuming two-way trading plus expanded transmission capacity. For New England, greater transmission capacity allows wind, solar, and nuclear to expand further; natural gas with CCS all but disappears; and both imports and exports increase significantly. In Quebec, solar decreases still further, and both exports and imports of electricity increase.

    Those results assume that the New England power system decarbonizes by 99 percent in 2050 relative to 1990 levels. But at 90 percent and even 80 percent decarbonization levels, the model concludes that natural gas capacity decreases with the addition of new transmission relative to the current transmission scenario. Existing plants are retired, and new plants are not built as they are no longer economically justified. Since natural gas plants are the only source of carbon emissions in the 2050 energy system, the researchers conclude that the greater access to hydro reservoirs made possible by expanded transmission would accelerate the decarbonization of the electricity system.

    Effects of transmission changes on costs

    The researchers also explored how two-way trading with expanded transmission capacity would affect costs in New England and Quebec, assuming 99 percent decarbonization in New England. New England’s savings on fixed costs (investments in new equipment) are largely due to a decreased need to invest in more natural gas with CCS, and its savings on variable costs (operating costs) are due to a reduced need to run those plants. Quebec’s savings on fixed costs come from a reduced need to invest in solar generation. The increase in cost — borne by New England — reflects the construction and operation of the increased transmission capacity. The net benefit for the region is substantial.

    Thus, the analysis shows that everyone wins as transmission capacity increases — and the benefit grows as the decarbonization target tightens. At 99 percent decarbonization, the overall New England-Quebec region pays about $21 per megawatt-hour (MWh) of electricity with today’s transmission capacity but only $18/MWh with expanded transmission. Assuming 100 percent reduction in carbon emissions, the region pays $29/MWh with current transmission capacity and only $22/MWh with expanded transmission.

    Addressing misconceptions

    These results shed light on several misconceptions that policymakers, supporters of renewable energy, and others tend to have.

    The first misconception is that New England renewables and Canadian hydropower are competitors. The modeling results instead show that they’re complementary. When the power systems in New England and Quebec work together as an integrated system, the Canadian reservoirs are used part of the time to store the renewable electricity. And with more access to hydropower storage in Quebec, there’s generally more renewable investment in New England.

    The second misconception arises when policymakers refer to Canadian hydro as a “baseload resource,” which implies a dependable source of electricity — particularly one that supplies power all the time. “Our study shows that by viewing Canadian hydropower as a baseload source of electricity — or indeed a source of electricity at all — you’re not taking full advantage of what that resource can provide,” says Dimanchev. “What we show is that Quebec’s reservoir hydro can provide storage, specifically for wind and solar. It’s a solution to the intermittency problem that we foresee in carbon-free power systems for 2050.”

    While the MIT analysis focuses on New England and Quebec, the researchers believe that their results may have wider implications. As power systems in many regions expand production of renewables, the value of storage grows. Some hydropower systems have storage capacity that has not yet been fully utilized and could be a good complement to renewable generation. Taking advantage of that capacity can lower the cost of deep decarbonization and help move some regions toward a decarbonized supply of electricity.

    This research was funded by the MIT Center for Energy and Environmental Policy Research, which is supported in part by a consortium of industry and government associates.

    This article appears in the Autumn 2021 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • in

    Finding the questions that guide MIT fusion research

    “One of the things I learned was, doing good science isn’t so much about finding the answers as figuring out what the important questions are.”

    As Martin Greenwald retires from the responsibilities of senior scientist and deputy director of the MIT Plasma Science and Fusion Center (PSFC), he reflects on his almost 50 years of science study, 43 of them as a researcher at MIT, pursuing the question of how to make the carbon-free energy of fusion a reality.

    Most of Greenwald’s important questions about fusion began after graduating from MIT with a BS in both physics and chemistry. Beginning graduate work at the University of California at Berkeley, he felt compelled to learn more about fusion as an energy source that could have “a real societal impact.” At the time, researchers were exploring new ideas for devices that could create and confine fusion plasmas. Greenwald worked on Berkeley’s “alternate concept” TORMAC, a Toroidal Magnetic Cusp. “It didn’t work out very well,” he laughs. “The first thing I was known for was making the measurements that shut down the program.”

    Believing the temperature of the plasma generated by the device would not be as high as his group leader expected, Greenwald developed hardware that could measure the low temperatures predicted by his own “back of the envelope calculations.” As he anticipated, his measurements showed that “this was not a fusion plasma; this was hardly a confined plasma at all.”

    With a PhD from Berkeley, Greenwald returned to MIT for a research position at the PSFC, attracted by the center’s “esprit de corps.”

    He arrived in time to participate in the final experiments on Alcator A, the first in a series of tokamaks built at MIT, all characterized by compact size and featuring high-field magnets. The tokamak design was then becoming favored as the most effective route to fusion: its doughnut-shaped vacuum chamber, surrounded by electromagnets, could confine the turbulent plasma long enough, while increasing its heat and density, to make fusion occur.

    Alcator A showed that the energy confinement time improves in relation to increasing plasma density. MIT’s succeeding device, Alcator C, was designed to use higher magnetic fields, boosting expectations that it would reach higher densities and better confinement. To attain these goals, however, Greenwald had to pursue a new technique that increased density by injecting pellets of frozen fuel into the plasma, a method he likens to throwing “snowballs in hell.” This work was notable for the creation of a new regime of enhanced plasma confinement on Alcator C. In those experiments, a confined plasma surpassed for the first time one of the two Lawson criteria — the minimum required value for the product of the plasma density and confinement time — for making net power from fusion. Meeting a Lawson criterion had been a milestone goal for fusion research ever since John Lawson published the criteria in 1957.
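
    For reference, the density-based Lawson criterion mentioned here is commonly quoted for deuterium-tritium fuel as roughly the following (the precise threshold depends on the plasma temperature and the assumptions used):

```latex
% Density-confinement-time Lawson criterion (approximate, D-T fuel)
n \,\tau_E \;\gtrsim\; 10^{20}\ \mathrm{s\,m^{-3}}
```

    where $n$ is the plasma density and $\tau_E$ is the energy confinement time.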

    Greenwald continued to make a name for himself as part of a larger study into the physics of the Compact Ignition Tokamak — a high-field burning plasma experiment that the U.S. program was proposing to build in the late 1980s. The result, unexpectedly, was a new scaling law, later known as the “Greenwald Density Limit,” and a new theory for the mechanism of the limit. It has been used to accurately predict performance on much larger machines built since.

    The center’s next tokamak, Alcator C-Mod, started operation in 1993 and ran for more than 20 years, with Greenwald as the chair of its Experimental Program Committee. Larger than Alcator C, the new device supported a highly shaped plasma, strong radiofrequency heating, and an all-metal plasma-facing first wall. All of these would eventually be required in a fusion power system.

    C-Mod proved to be MIT’s most enduring fusion experiment to date, producing important results for 20 years. During that time Greenwald contributed not only to the experiments, but to mentoring the next generation. Research scientist Ryan Sweeney notes that “Martin quickly gained my trust as a mentor, in part due to his often casual dress and slightly untamed hair, which are embodiments of his transparency and his focus on what matters. He can quiet a room of PhDs and demand attention not by intimidation, but rather by his calmness and his ability to bring clarity to complicated problems, be they scientific or human in nature.”

    Greenwald worked closely with the group of students who, in PSFC Director Dennis Whyte’s class, came up with the tokamak concept that evolved into SPARC. MIT is now pursuing this compact, high-field tokamak with Commonwealth Fusion Systems, a startup that grew out of the collective enthusiasm for this concept, and the growing realization it could work. Greenwald now heads the Physics Group for the SPARC project at MIT. He has helped confirm the device’s physics basis in order to predict performance and guide engineering decisions.

    “Martin’s multifaceted talents are thoroughly embodied by, and imprinted on, SPARC,” says Whyte. “First, his leadership in its plasma confinement physics validation and publication place SPARC on a firm scientific footing. Secondly, the impact of the density limit he discovered, which shows that fuel density increases with magnetic field and decreasing the size of the tokamak, is critical in obtaining high fusion power density not just in SPARC, but in future power plants. Third, and perhaps most impressive, is Martin’s mentorship of the SPARC generation of leadership.”

    Greenwald’s expertise and easygoing personality have made him an asset as head of the PSFC Office for Computer Services and group leader for data acquisition and computing, and a sought-after member of many professional committees. He has been an APS Fellow since 2000, and was an APS Distinguished Lecturer in Plasma Physics (2001-02). He was also presented in 2014 with a Leadership Award from Fusion Power Associates. He is currently an associate editor for Physics of Plasmas and a member of the Lawrence Livermore National Laboratory Physical Sciences Directorate External Review Committee.

    Although leaving his full-time responsibilities, Greenwald will remain at MIT as a visiting scientist, a role he says will allow him to “stick my nose into everything without being responsible for anything.”

    “At some point in the race you have to hand off the baton,” he says. “And it doesn’t mean you’re not interested in the outcome; and it doesn’t mean you’re just going to walk away into the stands. I want to be there at the end when we succeed.”