More stories

  • MIT engineers’ new theory could improve the design and operation of wind farms

    The blades of propellers and wind turbines are designed based on aerodynamics principles that were first described mathematically more than a century ago. But engineers have long realized that these formulas don’t work in every situation. To compensate, they have added ad hoc “correction factors” based on empirical observations.

    Now, for the first time, engineers at MIT have developed a comprehensive, physics-based model that accurately represents the airflow around rotors even under extreme conditions, such as when the blades are operating at high forces and speeds, or are angled in certain directions. The model could improve the way rotors themselves are designed, but also the way wind farms are laid out and operated. The new findings are described today in the journal Nature Communications, in an open-access paper by MIT postdoc Jaime Liew, doctoral student Kirby Heck, and Michael Howland, the Esther and Harold E. Edgerton Assistant Professor of Civil and Environmental Engineering.

    “We’ve developed a new theory for the aerodynamics of rotors,” Howland says. This theory can be used to determine the forces, flow velocities, and power of a rotor, whether that rotor is extracting energy from the airflow, as in a wind turbine, or applying energy to the flow, as in a ship or airplane propeller. “The theory works in both directions,” he says.

    Because the new understanding is a fundamental mathematical model, some of its implications could potentially be applied right away. For example, operators of wind farms must constantly adjust a variety of parameters, including the orientation of each turbine as well as its rotation speed and the angle of its blades, in order to maximize power output while maintaining safety margins. The new model can provide a simple, speedy way of optimizing those factors in real time.

    “This is what we’re so excited about, is that it has immediate and direct potential for impact across the value chain of wind power,” Howland says.

    Modeling the momentum

    Known as momentum theory, the previous model of how rotors interact with their fluid environment — air, water, or otherwise — was initially developed late in the 19th century. With this theory, engineers can start with a given rotor design and configuration, and determine the maximum amount of power that can be derived from that rotor — or, conversely, if it’s a propeller, how much power is needed to generate a given amount of propulsive force.

    Momentum theory equations “are the first thing you would read about in a wind energy textbook, and are the first thing that I talk about in my classes when I teach about wind power,” Howland says. From that theory, physicist Albert Betz calculated in 1920 the maximum amount of energy that could theoretically be extracted from wind. Known as the Betz limit, this amount is 59.3 percent of the kinetic energy of the incoming wind.

    But just a few years later, others found that the momentum theory broke down “in a pretty dramatic way” at higher forces that correspond to faster blade rotation speeds or different blade angles, Howland says. It fails to predict not only the amount, but even the direction of changes in thrust force at higher rotation speeds or different blade angles: Whereas the theory said the force should start going down above a certain rotation speed or blade angle, experiments show the opposite — that the force continues to increase. “So, it’s not just quantitatively wrong, it’s qualitatively wrong,” Howland says.

    The theory also breaks down when there is any misalignment between the rotor and the airflow, which Howland says is “ubiquitous” on wind farms, where turbines are constantly adjusting to changes in wind directions. In fact, in an earlier paper in 2022, Howland and his team found that deliberately misaligning some turbines slightly relative to the incoming airflow within a wind farm significantly improves the overall power output of the wind farm by reducing wake disturbances to the downstream turbines.

    In the past, when designing the profile of rotor blades, the layout of wind turbines in a farm, or the day-to-day operation of wind turbines, engineers have relied on ad hoc adjustments added to the original mathematical formulas, based on some wind tunnel tests and experience with operating wind farms, but with no theoretical underpinnings.

    Instead, to arrive at the new model, the team analyzed the interaction of airflow and turbines using detailed computational modeling of the aerodynamics. They found that, for example, the original model had assumed that a drop in air pressure immediately behind the rotor would rapidly return to normal ambient pressure just a short way downstream. But it turns out, Howland says, that as the thrust force keeps increasing, “that assumption is increasingly inaccurate.”

    And the inaccuracy occurs very close to the point of the Betz limit that theoretically predicts the maximum performance of a turbine — and therefore is just the desired operating regime for the turbines. “So, we have Betz’s prediction of where we should operate turbines, and within 10 percent of that operational set point that we think maximizes power, the theory completely deteriorates and doesn’t work,” Howland says.

    Through their modeling, the researchers also found a way to compensate for the original formula’s reliance on one-dimensional modeling that assumed the rotor was always precisely aligned with the airflow. To do so, they used fundamental equations that were developed to predict the lift of three-dimensional wings for aerospace applications.

    The researchers derived their new model, which they call a unified momentum model, based on theoretical analysis, and then validated it using computational fluid dynamics modeling. In follow-up work not yet published, they are doing further validation using wind tunnel and field tests.

    Fundamental understanding

    One interesting outcome of the new formula is that it changes the calculation of the Betz limit, showing that it’s possible to extract a bit more power than the original formula predicted. Although it’s not a significant change — on the order of a few percent — “it’s interesting that now we have a new theory, and the Betz limit that’s been the rule of thumb for a hundred years is actually modified because of the new theory,” Howland says. “And that’s immediately useful.” The new model shows how to maximize power from turbines that are misaligned with the airflow, which the Betz limit cannot account for.

    The aspects related to controlling both individual turbines and arrays of turbines can be implemented without requiring any modifications to existing hardware in place within wind farms. In fact, this has already happened, based on earlier work from Howland and his collaborators two years ago that dealt with the wake interactions between turbines in a wind farm, and was based on the existing, empirically based formulas.

    “This breakthrough is a natural extension of our previous work on optimizing utility-scale wind farms,” he says, because in doing that analysis, they saw the shortcomings of the existing methods for analyzing the forces at work and predicting power produced by wind turbines. “Existing modeling using empiricism just wasn’t getting the job done,” he says.

    In a wind farm, individual turbines will sap some of the energy available to neighboring turbines, because of wake effects. Accurate wake modeling is important both for designing the layout of turbines in a wind farm, and also for the operation of that farm, determining moment to moment how to set the angles and speeds of each turbine in the array.

    Until now, Howland says, even the operators of wind farms, the manufacturers, and the designers of the turbine blades had no way to predict how much the power output of a turbine would be affected by a given change such as its angle to the wind without using empirical corrections. “That’s because there was no theory for it. So, that’s what we worked on here. Our theory can directly tell you, without any empirical corrections, for the first time, how you should actually operate a wind turbine to maximize its power,” he says.

    Because the fluid flow regimes are similar, the model also applies to propellers, whether for aircraft or ships, and also for hydrokinetic turbines such as tidal or river turbines. Although they didn’t focus on that aspect in this research, “it’s in the theoretical modeling naturally,” he says.

    The new theory exists in the form of a set of mathematical formulas that a user could incorporate in their own software, or as an open-source software package that can be freely downloaded from GitHub. “It’s an engineering model developed for fast-running tools for rapid prototyping and control and optimization,” Howland says. “The goal of our modeling is to position the field of wind energy research to move more aggressively in the development of the wind capacity and reliability necessary to respond to climate change.”

    The work was supported by the National Science Foundation and Siemens Gamesa Renewable Energy.
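
    For readers who want the numbers behind the Betz limit quoted above, here is a minimal Python sketch of the classical one-dimensional momentum theory, the textbook model whose breakdown motivated this work. It is not the team’s unified momentum model; that is the open-source package mentioned above.

        # Classical 1-D momentum (actuator-disk) theory. The axial induction
        # factor a is the fractional slowdown of the wind at the rotor plane.
        def power_coefficient(a):
            return 4.0 * a * (1.0 - a) ** 2   # C_p(a)

        def thrust_coefficient(a):
            return 4.0 * a * (1.0 - a)        # C_T(a)

        # Scan a to locate the optimum the article quotes as the Betz limit.
        best_a = max((i / 1000.0 for i in range(501)), key=power_coefficient)
        print(f"Betz optimum: a = {best_a:.3f}, C_p = {power_coefficient(best_a):.3f}")
        # Prints a = 0.333 and C_p = 0.593, i.e., the 59.3 percent Betz limit.
        # At higher induction (higher thrust), this simple model is where the
        # ad hoc empirical "correction factors" have traditionally been applied.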

  • Going Dutch on climate

    When MIT senior Rudiba Laiba saw that stores in the Netherlands eschewed plastic bags to save the planet, her first thought was, “that doesn’t happen in Bangladesh.”

    Laiba is one of eight MIT students who traveled to the Netherlands in June as part of an MIT Energy Initiative (MITEI)-sponsored trip to experience first-hand the country’s approach to the energy transition. The Netherlands aims to be carbon neutral by 2050, making it one of the top 10 countries leading the charge on climate change, according to U.S. News and World Report.

    MITEI sponsored the week-long trip to allow undergraduate and graduate students to collaboratively explore clean energy efforts with researchers, corporate leaders, and nongovernmental organizations. The students heard about projects ranging from creating hydrogen pipelines in the North Sea to climate-proofing a fuel-guzzling, asphalt-dense neighborhood.

    Felipe Abreu from Kissimmee, Florida, a rising second-year student studying materials science and engineering, is working this summer on ways to melt and reuse metal scraps discarded in manufacturing processes. “When MITEI put out this notice about visiting the Netherlands, I wanted to see if there were more advanced approaches to renewable energy that I’d never been exposed to,” Abreu says.

    Laiba notes that her native Bangladesh has not yet achieved the Netherlands’ nearly universal buy-in to tackling climate change, even though this South Asian country, like the Netherlands, is particularly vulnerable to rising sea levels due to topography and high population density.

    Laiba, who spent part of her childhood in New York City and lived in Bangladesh from ages 8 to 18, calls Bangladesh “on the front lines of climate change.”

    “Even if I didn’t want to care about climate change, I had to, because I would see the effects of it,” she says.

    Key players

    The MIT students conducted hands-on exercises on how to switch from traditional energy sources to zero-carbon technologies. “We talked a lot about infrastructure, particularly how to repurpose natural gas infrastructure for hydrogen,” says Antje Danielson, director of education at MITEI, who led the trip with Em Schule, MITEI research and programming assistant. “The students were challenged to grapple with real-world decision-making.”

    The northern section of the Netherlands is known as the “hydrogen valley” of Europe. At the University of Groningen and Hanze University School of Applied Sciences, also in Groningen, the students heard about how the region profiles itself as a world capital for the energy transition through its push toward a hydrogen-based economy and its state-of-the-art global climate models.

    Erick Liang, a rising junior from Boston’s Roslindale neighborhood pursuing a dual major in nuclear science and engineering and physics, was intrigued by a massive wind farm in the port city of Eemshaven, one of the group’s first stops in the north of the country. “It was impressive as an engineering challenge, because they must have figured out ways to cheaply and effectively manufacture all these wind turbines,” he says.

    They visited German energy company RWE, which is generating 15 percent of Eemshaven’s electricity from biomass, replacing coal.

    Laiba, who is majoring in molecular biology and electrical engineering and computer science with a minor in business management, was intrigued by a presentation on biofuels. “It piqued my interest to see if they would use biomass on a large scale” because of the challenges and unpredictability associated with it as a fuel source.

    In Paddepoel, the students toured the first of several neighborhoods that once lacked greenery and used fossil fuel-based heating systems and now aim to generate more energy than they consume.

    “The students got to see what the size of the district heating pipes would be, and how they go through people’s gardens into the houses. We talked about the physical impact on the neighborhood of installing these pipes, as well as the potential social and political implications connected to a really difficult transition like this,” Danielson says.

    Going green

    Green hydrogen promises to be a key player in the energy transition, and Netherlands officials say they have committed to the new infrastructure and business models needed to move ahead with hydrogen as a fuel source.

    The students explored how green hydrogen differs from fossil fuel-generated hydrogen. They saw how Dutch companies grappled with siting hydrogen production facilities and handling hydrogen as a gas, which, unlike natural gas, does not yet have a detectable artificial odor. The students heard from energy network operator Gasunie about the science and engineering behind repurposing existing natural gas pipelines for a hydrogen network in the North Sea, and were challenged to solve the puzzle of combining hydrogen production with offshore wind energy. In the port of Rotterdam, they saw how the startup Battolyser Systems — which is working with Delft University of Technology on an electrolysis device that splits water into hydrogen and oxygen and doubles as a battery — is transitioning from lab bench to market.

    Laiba was impressed by how much capital was going into high-risk ventures and startups, “not only because they’re trying to make something revolutionary, but also because society needs to accept and use” their products.

    Abreu says that at Battolyser Systems, “I saw people my age on the forefront of green hydrogen, trying to make a difference.”

    The students visited the Global Center on Adaptation’s carbon-neutral floating offices and learned how this international organization supports climate adaptation actions and mitigation practices around the world.

    Also in Rotterdam, international marine contractor Van Oord took students to view a ship that installs wind turbines and explained how its new technology reduces the sound shockwave impact of the installations on marine life.

    At the Port of Rotterdam, the students heard about the challenges faced by Europe’s largest port in terms of global shipping and choosing the fuels of the future. The speaker tasked the MIT students with coming up with a plan to transition the privately owned, owner-inhabited barges that ply the region’s inland waterways to a zero-carbon system.

    “The Port Authority uses this exercise to illustrate the enormous complexity faced by companies in the energy transition,” Danielson says. “The fact that our students performed really well on the spot shows that we are doing something right at MIT.”

    Defining a path forward

    Liang, Abreu, and Laiba were struck by how the Netherlands has come together as a country over climate change. “In the U.S., a lot of people disagree with the concept of climate change as a whole,” Liang says. “But in the Netherlands, everyone is on the same page that this is an issue that we should be working toward. They’re capable of seeing a path forward and trying to take action whenever possible.”

    Liang, a member of the MIT Solar Electric Vehicle Team, is doing undergraduate research sponsored by MITEI this summer, working to accelerate fusion manufacturing and development at the MIT Plasma Science and Fusion Center. He’s improving 3D printing processes to manufacture components that can accommodate the high temperatures and small space within a tokamak reactor, which uses magnetic fields to confine plasma and produce controlled thermonuclear fusion.

    “I personally would like to try finding a new solution” to achieving carbon neutrality, he says. That solution, to Liang, is fusion energy, with some entities hoping to demonstrate net energy gain through fusion in the next five years.

    Laiba is a researcher with the MIT Office of Sustainability, looking at ways to quantify and reduce the level of MIT’s Scope 3 greenhouse gas emissions. Scope 3 emissions are tied to the purchase of goods that use fossil fuels in their manufacture. She says, “Whatever I decide to do in the future will involve making a more sustainable future. And to me, renewable energy is the driving force behind that.”

    In the Netherlands, she says, “what we learned through the entire trip was that renewable energy powers the country to a large amount. Things I could see tangibly was Starbucks having paper cups even for our iced drinks, which I think would flop very hard in the U.S. I don’t think society’s ready for that yet.”

    Abreu says, “In America, sustainability has always been in the back seat while other things take the forefront. So going to a country where everybody you talk to has a stake (in sustainability) and actually cares, and they’re all pushing together for this common goal, it was inspiring. It gave me hope.”

  • Making the clean energy transition work for everyone

    The clean energy transition is already underway, but how do we make sure it happens in a manner that is affordable, sustainable, and fair for everyone?

    That was the overarching question at this year’s MIT Energy Conference, which took place March 11 and 12 in Boston and was titled “Short and Long: A Balanced Approach to the Energy Transition.”

    Each year, the student-run conference brings together leaders in the energy sector to discuss the progress and challenges they see in their work toward a greener future. Participants come from research, industry, government, academia, and the investment community to network and exchange ideas over two whirlwind days of keynote talks, fireside chats, and panel discussions.

    Several participants noted that clean energy technologies are already cost-competitive with fossil fuels, but changing the way the world works requires more than just technology.

    “None of this is easy, but I think developing innovative new technologies is really easy compared to the things we’re talking about here, which is how to blend social justice, soft engineering, and systems thinking that puts people first,” Daniel Kammen, a distinguished professor of energy at the University of California at Berkeley, said in a keynote talk. “While clean energy has a long way to go, it is more than ready to transition us from fossil fuels.”

    The event also featured a keynote discussion between MIT President Sally Kornbluth and MIT’s Kyocera Professor of Ceramics Yet-Ming Chiang, in which Kornbluth discussed her first year at MIT as well as a recently announced, campus-wide effort to solve critical climate problems known as the Climate Project at MIT.

    “The reason I wanted to come to MIT was I saw that MIT has the potential to solve the world’s biggest problems, and first among those for me was the climate crisis,” Kornbluth said. “I’m excited about where we are, I’m excited about the enthusiasm of the community, and I think we’ll be able to make really impactful discoveries through this project.”

    Fostering new technologies

    Several panels convened experts in new or emerging technology fields to discuss what it will take for their solutions to contribute to deep decarbonization.

    “The fun thing and challenging thing about first-of-a-kind technologies is they’re all kind of different,” said Jonah Wagner, principal assistant director for industrial innovation and clean energy in the U.S. Office of Science and Technology Policy. “You can map their growth against specific challenges you expect to see, but every single technology is going to face their own challenges, and every single one will have to defy an engineering barrier to get off the ground.”

    Among the emerging technologies discussed was next-generation geothermal energy, which uses new techniques to extract heat from the Earth’s crust in new places.

    A promising aspect of the technology is that it can leverage existing infrastructure and expertise from the oil and gas industry. Many newly developed techniques for geothermal production, for instance, use the same drills and rigs as those used for hydraulic fracturing.

    “The fact that we have a robust ecosystem of oil and gas labor and technology in the U.S. makes innovation in geothermal much more accessible compared to some of the challenges we’re seeing in nuclear or direct-air capture, where some of the supply chains are disaggregated around the world,” said Gabrial Malek, chief of staff at the geothermal company Fervo Energy.

    Another technology generating excitement — if not net energy quite yet — is fusion, the process of combining, or fusing, light atoms together to form heavier ones for a net energy gain, the same process that powers the sun. MIT spinout Commonwealth Fusion Systems (CFS) has already validated many aspects of its approach for achieving fusion power, and the company’s unique partnership with MIT was discussed in a panel on the industry’s progress.

    “We’re standing on the shoulders of decades of research from the scientific community, and we want to maintain those ties even as we continue developing our technology,” CFS Chief Science Officer Brandon Sorbom PhD ’17 said, noting that CFS is one of the largest company sponsors of research at MIT and collaborates with institutions around the world. “Engaging with the community is a really valuable lever to get new ideas and to sanity check our own ideas.”

    Sorbom said that as CFS advances fusion energy, the company is thinking about how it can replicate its processes to lower costs and maximize the technology’s impact around the planet.

    “For fusion to work, it has to work for everyone,” Sorbom said. “I think the affordability piece is really important. We can’t just build this technological jewel that only one class of nations can afford. It has to be a technology that can be deployed throughout the entire world.”

    The event also gave students — many from MIT — a chance to learn more about careers in energy and featured a startup showcase, in which dozens of companies displayed their energy and sustainability solutions.

    “More than 700 people are here from every corner of the energy industry, so there are so many folks to connect with and help me push my vision into reality,” says GreenLIB CEO Fred Rostami, whose company recycles lithium-ion batteries. “The good thing about the energy transition is that a lot of these technologies and industries overlap, so I think we can enable this transition by working together at events like this.”

    A focused climate strategy

    Kornbluth noted that when she came to MIT, a large percentage of students and faculty were already working on climate-related technologies. With the Climate Project at MIT, she wanted to help ensure the whole of those efforts is greater than the sum of its parts.

    The project is organized around six distinct missions, including decarbonizing energy and industry, empowering frontline communities, and building healthy, resilient cities. Kornbluth says the mission areas will help MIT community members collaborate around multidisciplinary challenges. Her team, which includes a committee of faculty advisors, has begun to search for the leads of each mission area, and Kornbluth said she is planning to appoint a vice president for climate at the Institute.

    “I want someone who has the purview of the whole Institute and will report directly to me to help make sure this project stays on track,” Kornbluth explained.

    In his conversation about the initiative with Kornbluth, Yet-Ming Chiang said projects will be funded based on their potential to reduce emissions and make the planet more sustainable at scale.

    “Projects should be very high risk, with very high impact,” Chiang explained. “They should have a chance to prove themselves, and those efforts should not be limited by resources, only by time.”

    In discussing her vision of the climate project, Kornbluth alluded to the “short and long” theme of the conference.

    “It’s about balancing research and commercialization,” Kornbluth said. “The climate project has a very variable timeframe, and I think universities are the sector that can think about the things that might be 30 years out. We have to think about the incentives across the entire innovation pipeline and how we can keep an eye on the long term while making sure the short-term things get out rapidly.”

  • Cutting carbon emissions on the US power grid

    To help curb climate change, the United States is working to reduce carbon emissions from all sectors of the energy economy. Much of the current effort involves electrification — switching to electric cars for transportation, electric heat pumps for home heating, and so on. But in the United States, the electric power sector already generates about a quarter of all carbon emissions. “Unless we decarbonize our electric power grids, we’ll just be shifting carbon emissions from one source to another,” says Amanda Farnsworth, a PhD candidate in chemical engineering and research assistant at the MIT Energy Initiative (MITEI).

    But decarbonizing the nation’s electric power grids will be challenging. The availability of renewable energy resources such as solar and wind varies in different regions of the country. Likewise, patterns of energy demand differ from region to region. As a result, the least-cost pathway to a decarbonized grid will differ from one region to another.

    Over the past two years, Farnsworth and Emre Gençer, a principal research scientist at MITEI, developed a power system model that would allow them to investigate the importance of regional differences — and would enable experts and laypeople alike to explore their own regions and make informed decisions about the best way to decarbonize. “With this modeling capability you can really understand regional resources and patterns of demand, and use them to do a ‘bespoke’ analysis of the least-cost approach to decarbonizing the grid in your particular region,” says Gençer.

    To demonstrate the model’s capabilities, Gençer and Farnsworth performed a series of case studies. Their analyses confirmed that strategies must be designed for specific regions and that all the costs and carbon emissions associated with manufacturing and installing solar and wind generators must be included for accurate accounting. But the analyses also yielded some unexpected insights, including a correlation between a region’s wind energy and the ease of decarbonizing, and the important role of nuclear power in decarbonizing the California grid.

    A novel model

    For many decades, researchers have been developing “capacity expansion models” to help electric utility planners tackle the problem of designing power grids that are efficient, reliable, and low-cost. More recently, many of those models also factor in the goal of reducing or eliminating carbon emissions. While those models can provide interesting insights relating to decarbonization, Gençer and Farnsworth believe they leave some gaps that need to be addressed.

    For example, most focus on conditions and needs in a single U.S. region without highlighting the unique peculiarities of their chosen area of focus. Hardly any consider the carbon emitted in fabricating and installing such “zero-carbon” technologies as wind turbines and solar panels. And finally, most of the models are challenging to use. Even experts in the field must search out and assemble various complex datasets in order to perform a study of interest.

    Gençer and Farnsworth’s capacity expansion model — called Ideal Grid, or IG — addresses those and other shortcomings. IG is built within the framework of MITEI’s Sustainable Energy System Analysis Modeling Environment (SESAME), an energy system modeling platform that Gençer and his colleagues at MITEI have been developing since 2017. SESAME models the levels of greenhouse gas emissions from multiple, interacting energy sectors in future scenarios.

    Importantly, SESAME includes both techno-economic analyses and life-cycle assessments of various electricity generation and storage technologies. It thus considers costs and emissions incurred at each stage of the life cycle (manufacture, installation, operation, and retirement) for all generators. Most capacity expansion models only account for emissions from operation of fossil fuel-powered generators. As Farnsworth notes, “While this is a good approximation for our current grid, emissions from the full life cycle of all generating technologies become non-negligible as we transition to a highly renewable grid.”

    Through its connection with SESAME, the IG model has access to data on costs and emissions associated with many technologies critical to power grid operation. To explore regional differences in the cost-optimized decarbonization strategies, the IG model also includes conditions within each region, notably details on demand profiles and resource availability.

    In one recent study, Gençer and Farnsworth selected nine of the standard North American Electric Reliability Corporation (NERC) regions. For each region, they incorporated hourly electricity demand into the IG model. Farnsworth also gathered meteorological data for the nine U.S. regions for seven years — 2007 to 2013 — and calculated hourly power output profiles for the renewable energy sources, including solar and wind, taking into account the geography-limited maximum capacity of each technology.

    The availability of wind and solar resources differs widely from region to region. To permit a quick comparison, the researchers use a measure called “annual capacity factor,” which is the ratio between the electricity produced by a generating unit in a year and the electricity that could have been produced if that unit operated continuously at full power for that year. Values for the capacity factors in the nine U.S. regions vary between 20 percent and 30 percent for solar power and between 25 percent and 45 percent for wind.
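
    As a concrete illustration of that definition, a capacity factor can be computed directly from a year of hourly output. This is a minimal sketch with made-up numbers, not values from the study:

        def annual_capacity_factor(hourly_output_mwh, nameplate_mw):
            """Energy actually produced over a year divided by the energy the
            unit would have produced running at full nameplate power all year."""
            produced = sum(hourly_output_mwh)                  # MWh actually generated
            potential = nameplate_mw * len(hourly_output_mwh)  # MWh at continuous full power
            return produced / potential

        # Example: a 100 MW wind plant averaging 35 MWh per hour over 8,760 hours
        # has a capacity factor of 0.35, i.e., 35 percent.
        print(annual_capacity_factor([35.0] * 8760, 100.0))    # 0.35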

    Calculating optimized grids for different regions

    For their first case study, Gençer and Farnsworth used the IG model to calculate cost-optimized regional grids to meet defined caps on carbon dioxide (CO2) emissions. The analyses were based on cost and emissions data for 10 technologies: nuclear, wind, solar, three types of natural gas, three types of coal, and energy storage using lithium-ion batteries. Hydroelectric was not considered in this study because there was no comprehensive study outlining potential expansion sites with their respective costs and expected power output levels.

    To make region-to-region comparisons easy, the researchers used several simplifying assumptions. Their focus was on electricity generation, so the model calculations assume the same transmission and distribution costs and efficiencies for all regions. Also, the calculations did not consider the generator fleet currently in place. The goal was to investigate what happens if each region were to start from scratch and generate an “ideal” grid.

    To begin, Gençer and Farnsworth calculated the most economic combination of technologies for each region if it limits its total carbon emissions to 100, 50, and 25 grams of CO2 per kilowatt-hour (kWh) generated. For context, the current U.S. average emissions intensity is 386 grams of CO2 per kWh.

    Given the wide variation in regional demand, the researchers needed to use a new metric to normalize their results and permit a one-to-one comparison between regions. Accordingly, the model calculates the required generating capacity divided by the average demand for each region. The required capacity accounts for both the variation in demand and the inability of generating systems — particularly solar and wind — to operate at full capacity all of the time.
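
    A minimal sketch of that normalization, again with illustrative numbers rather than IG model output:

        def capacity_per_average_demand(installed_capacity_mw, hourly_demand_mw):
            """Required installed capacity divided by the region's average demand,
            so regions of very different sizes can be compared directly."""
            average_demand = sum(hourly_demand_mw) / len(hourly_demand_mw)
            return installed_capacity_mw / average_demand

        # A region averaging 10,000 MW of demand that needs 25,000 MW of installed
        # generation (to ride through demand peaks and solar/wind lulls) scores 2.5.
        print(capacity_per_average_demand(25_000, [10_000] * 8760))  # 2.5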

    The analysis was based on regional demand data for 2021 — the most recent data available. And for each region, the model calculated the cost-optimized power grid seven times, using weather data from seven years. This discussion focuses on mean values for cost and total capacity installed and also total values for coal and for natural gas, although the analysis considered three separate technologies for each fuel.

    The results of the analyses confirm that there’s a wide variation in the cost-optimized system from one region to another. Most notable is that some regions require a lot of energy storage while others don’t require any at all. The availability of wind resources turns out to play an important role, while the use of nuclear is limited: the carbon intensity of nuclear (including uranium mining and transportation) is lower than that of either solar or wind, but nuclear is the most expensive technology option, so it’s added only when necessary. Finally, the change in the CO2 emissions cap brings some interesting responses.

    Under the most lenient limit on emissions — 100 grams of CO2 per kWh — there’s no coal in the mix anywhere. It’s the first to go, in general being replaced by the lower-carbon-emitting natural gas. Texas, Central, and North Central — the regions with the most wind — don’t need energy storage, while the other six regions do. The regions with the least wind — California and the Southwest — have the highest energy storage requirements. Unlike the other regions modeled, California begins installing nuclear, even at the most lenient limit.

    As the model plays out, under the moderate cap — 50 grams of CO2 per kWh — most regions bring in nuclear power. California and the Southeast — regions with low wind capacity factors — rely on nuclear the most. In contrast, wind-rich Texas, Central, and North Central don’t incorporate nuclear yet but instead add energy storage — a less-expensive option — to their mix. There’s still a bit of natural gas everywhere, in spite of its CO2 emissions.

    Under the most restrictive cap — 25 grams of CO2 per kWh — nuclear is in the mix everywhere. The highest use of nuclear is again correlated with low wind capacity factor. Central and North Central depend on nuclear the least. All regions continue to rely on a little natural gas to keep prices from skyrocketing due to the necessary but costly nuclear component. With nuclear in the mix, the need for storage declines in most regions.

    Results of the cost analysis are also interesting. Texas, Central, and North Central all have abundant wind resources, and they can delay incorporating the costly nuclear option, so the cost of their optimized system tends to be lower than costs for the other regions. In addition, their total capacity deployment — including all sources — tends to be lower than for the other regions. California and the Southwest both rely heavily on solar, and in both regions, costs and total deployment are relatively high.

    Lessons learned

    One unexpected result is the benefit of combining solar and wind resources. The problem with relying on solar alone is obvious: “Solar energy is available only five or six hours a day, so you need to build a lot of other generating sources and abundant storage capacity,” says Gençer. But an analysis of unit-by-unit operations at an hourly resolution yielded a less-intuitive trend: While solar installations only produce power in the midday hours, wind turbines generate the most power in the nighttime hours. As a result, solar and wind power are complementary. Having both resources available is far more valuable than having either one or the other. And having both impacts the need for storage, says Gençer: “Storage really plays a role either when you’re targeting a very low carbon intensity or where your resources are mostly solar and they’re not complemented by wind.”

    Gençer notes that the target for the U.S. electricity grid is to reach net zero by 2035. But the analysis showed that reaching just 100 grams of CO2 per kWh would require at least 50 percent of system capacity to be wind and solar. “And we’re nowhere near that yet,” he says.

    Indeed, Gençer and Farnsworth’s analysis doesn’t even include a zero emissions case. Why not? As Gençer says, “We cannot reach zero.” Wind and solar are usually considered to be net zero, but that’s not true. Wind, solar, and even storage have embedded carbon emissions due to materials, manufacturing, and so on. “To go to true net zero, you’d need negative emission technologies,” explains Gençer, referring to techniques that remove carbon from the air or ocean. That observation confirms the importance of performing life-cycle assessments.

    Farnsworth voices another concern: Coal quickly disappears in all regions because natural gas is an easy substitute for coal and has lower carbon emissions. “People say they’ve decreased their carbon emissions by a lot, but most have done it by transitioning from coal to natural gas power plants,” says Farnsworth. “But with that pathway for decarbonization, you hit a wall. Once you’ve transitioned from coal to natural gas, you’ve got to do something else. You need a new strategy — a new trajectory to actually reach your decarbonization target, which most likely will involve replacing the newly installed natural gas plants.”

    Gençer makes one final point: The availability of cheap nuclear — whether fission or fusion — would completely change the picture. When the tighter caps require the use of nuclear, the cost of electricity goes up. “The impact is quite significant,” says Gençer. “When we go from 100 grams down to 25 grams of CO2 per kWh, we see a 20 percent to 30 percent increase in the cost of electricity.” If it were available, a less-expensive nuclear option would likely be included in the technology mix under more lenient caps, significantly reducing the cost of decarbonizing power grids in all regions.

    The special case of California

    In another analysis, Gençer and Farnsworth took a closer look at California. In California, about 10 percent of total demand is now met with nuclear power. Yet the state’s existing nuclear plants are scheduled for retirement very soon, and a 1976 law forbids the construction of new nuclear plants. (The state recently extended the lifetime of one nuclear plant to prevent the grid from becoming unstable.) “California is very motivated to decarbonize their grid,” says Farnsworth. “So how difficult will that be without nuclear power?”

    To find out, the researchers performed a series of analyses to investigate the challenge of decarbonizing in California with nuclear power versus without it. At 200 grams of CO2 per kWh — about a 50 percent reduction — the optimized mix and cost look the same with and without nuclear. Nuclear doesn’t appear due to its high cost. At 100 grams of CO2 per kWh — about a 75 percent reduction — nuclear does appear in the cost-optimized system, reducing the total system capacity while having little impact on the cost.

    But at 50 grams of CO2 per kWh, the ban on nuclear makes a significant difference. “Without nuclear, there’s about a 45 percent increase in total system size, which is really quite substantial,” says Farnsworth. “It’s a vastly different system, and it’s more expensive.” Indeed, the cost of electricity would increase by 7 percent.

    Going one step further, the researchers performed an analysis to determine the most decarbonized system possible in California. Without nuclear, the state could reach 40 grams of CO2 per kWh. “But when you allow for nuclear, you can get all the way down to 16 grams of CO2 per kWh,” says Farnsworth. “We found that California needs nuclear more than any other region due to its poor wind resources.”

    Impacts of a carbon tax

    One more case study examined a policy approach to incentivizing decarbonization. Instead of imposing a ceiling on carbon emissions, this strategy would tax every ton of carbon that’s emitted. Proposed taxes range from zero to $100 per ton.

    To investigate the effectiveness of different levels of carbon tax, Farnsworth and Gençer used the IG model to calculate the minimum-cost system for each region, assuming a certain cost for emitting each ton of carbon. The analyses show that a low carbon tax — just $10 per ton — significantly reduces emissions in all regions by phasing out all coal generation. In the Northwest region, for example, a carbon tax of $10 per ton decreases system emissions by 65 percent while increasing system cost by just 2.8 percent (relative to an untaxed system).

    After coal has been phased out of all regions, every increase in the carbon tax brings a slow but steady linear decrease in emissions and a linear increase in cost. But the rates of those changes vary from region to region. For example, the rate of decrease in emissions for each added tax dollar is far lower in the Central region than in the Northwest, largely due to the Central region’s already low emissions intensity without a carbon tax. Indeed, the Central region without a carbon tax has a lower emissions intensity than the Northwest region with a tax of $100 per ton.

    As Farnsworth summarizes, “A low carbon tax — just $10 per ton — is very effective in quickly incentivizing the replacement of coal with natural gas. After that, it really just incentivizes the replacement of natural gas technologies with more renewables and more energy storage.” She concludes, “If you’re looking to get rid of coal, I would recommend a carbon tax.”

    Future extensions of IG

    The researchers have already added hydroelectric to the generating options in the IG model, and they are now planning further extensions. For example, they will include additional regions for analysis, add other long-term energy storage options, and make changes that allow analyses to take into account the generating infrastructure that already exists. Also, they will use the model to examine the cost and value of interregional transmission to take advantage of the diversity of available renewable resources.

    Farnsworth emphasizes that the analyses reported here are just samples of what’s possible using the IG model. The model is a web-based tool that includes embedded data covering the whole United States, and the output from an analysis includes an easy-to-understand display of the required installations, hourly operation, and overall techno-economic analysis and life-cycle assessment results. “The user is able to go in and explore a vast number of scenarios with no data collection or pre-processing,” she says. “There’s no barrier to begin using the tool. You can just hop on and start exploring your options so you can make an informed decision about the best path forward.”

    This work was supported by the International Energy Agency Gas and Oil Technology Collaboration Program and the MIT Energy Initiative Low-Carbon Energy Centers.

    This article appears in the Winter 2024 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • Power when the sun doesn’t shine

    In 2016, at the huge Houston energy conference CERAWeek, MIT materials scientist Yet-Ming Chiang found himself talking to a Tesla executive about a thorny problem: how to store the output of solar panels and wind turbines for long durations.        

    Chiang, the Kyocera Professor of Materials Science and Engineering, and Mateo Jaramillo, a vice president at Tesla, knew that utilities lacked a cost-effective way to store renewable energy to cover peak levels of demand and to bridge the gaps during windless and cloudy days. They also knew that the scarcity of raw materials used in conventional energy storage devices needed to be addressed if renewables were ever going to displace fossil fuels on the grid at scale.

    Energy storage technologies can facilitate access to renewable energy sources, boost the stability and reliability of power grids, and ultimately accelerate grid decarbonization. The global market for these systems — essentially large batteries — is expected to grow tremendously in the coming years. A study by the nonprofit LDES (Long Duration Energy Storage) Council pegs the long-duration energy storage market at between 80 and 140 terawatt-hours by 2040. “That’s a really big number,” Chiang notes. “Every 10 people on the planet will need access to the equivalent of one EV [electric vehicle] battery to support their energy needs.”

    In 2017, one year after they met in Houston, Chiang and Jaramillo joined forces to co-found Form Energy in Somerville, Massachusetts, with MIT graduates Marco Ferrara SM ’06, PhD ’08 and William Woodford PhD ’13, and energy storage veteran Ted Wiley.

    “There is a burgeoning market for electrical energy storage because we want to achieve decarbonization as fast and as cost-effectively as possible,” says Ferrara, Form’s senior vice president in charge of software and analytics.

    Investors agreed. Over the next six years, Form Energy would raise more than $800 million in venture capital.

    Bridging gaps

    The simplest battery consists of an anode, a cathode, and an electrolyte. During discharge, electrons flow through an external circuit from the negative anode to the positive cathode, while ions move through the electrolyte to complete the circuit. During charging, an external voltage reverses the process: the electrode roles swap, and the electrons move back to where they started. Materials used for the anode, cathode, and electrolyte determine the battery’s weight, power, and cost “entitlement,” which is the total cost at the component level.

    During the 1980s and 1990s, the use of lithium revolutionized batteries, making them smaller, lighter, and able to hold a charge for longer. The storage devices Form Energy has devised are rechargeable batteries based on iron, which has several advantages over lithium. A big one is cost.

    Chiang once declared to the MIT Club of Northern California, “I love lithium-ion.” Two of the four MIT spinoffs Chiang founded center on innovative lithium-ion batteries. But at hundreds of dollars a kilowatt-hour (kWh) and with a storage capacity typically measured in hours, lithium-ion was ill-suited for the use he now had in mind.

    The approach Chiang envisioned had to be cost-effective enough to boost the attractiveness of renewables. Making solar and wind energy reliable enough for millions of customers meant storing it long enough to fill the gaps created by extreme weather, grid outages, lulls in the wind, and stretches of cloudy days.

    To be competitive with legacy power plants, Chiang’s method had to come in at around $20 per kilowatt-hour of stored energy — one-tenth the cost of lithium-ion battery storage.

    But how to transition from expensive batteries that store and discharge over a couple of hours to some as-yet-undefined, cheap, longer-duration technology?

    “One big ball of iron”

    That’s where Ferrara comes in. Ferrara has a PhD in nuclear engineering from MIT and a PhD in electrical engineering and computer science from the University of L’Aquila in his native Italy. In 2017, as a research affiliate at the MIT Department of Materials Science and Engineering, he worked with Chiang to model the grid’s need to manage renewables’ intermittency.

    How intermittent depends on where you are. In the United States, for instance, there’s the windy Great Plains; the sun-drenched, relatively low-wind deserts of Arizona, New Mexico, and Nevada; and the often-cloudy Pacific Northwest.

    Ferrara, in collaboration with Professor Jessika Trancik of MIT’s Institute for Data, Systems, and Society and her MIT team, modeled four representative locations in the United States and concluded that energy storage with capacity costs below roughly $20/kWh and discharge durations of multiple days would allow a wind-solar mix to provide cost-competitive, firm electricity in resource-abundant locations.

    Now that they had a time frame, they turned their attention to materials. At the price point Form Energy was aiming for, lithium was out of the question. Chiang looked at plentiful and cheap sulfur. But a sulfur, sodium, water, and air battery had technical challenges.

    Thomas Edison once used iron as an electrode, and iron-air batteries were first studied in the 1960s. They were too heavy to make good transportation batteries. But this time, Chiang and team were looking at a battery that sat on the ground, so weight didn’t matter. Their priorities were cost and availability.

    “Iron is produced, mined, and processed on every continent,” Chiang says. “The Earth is one big ball of iron. We wouldn’t ever have to worry about even the most ambitious projections of how much storage that the world might use by mid-century.” If Form ever moves into the residential market, “it’ll be the safest battery you’ve ever parked at your house,” Chiang laughs. “Just iron, air, and water.”

    Scientists call it reversible rusting. While discharging, the battery takes in oxygen and converts iron to rust. Applying an electrical current converts the rusty pellets back to iron, and the battery “breathes out” oxygen as it charges. “In chemical terms, you have iron, and it becomes iron hydroxide,” Chiang says. “That means electrons were extracted. You get those electrons to go through the external circuit, and now you have a battery.”
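
    In electrochemical shorthand, the “reversible rusting” Chiang describes corresponds to the standard iron-air half-reactions, written here in LaTeX as a simplified textbook sketch (the article does not specify Form Energy’s exact cell chemistry):

        % Discharge direction shown; charging drives each reaction in reverse.
        \begin{align*}
        \text{Anode (iron):}  \quad & \mathrm{Fe} + 2\,\mathrm{OH^-} \rightarrow \mathrm{Fe(OH)_2} + 2e^- \\
        \text{Cathode (air):} \quad & \mathrm{O_2} + 2\,\mathrm{H_2O} + 4e^- \rightarrow 4\,\mathrm{OH^-} \\
        \text{Overall:}       \quad & 2\,\mathrm{Fe} + \mathrm{O_2} + 2\,\mathrm{H_2O} \rightarrow 2\,\mathrm{Fe(OH)_2}
        \end{align*}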

    Form Energy’s battery modules are approximately the size of a washer-and-dryer unit. They are stacked in 40-foot containers, and several containers are electrically connected with power conversion systems to build storage plants that can cover several acres.

    The right place at the right time

    The modules don’t look or act like anything utilities have contracted for before.

    That’s one of Form’s key challenges. “There is not widespread knowledge of needing these new tools for decarbonized grids,” Ferrara says. “That’s not the way utilities have typically planned. They’re looking at all the tools in the toolkit that exist today, which may not contemplate a multi-day energy storage asset.”

    Form Energy’s customers are largely traditional power companies seeking to expand their portfolios of renewable electricity. Some are in the process of decommissioning coal plants and shifting to renewables.

    Ferrara’s research pinpointing the need for very low-cost multi-day storage provides key data for power suppliers seeking to determine the most cost-effective way to integrate more renewable energy.

    Using the same modeling techniques, Ferrara and team show potential customers how the technology fits in with their existing system, how it competes with other technologies, and how, in some cases, it can operate synergistically with other storage technologies.

    “They may need a portfolio of storage technologies to fully balance renewables on different timescales of intermittency,” he says. But other than the technology developed at Form, “there isn’t much out there, certainly not within the cost entitlement of what we’re bringing to market.”  Thanks to Chiang and Jaramillo’s chance encounter in Houston, Form has a several-year lead on other companies working to address this challenge. 

    In June 2023, Form Energy closed its biggest deal to date for a single project: Georgia Power’s order for a 15-megawatt/1,500-megawatt-hour system. That order brings Form’s total amount of energy storage under contracts with utility customers to 40 megawatts/4 gigawatt-hours. To meet the demand, Form is building a new commercial-scale battery manufacturing facility in West Virginia.

    The fact that Form Energy is creating jobs in an area that lost more than 10,000 steel jobs over the past decade is not lost on Chiang. “And these new jobs are in clean tech. It’s super exciting to me personally to be doing something that benefits communities outside of our traditional technology centers.

    “This is the right time for so many reasons,” Chiang says. He says he and his Form Energy co-founders feel “tremendous urgency to get these batteries out into the world.”

    This article appears in the Winter 2024 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • Accelerated climate action needed to sharply reduce current risks to life and life-support systems

    Hottest day on record. Hottest month on record. Extreme marine heatwaves. Record-low Antarctic sea-ice.

    While El Niño is a short-term factor in this year’s record-breaking heat, human-caused climate change is the long-term driver. As global warming edges closer to 1.5 degrees Celsius — the aspirational upper limit set in the Paris Agreement in 2015 — it is ushering in more intense and frequent heatwaves, floods, wildfires, and other climate extremes much sooner than many expected, and current greenhouse gas emissions-reduction policies are far too weak to keep the planet from exceeding that threshold. In fact, on roughly one-third of days in 2023, the average global temperature was at least 1.5 C higher than pre-industrial levels. Faster and bolder action will be needed — from the in-progress United Nations Climate Change Conference (COP28) and beyond — to stabilize the climate and minimize risks to human (and nonhuman) lives and the life-support systems (e.g., food, water, shelter, and more) upon which they depend.

    Quantifying the risks posed by simply maintaining existing climate policies — and the benefits (i.e., avoided damages and costs) of accelerated climate action aligned with the 1.5 C goal — is the central task of the 2023 Global Change Outlook, recently released by the MIT Joint Program on the Science and Policy of Global Change.

    Based on a rigorous, integrated analysis of population and economic growth, technological change, Paris Agreement emissions-reduction pledges (Nationally Determined Contributions, or NDCs), geopolitical tensions, and other factors, the report presents the MIT Joint Program’s latest projections for the future of the earth’s energy, food, water, and climate systems, as well as prospects for achieving the Paris Agreement’s short- and long-term climate goals.

    The 2023 Global Change Outlook performs its risk-benefit analysis by focusing on two scenarios. The first, Current Trends, assumes that Paris Agreement NDCs are implemented through the year 2030, and maintained thereafter. While this scenario represents an unprecedented global commitment to limit greenhouse gas emissions, it neither stabilizes climate nor limits climate change. The second scenario, Accelerated Actions, extends from the Paris Agreement’s initial NDCs and aligns with its long-term goals. This scenario aims to limit and stabilize human-induced global climate warming to 1.5 C by the end of this century with at least a 50 percent probability. Uncertainty is quantified using 400-member ensembles of projections for each scenario.

    This year’s report also includes a visualization tool that enables a higher-resolution exploration of both scenarios.

    Energy

    Between 2020 and 2050, population and economic growth are projected to drive continued increases in energy needs and electrification. Successful achievement of current Paris Agreement pledges will reinforce a shift away from fossil fuels, but additional actions will be required to accelerate the energy transition needed to cap global warming at 1.5 C by 2100.

    During this 30-year period under the Current Trends scenario, the share of fossil fuels in the global energy mix drops from 80 percent to 70 percent. Variable renewable energy (wind and solar) is the fastest-growing energy source, with more than an 8.6-fold increase. In the Accelerated Actions scenario, the share of low-carbon energy sources grows from 20 percent to slightly more than 60 percent, a much faster growth rate than in the Current Trends scenario; wind and solar energy undergo more than a 13.3-fold increase.

    While the electric power sector is expected to successfully scale up (with electricity production increasing by 73 percent under Current Trends, and 87 percent under Accelerated Actions) to accommodate increased demand (particularly for variable renewables), other sectors face stiffer challenges in their efforts to decarbonize.

    “Due to a sizeable need for hydrocarbons in the form of liquid and gaseous fuels for sectors such as heavy-duty long-distance transport, high-temperature industrial heat, agriculture, and chemical production, hydrogen-based fuels and renewable natural gas remain attractive options, but the challenges related to their scaling opportunities and costs must be resolved,” says MIT Joint Program Deputy Director Sergey Paltsev, a lead author of the 2023 Global Change Outlook.

    Water, food, and land

    With a global population projected to reach 9.9 billion by 2050, the Current Trends scenario indicates that more than half of the world’s population will experience pressure on its water supply, and that three of every 10 people will live in water basins facing compounding societal and environmental pressures on water resources. Comparing population projections under combined water stress across scenarios shows that the Accelerated Actions scenario can spare roughly 40 million of the additional 570 million people projected to be living in water-stressed basins at mid-century.

    Under the Current Trends scenario, agriculture and food production will keep growing. This will increase pressure for land-use change, water use, and use of energy-intensive inputs, which will in turn lead to higher greenhouse gas emissions. The Accelerated Actions scenario yields less agricultural and food output by 2050 than Current Trends, because it dampens economic growth and raises production costs. Livestock production is more greenhouse gas emissions-intensive than crop and food production, so carbon-pricing policies raise its costs and prices and drive demand downward. These impacts are transmitted to the food sector and imply lower consumption of livestock-based products.

    Land-use changes in the Accelerated Actions scenario are similar to those in the Current Trends scenario by 2050, except for land dedicated to bioenergy production. At the world level, the Accelerated Actions scenario requires cropland area to increase by 1 percent and pastureland to decrease by 4.2 percent, but land use for bioenergy must increase by 44 percent.

    Climate trends

    Under the Current Trends scenario, the world is likely (more than 50 percent probability) to exceed 2 C global climate warming by 2060, 2.8 C by 2100, and 3.8 C by 2150. Our latest climate-model information indicates that maximum temperatures will likely outpace mean temperature trends over much of North and South America, Europe, northern and southeast Asia, and southern parts of Africa and Australasia. So as human-forced climate warming intensifies, these regions are expected to experience more pronounced record-breaking extreme heat events.

    Under the Accelerated Actions scenario, global temperature will continue to rise through the next two decades. But by 2050, global temperature will stabilize, and then slightly decline through the latter half of the century.

    “By 2100, the Accelerated Actions scenario indicates that the world can be virtually assured of remaining below 2 C of global warming,” says MIT Joint Program Deputy Director C. Adam Schlosser, a lead author of the report. “Nevertheless, additional policy mechanisms must be designed with more comprehensive targets that also support a cleaner environment, sustainable resources, as well as improved and equitable human health.”

    The Accelerated Actions scenario not only stabilizes the global increase in precipitation (by 2060), but also reduces the magnitude and potential range of that increase to roughly one-third of the changes projected under Current Trends. Because any global increase in precipitation heightens flood risk worldwide, policies aligned with the Accelerated Actions scenario would considerably reduce that risk.

    Prospects for meeting Paris Agreement climate goals

    Numerous countries and regions are progressing in fulfilling their Paris Agreement pledges, and many have declared more ambitious greenhouse gas emissions-mitigation goals, but financing to assist the least-developed countries in sustainable development is not forthcoming at the levels needed. In this year’s Global Stocktake Synthesis Report, the U.N. Framework Convention on Climate Change evaluated the emissions reductions communicated by the parties to the Paris Agreement and concluded that global emissions are not on track to meet the agreement’s most ambitious long-term temperature goals (keeping warming well below 2 C — and, ideally, 1.5 C — above pre-industrial levels), and that the window to raise ambition and implement existing commitments is rapidly narrowing. The Current Trends scenario arrives at the same conclusion.

    The 2023 Global Change Outlook finds that both global temperature targets remain achievable, but require much deeper near-term emissions reductions than those embodied in current NDCs.

    Reducing climate risk

    This report explores two well-known sets of risks posed by climate change. The research highlighted indicates that elevated climate-related physical risks will continue to evolve through mid-century, alongside heightened transition risks arising from the shifts in the political, technological, social, and economic landscapes that are likely to occur during the transition to a low-carbon economy.

    “Our Outlook shows that without aggressive actions the world will surpass critical greenhouse gas concentration thresholds and climate targets in the coming decades,” says MIT Joint Program Director Ronald Prinn. “While the costs of inaction are getting higher, the costs of action are more manageable.”

  • in

    Alumnus’ thermal battery helps industry eliminate fossil fuels

    The explosion of renewable energy projects around the globe is leading to a saturation problem. As more renewable power contributes to the grid, the value of electricity is plummeting during the times of day when wind and solar hit peak productivity. The problem is limiting renewable energy investments in some of the sunniest and windiest places in the world.

    Now Antora Energy, co-founded by David Bierman SM ’14, PhD ’17, is addressing the intermittent nature of wind and solar with a low-cost, highly efficient thermal battery that stores electricity as heat to allow manufacturers and other energy-hungry businesses to eliminate their use of fossil fuels.

    “We take electricity when it’s cheapest, meaning when wind gusts are strongest and the sun is shining brightest,” Bierman explains. “We run that electricity through a resistive heater to drive up the temperature of a very inexpensive material — we use carbon blocks, which are extremely stable, produced at incredible scales, and are some of the cheapest materials on Earth. When you need to pull energy from the battery, you open a large shutter to extract thermal radiation, which is used to generate process heat or power using our thermophotovoltaic, or TPV, technology. The end result is a zero-carbon, flexible, combined heat and power system for industry.”
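    A rough sensible-heat estimate suggests why inexpensive carbon blocks can store so much energy. The figures below are generic, textbook-style assumptions (an average specific heat for graphite and a plausible charge–discharge temperature swing), not Antora’s design values:

    ```python
    # Rough sensible-heat estimate for a carbon-block thermal store.
    # All values are generic assumptions, not Antora's actual design figures.
    mass_kg = 1000.0           # one tonne of carbon blocks
    cp_kj_per_kg_k = 1.5       # assumed average specific heat of graphite over the swing
    delta_t_k = 1500.0         # assumed charge/discharge temperature swing, in kelvin

    energy_kwh = mass_kg * cp_kj_per_kg_k * delta_t_k / 3600.0
    print(f"~{energy_kwh:.0f} kWh of heat stored per tonne of carbon")  # ~625 kWh
    ```

    Under these assumptions, a single tonne of carbon holds on the order of several hundred kilowatt-hours of heat, which is consistent with the company’s emphasis on the cheapness and stability of the material.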

    Antora’s battery could dramatically expand the application of renewable energy by enabling its use in industry, a sector of the U.S. economy that accounted for nearly a quarter of all greenhouse gas emissions in 2021.

    Antora says it is able to deliver on the long-sought promise of heat-to-power TPV technology because it has achieved new levels of efficiency and scalability with its cells. Earlier this year, Antora opened a new manufacturing facility that will be capable of producing 2 megawatts of its TPV cells each year — which the company says makes it the largest TPV production facility in the world.

    Antora’s thermal battery manufacturing facilities and demonstration unit are located in sun-soaked California, where renewables make up close to a third of all electricity. But Antora’s team says its technology holds promise in other regions as increasingly large renewable projects connect to grids across the globe.

    “We see places today [with high renewables] as a sign of where things are going,” Bierman says. “If you look at the tailwinds we have in the renewable industry, there’s a sense of inevitability about solar and wind, which will need to be deployed at incredible scales to avoid a climate catastrophe. We’ll see terawatts and terawatts of new additions of these renewables, so what you see today in California or Texas or Kansas, with significant periods of renewable overproduction, is just the tip of the iceberg.”

    Bierman has been working on thermal energy storage and thermophotovoltaics since his time at MIT, and Antora’s ties to MIT are especially strong because its progress is the result of two MIT startups becoming one.

    Alumni join forces

    Bierman did his master’s and doctoral work in MIT’s Department of Mechanical Engineering, where he worked on solid-state solar thermal energy conversion systems. In 2016, while taking course 15.366 (Climate and Energy Ventures), he met Jordan Kearns SM ’17, then a graduate student in the Technology and Policy Program and the Department of Nuclear Science and Engineering. The two were studying renewable energy when they began to think about the intermittent nature of wind and solar as an opportunity rather than a problem.

    “There are already places in the U.S. where we have more wind and solar at times than we know what to do with,” Kearns says. “That is an opportunity for not only emissions reductions but also for reducing energy costs. What’s the application? I don’t think the overproduction of energy was being talked about as much as the intermittency problem.”

    Kearns did research through the MIT Energy Initiative and the researchers received support from MIT’s Venture Mentoring Service and the MIT Sandbox Innovation Fund to further explore ways to capitalize on fluctuating power prices.

    Kearns officially founded a company called Medley Thermal in 2017 to help companies that use natural gas switch to energy produced by renewables when the price was right. To accomplish that, he combined an off-the-shelf electric boiler with novel control software so the companies could switch energy sources seamlessly from fossil fuel to electricity at especially windy or sunny times. Medley went on to become a finalist for the MIT Clean Energy Prize, and Kearns wanted Bierman to join him as a co-founder, but Bierman had received a fellowship to commercialize a thermal energy storage solution and decided to pursue that after graduation.

    The split ended up working out for both alumni. In the ensuing years, Kearns led Medley Thermal through a number of projects in which gradually larger companies switched from natural gas or propane to renewable electricity from the grid. The work culminated in an installation at the Jay Peak resort in Vermont that Kearns says is one of the largest projects in the U.S. using renewable energy to produce heat. The project is expected to reduce carbon dioxide emissions by about 2,500 tons per year.

    Bierman, meanwhile, further developed a thermal energy storage solution for industrial decarbonization, which uses renewable electricity to heat blocks of carbon that are held in insulation to retain energy for long periods of time. The heat from those blocks can then be used to deliver electricity or heat to customers at temperatures that can exceed 1,500 C. When Antora raised a $50 million Series A funding round last year, Bierman asked Kearns if he could buy out Medley’s team, and the researchers finally became co-workers.

    “Antora and Medley Thermal have a similar value prop: There’s low-cost electricity, and we want to connect that to the industrial sector,” Kearns explains. “But whereas Medley used renewables on an as-available basis, and then when the winds stop we went back to burning fossil fuel with a boiler, Antora has a thermal battery that takes in the electricity, converts it to heat, but also stores it as heat so even when the wind stops blowing we have a reservoir of heat that we can continue to pull from to make steam or power or whatever the facility needs. So, we can now further reduce energy costs by offsetting more fuel and offer a 100 percent clean energy solution.”
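    The value proposition Kearns describes, charging on cheap renewable electricity and discharging stored heat whenever the process needs it, can be captured in a toy dispatch loop. Every number in the sketch below (prices, load, threshold, capacities) is a made-up illustration of the concept, not Antora’s control logic:

    ```python
    # Toy model of the charge-when-cheap / discharge-on-demand idea described above.
    prices = [20, 15, 5, 0, 0, 10, 40, 60]  # hypothetical hourly electricity prices, $/MWh
    heat_load_mwh = 2.0                     # constant process-heat demand per hour (hypothetical)
    price_threshold = 12                    # charge only when electricity is at or below this price
    charge_rate_mwh = 6.0                   # maximum electrical input per hour
    capacity_mwh = 20.0                     # thermal storage capacity

    stored = 10.0  # initial state of charge, MWh of heat
    for hour, price in enumerate(prices):
        if price <= price_threshold:
            # Resistive heating converts electricity to heat at essentially 100% efficiency.
            stored = min(capacity_mwh, stored + charge_rate_mwh)
        delivered = min(stored, heat_load_mwh)  # heat supplied to the industrial process
        stored -= delivered
        print(f"hour {hour}: ${price}/MWh, delivered {delivered:.1f} MWh heat, {stored:.1f} MWh in store")
    ```

    The point of the exercise is the one Kearns makes: as long as the store is sized to ride through lulls, the industrial customer sees continuous heat even though the electricity purchases are opportunistic.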

    United we scale

    Today, Kearns runs the project development arm of Antora.

    “There are other, much larger projects in the pipeline,” Kearns says. “The Jay Peak project is about 3 megawatts of power, but some of the ones we’re working on now are 30, 60 megawatt projects. Those are more industrial focused, and they’re located in places where we have a strong industrial base and an abundance of renewables, everywhere from Texas to Kansas to the Dakotas — that heart of the country that our team lovingly calls the Wind Belt.”

    Antora’s future projects will be with companies in the chemicals, mining, food and beverage, and oil and gas industries. Some of those projects are expected to come online as early as 2025.          

    The company’s scaling strategy is centered on the inexpensive production process for its batteries.

    “We constantly ask ourselves, ‘What is the best product we can make here?’” Bierman says. “We landed on a compact, containerized, modular system that gets shipped to sites and is easily integrated into industrial processes. It means we don’t have huge construction projects, timelines, and budget overruns. Instead, it’s all about scaling up the factory that builds these thermal batteries and just churning them out.”

    It was a winding journey for Kearns and Bierman, but they now believe they’re positioned to help huge companies become carbon-free while promoting the growth of the solar and wind industries.

    “The more I dig into this, the more shocked I am at how important a piece of the decarbonization puzzle this is today,” Bierman says. “The need has become super real since we first started talking about this in 2016. The economic opportunity has grown, but more importantly the awareness from industries that they need to decarbonize is totally different. Antora can help with that, so we’re scaling up as rapidly as possible to meet the demand we see in the market.”

  • in

    Embracing the future we need

    When you picture MIT doctoral students taking small PhD courses together, you probably don’t imagine them going on class field trips. But it does happen, sometimes, and one of those trips changed Andy Sun’s career.

    Today, Sun is a faculty member at the MIT Sloan School of Management and a leading global expert on integrating renewable energy into the electric grid. Back in 2007, Sun was an operations research PhD candidate with a diversified academic background: He had studied electrical engineering, quantum computing, and analog computing but was still searching for a doctoral research subject involving energy. 

    One day, as part of a graduate energy class taught by visiting professor Ignacio J. Pérez Arriaga, the students visited the headquarters of ISO-New England, the organization that operates New England’s entire power grid and wholesale electricity market. Suddenly, it hit Sun. His understanding of engineering, used to design and optimize computing systems, could be applied to the grid as a whole, with all its connections, circuitry, and need for efficiency. 

    “The power grids in the U.S. continent are composed of two major interconnections, the Western Interconnection, the Eastern Interconnection, and one minor interconnection, the Texas grid,” Sun says. “Within each interconnection, the power grid is one big machine, essentially. It’s connected by tens of thousands of miles of transmission lines, thousands of generators, and consumers, and if anything is not synchronized, the system may collapse. It’s one of the most complicated engineering systems.”

    And just like that, Sun had a subject he was motivated to pursue. “That’s how I got into this field,” he says. “Taking a field trip.”

    Sun has barely looked back. He has published dozens of papers about optimizing the flow of intermittent renewable energy through the electricity grid, a major practical issue for grid operators, while also thinking broadly about the future form of the grid and the process of making almost all energy renewable. Sun, who rejoined MIT in 2022 as the Iberdrola-Avangrid Associate Professor in Electric Power Systems and is also an associate professor of operations research, emphasizes the urgency of rapidly switching to renewables.

    “The decarbonization of our energy system is fundamental,” Sun says. “It will change a lot of things because it has to. We don’t have much time to get there. Two decades, three decades is the window in which we have to get a lot of things done. If you think about how much money will need to be invested, it’s not actually that much. We should embrace this future that we have to get to.”

    Successful operations

    Unexpected as it may have been, Sun’s journey toward being an electricity grid expert was informed by all the stages of his higher education. Sun grew up in China, and received his BA in electronic engineering from Tsinghua University in Beijing, in 2003. He then moved to MIT, joining the Media Lab as a graduate student. Sun intended to study quantum computing but instead began working on analog computer circuit design for Professor Neil Gershenfeld, another person whose worldview influenced Sun.  

    “He had this vision about how optimization is very important in things,” Sun says. “I had never heard of optimization before.” 

    To learn more about it, Sun started taking MIT courses in operations research. “I really enjoyed it, especially the nonlinear optimization course taught by Robert Freund in the Operations Research Center,” he recalls. 

    Sun enjoyed it so much that, after a while, he joined MIT’s PhD program in operations research, thanks to Freund’s guidance. Later, he started working with MIT Sloan Professor Dimitris Bertsimas, a leading figure in the field. Still, Sun hadn’t quite nailed down what he wanted to focus on within operations research. Thinking of Sun’s engineering skills, Bertsimas suggested that Sun look for a research topic related to energy. 

    “He wasn’t an expert in energy at that time, but he knew that there are important problems there and encouraged me to go ahead and learn,” Sun says. 

    So it was that Sun found himself in ISO-New England headquarters one day in 2007, finally knowing what he wanted to study, and quickly finding opportunities to start learning from the organization’s experts on electricity markets. By 2011, Sun had finished his MIT PhD dissertation. Based in part on ISO-New England data, the thesis presented new modeling to more efficiently integrate renewable energy into the grid; built some new modeling tools grid operators could use; and developed a way to add fair short-term energy auctions to an efficient grid system.

    The core problem Sun deals with is that, unlike some other sources of electricity, renewables tend to be intermittent, generating power in an uneven pattern over time. That’s not an insurmountable problem for grid operators, but it does require some new approaches. Many of the papers Sun has written focus on precisely how to increasingly draw upon intermittent energy sources while ensuring that the grid’s current level of functionality remains intact. This is also the focus of his 2021 book, co-authored with Antonio J. Conejo, “Robust Optimization in Electric Energy Systems.”

    “A major theme of my research is how to achieve the integration of renewables and still operate the system reliably,” Sun says. “You have to keep the balance of supply and demand. This requires many time scales of operation from multidecade planning, to monthly or annual maintenance, to daily operations, down through second-by-second. I work on problems in all these timescales.”

    “I sit in the interface between power engineering and operations research,” Sun says. “I’m not a power engineer, but I sit in this boundary, and I keep the problems in optimization as my motivation.”
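    For a flavor of the optimization problems at that interface, consider a deliberately tiny economic-dispatch sketch: meet hourly demand from a variable renewable resource plus a costlier dispatchable unit at minimum cost. The hourly demands, availabilities, and costs below are hypothetical, and real operational models are vastly larger, but the supply-demand balance constraint at its core is the one Sun describes:

    ```python
    # Tiny economic-dispatch sketch: balance supply and demand in every hour at least cost.
    # All numbers are illustrative assumptions, not data from any real system.
    import numpy as np
    from scipy.optimize import linprog

    demand = np.array([80.0, 100.0, 120.0])         # MW demand per hour (hypothetical)
    renewable_avail = np.array([90.0, 40.0, 10.0])  # MW wind/solar available (hypothetical)
    gas_cap = 150.0                                 # MW dispatchable capacity
    gas_cost = 50.0                                 # $/MWh for the dispatchable unit

    T = len(demand)
    # Decision variables: x = [gas_0..gas_{T-1}, ren_0..ren_{T-1}]
    c = np.concatenate([np.full(T, gas_cost), np.zeros(T)])  # renewables at zero marginal cost

    # Equality constraint: gas_t + ren_t = demand_t for each hour
    A_eq = np.hstack([np.eye(T), np.eye(T)])
    b_eq = demand

    bounds = [(0, gas_cap)] * T + [(0, avail) for avail in renewable_avail]

    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    gas, ren = res.x[:T], res.x[T:]
    print("dispatchable MW per hour:", np.round(gas, 1))  # [  0.  60. 110.]
    print("renewable MW per hour:   ", np.round(ren, 1))  # [ 80.  40.  10.]
    ```

    In practice, operators solve versions of this problem with thousands of generators, transmission constraints, and uncertainty in renewable forecasts, which is where robust optimization methods of the kind Sun studies come in.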

    Culture shift

    Sun’s presence on the MIT campus represents a homecoming of sorts. After receiving his doctorate from MIT, Sun spent a year as a postdoc at IBM’s Thomas J. Watson Research Center, then joined the faculty at Georgia Tech, where he remained for a decade. He returned to the Institute in January of 2022.

    “I’m just very excited about the opportunity of being back at MIT,” Sun says. “The MIT Energy Initiative is such a vibrant place, where many people come together to work on energy. I sit in Sloan, but one very strong point of MIT is that there are not many barriers, institutionally. I really look forward to working with colleagues from engineering, Sloan, everywhere, moving forward. We’re moving in the right direction, with a lot of people coming together to break the traditional academic boundaries.” 

    Still, Sun warns that some people may be underestimating the severity of the challenge ahead and the need to implement changes right now. Power grid assets have long lifetimes, lasting multiple decades. That means investment decisions made now could affect how much clean power is being used a generation from now. 

    “We’re talking about a short timeline, for changing something as huge as how a society fundamentally powers itself with energy,” Sun says. “A lot of that must come from the technology we have today. Renewables are becoming much better and cheaper, so their use has to go up.”

    And that means more people need to work on issues of how to deploy and integrate renewables into everyday life, in the electric grid, transportation, and more. Sun hopes people will increasingly recognize energy as a huge growth area for research and applied work. For instance, when MIT President Sally Kornbluth gave her inaugural address on May 1 this year, she emphasized tackling the climate crisis as her highest priority, something Sun noticed and applauded. 

    “I think the most important thing is the culture,” Sun says. “Bring climate up to the front, and create the platform to encourage people to come together and work on this issue.”