More stories

  • Study of disordered rock salts leads to battery breakthrough

    For the past decade, disordered rock salt has been studied as a potential breakthrough cathode material for use in lithium-ion batteries and a key to creating low-cost, high-energy storage for everything from cell phones to electric vehicles to renewable energy storage. A new MIT study is making sure the material fulfills that promise.

    Led by Ju Li, the Tokyo Electric Power Company Professor in Nuclear Engineering and professor of materials science and engineering, a team of researchers describes a new class of partially disordered rock salt cathode, integrated with polyanions — dubbed disordered rock salt-polyanionic spinel, or DRXPS — that delivers high energy density at high voltages with significantly improved cycling stability.

    “There is typically a trade-off in cathode materials between energy density and cycling stability … and with this work we aim to push the envelope by designing new cathode chemistries,” says Yimeng Huang, a postdoc in the Department of Nuclear Science and Engineering and first author of a paper describing the work published today in Nature Energy. “(This) material family has high energy density and good cycling stability because it integrates two major types of cathode materials, rock salt and polyanionic olivine, so it has the benefits of both.”

    Importantly, Li adds, the new material family is primarily composed of manganese, an earth-abundant element that is significantly less expensive than elements like nickel and cobalt, which are typically used in cathodes today.

    “Manganese is at least five times less expensive than nickel, and about 30 times less expensive than cobalt,” Li says.
    “Manganese is also one of the keys to achieving higher energy densities, so having that material be much more earth-abundant is a tremendous advantage.”

    A possible path to renewable energy infrastructure

    That advantage will be particularly critical, Li and his co-authors wrote, as the world looks to build the renewable energy infrastructure needed for a low- or no-carbon future. Batteries are a particularly important part of that picture, not only for their potential to decarbonize transportation with electric cars, buses, and trucks, but also because they will be essential to addressing the intermittency of wind and solar power by storing excess energy, then feeding it back into the grid at night or on calm days, when renewable generation drops.

    Given the high cost and relative rarity of materials like cobalt and nickel, they wrote, efforts to rapidly scale up electric storage capacity would likely lead to extreme cost spikes and potentially significant materials shortages.

    “If we want to have true electrification of energy generation, transportation, and more, we need earth-abundant batteries to store intermittent photovoltaic and wind power,” Li says. “I think this is one of the steps toward that dream.”

    That sentiment was shared by Gerbrand Ceder, the Samsung Distinguished Chair in Nanoscience and Nanotechnology Research and a professor of materials science and engineering at the University of California at Berkeley. “Lithium-ion batteries are a critical part of the clean energy transition,” Ceder says.
    “Their continued growth and price decrease depend on the development of inexpensive, high-performance cathode materials made from earth-abundant materials, as presented in this work.”

    Overcoming obstacles in existing materials

    The new study addresses one of the major challenges facing disordered rock salt cathodes: oxygen mobility. These materials have long been recognized for offering very high capacity — as much as 350 milliampere-hours per gram, compared with typical capacities of 190 to 200 milliampere-hours per gram for traditional cathode materials — but they are not very stable.

    The high capacity comes partially from oxygen redox, which is activated when the cathode is charged to high voltages. But when that happens, oxygen becomes mobile, leading to reactions with the electrolyte and degradation of the material, eventually leaving it effectively useless after prolonged cycling.

    To overcome those challenges, Huang added another element — phosphorus — that essentially acts like a glue, holding the oxygen in place to mitigate degradation.

    “The main innovation here, and the theory behind the design, is that Yimeng added just the right amount of phosphorus, which forms so-called polyanions with its neighboring oxygen atoms, into a cation-deficient rock salt structure that can pin them down,” Li explains. “That allows us to basically stop the percolating oxygen transport due to strong covalent bonding between phosphorus and oxygen … meaning we can both utilize the oxygen-contributed capacity, but also have good stability as well.”

    That ability to charge batteries to higher voltages, Li says, is crucial because it allows for simpler systems to manage the energy they store. “You can say the quality of the energy is higher,” he says.
    “The higher the voltage per cell, the less you need to connect them in series in the battery pack, and the simpler the battery management system.”

    Pointing the way to future studies

    While the cathode material described in the study could have a transformative impact on lithium-ion battery technology, several avenues remain for future study. Among them, Huang says, are efforts to explore new ways to fabricate the material, particularly for morphology and scalability considerations.

    “Right now, we are using high-energy ball milling for mechanochemical synthesis, and … the resulting morphology is non-uniform and has small average particle size (about 150 nanometers). This method is also not quite scalable,” he says. “We are trying to achieve a more uniform morphology with larger particle sizes using some alternate synthesis methods, which would allow us to increase the volumetric energy density of the material and may allow us to explore some coating methods … which could further improve the battery performance. The future methods, of course, should be industrially scalable.”

    In addition, he says, the disordered rock salt material by itself is not a particularly good conductor, so significant amounts of carbon — as much as 20 weight percent of the cathode paste — were added to boost its conductivity. If the team can reduce the carbon content in the electrode without sacrificing performance, there will be more active material in the battery, leading to an increased practical energy density.

    “In this paper, we just used Super P, a typical conductive carbon consisting of nanospheres, but they’re not very efficient,” Huang says.
    “We are now exploring the use of carbon nanotubes, which could reduce the carbon content to just 1 or 2 weight percent and allow us to dramatically increase the amount of the active cathode material.”

    Aside from decreasing carbon content, making thicker electrodes, he adds, is yet another way to increase the battery’s practical energy density, and another area of research the team is working on.

    “This is only the beginning of DRXPS research, since we only explored a few chemistries within its vast compositional space,” he continues. “We can play around with different ratios of lithium, manganese, phosphorus, and oxygen, and with various combinations of other polyanion-forming elements such as boron, silicon, and sulfur.”

    With optimized compositions, more scalable synthesis methods, better morphology that allows for uniform coatings, lower carbon content, and thicker electrodes, he says, the DRXPS cathode family is very promising for applications in electric vehicles and grid storage, and possibly even in consumer electronics, where volumetric energy density is very important.

    This work was supported with funding from the Honda Research Institute USA Inc. and the Molecular Foundry at Lawrence Berkeley National Laboratory, and used resources of the National Synchrotron Light Source II at Brookhaven National Laboratory and the Advanced Photon Source at Argonne National Laboratory.

  • Offering clean energy around the clock

    As remarkable as the rise of solar and wind farms has been over the last 20 years, achieving complete decarbonization is going to require a host of complementary technologies. That’s because renewables offer only intermittent power. They also can’t directly provide the high temperatures necessary for many industrial processes.

    Now, 247Solar is building high-temperature concentrated solar power systems that use overnight thermal energy storage to provide round-the-clock power and industrial-grade heat.

    The company’s modular systems can be used as standalone microgrids for communities or to provide power in remote places like mines and farms. They can also be used in conjunction with wind and conventional solar farms, giving customers 24/7 power from renewables and allowing them to offset use of the grid.

    “One of my motivations for working on this system was trying to solve the problem of intermittency,” 247Solar CEO Bruce Anderson ’69, SM ’73, says. “I just couldn’t see how we could get to zero emissions with solar photovoltaics (PV) and wind. Even with PV, wind, and batteries, we can’t get there, because there’s always bad weather, and current batteries aren’t economical over long periods. You have to have a solution that operates 24 hours a day.”

    The company’s system is inspired by the design of a high-temperature heat exchanger by the late MIT Professor Emeritus David Gordon Wilson, who co-founded the company with Anderson. The company integrates that heat exchanger into what Anderson describes as a conventional, jet-engine-like turbine, enabling the turbine to produce power by circulating ambient pressure hot air with no combustion or emissions — what the company calls a first in the industry.

    Here’s how the system works: Each 247Solar system uses a field of sun-tracking mirrors called heliostats to reflect sunlight to the top of a central tower. The tower features a proprietary solar receiver that heats air to around 1,000 degrees Celsius at atmospheric pressure. The air is then used to drive 247Solar’s turbines and generate 400 kilowatts of electricity and 600 kilowatts of heat. Some of the hot air is also routed through a long-duration thermal energy storage system, where it heats solid materials that retain the heat. The stored heat is then used to drive the turbines when the sun stops shining.
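
    The round-the-clock claim follows from simple energy accounting. A minimal sketch, using the 400-kilowatt electric output cited above but with an illustrative (not company-provided) split between sunlit and storage-driven hours:

```python
# Back-of-the-envelope output for one 247Solar module.
# ELECTRIC_KW comes from the article; the hour split below is illustrative.
ELECTRIC_KW = 400

def daily_output_kwh(sun_hours, stored_hours):
    """Electricity delivered over a day: direct solar-driven generation
    during sun_hours plus storage-driven generation after dark."""
    return ELECTRIC_KW * (sun_hours + stored_hours)

# e.g. 8 sunny hours plus 16 hours running the turbine off thermal storage
print(daily_output_kwh(8, 16))  # 9600 kWh: continuous 24-hour output
```

    The point of the sketch is that as long as the thermal store covers the dark hours, the turbine's electrical output is continuous rather than intermittent.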

    “We offer round-the-clock electricity, but we also offer a combined heat and power option, with the ability to take heat up to 970 Celsius for use in industrial processes,” Anderson says. “It’s a very flexible system.”

    The company’s first deployment will be with a large utility in India. If that goes well, 247Solar hopes to scale up rapidly with other utilities, corporations, and communities around the globe.

    A new approach to concentrated solar

    Anderson kept in touch with his MIT network after graduating in 1973. He served as the director of MIT’s Industrial Liaison Program (ILP) between 1996 and 2000 and was elected as an alumni member of the MIT Corporation in 2013. The ILP connects companies with MIT’s network of students, faculty, and alumni to facilitate innovation, and the experience changed the course of Anderson’s career.

    “That was an extremely fascinating job, and from it two things happened,” Anderson says. “One is that I realized I was really an entrepreneur and was not well-suited to the university environment, and the other is that I was reminded of the countless amazing innovations coming out of MIT.”

    After leaving as director, Anderson began a startup incubator where he worked with MIT professors to start companies. Eventually, one of those professors was Wilson, who had invented the new heat exchanger and a ceramic turbine. Anderson and Wilson ended up putting together a small team to commercialize the technology in the early 2000s.

    Anderson had done his MIT master’s thesis on solar energy in the 1970s, and the team realized the heat exchanger made possible a novel approach to concentrated solar power. In 2010, they received a $6 million development grant from the U.S. Department of Energy. But their first solar receiver was damaged during shipping to a national laboratory for testing, and the company ran out of money.

    It wasn’t until 2015 that Anderson was able to raise money to get the company back off the ground. By that time, a new high-temperature metal alloy had been developed that Anderson swapped out for Wilson’s ceramic heat exchanger.

    The Covid-19 pandemic further slowed 247’s plans to build a demonstration facility at its test site in Arizona, but strong customer interest has kept the company busy. Concentrated solar power doesn’t work everywhere — Arizona’s clear sunshine is a better fit than Florida’s hazy skies, for example — but Anderson is currently in talks with communities in parts of the U.S., India, Africa, and Australia where the technology would be a good fit.

    These days, the company is increasingly proposing combining its systems with traditional solar PV, which lets customers reap the benefits of low-cost solar electricity during the day while using 247’s energy at night.

    “That way we can get at least 24, if not more, hours of energy from a sunny day,” Anderson says. “We’re really moving toward these hybrid systems, which work like a Prius: Sometimes you’re using one source of energy, sometimes you’re using the other.”

    The company also sells its HeatStorE thermal batteries as standalone systems. Instead of being heated by the solar system, the thermal storage is heated by circulating air through an electric coil that’s been heated by electricity, either from the grid, standalone PV, or wind. The heat can be stored for nine hours or more on a single charge and then dispatched as electricity plus industrial process heat at 250 degrees Celsius, or as heat only, up to 970 degrees Celsius.

    Anderson says 247’s thermal battery is about one-seventh the cost of lithium-ion batteries per kilowatt-hour produced.

    Scaling a new model

    The company is keeping its system flexible for whatever path customers want to take to complete decarbonization.

    In addition to 247’s India project, the company is in advanced talks with off-grid communities in the United States and Egypt, mining operators around the world, and the government of a small country in Africa. Anderson says the company’s next customer will likely be an off-grid community in the U.S. that currently relies on diesel generators for power.

    The company has also partnered with a financial company that will allow it to access capital to fund its own projects and sell clean energy directly to customers, which Anderson says will help 247 grow faster than relying solely on selling entire systems to each customer.

    As it works to scale up its deployments, Anderson believes 247 offers a solution to help customers respond to increasing pressure from governments as well as community members.

    “Emerging economies in places like Africa don’t have any alternative to fossil fuels if they want 24/7 electricity,” Anderson says. “Our owning and operating costs are less than half that of diesel gen-sets. Customers today really want to stop producing emissions if they can, so you’ve got villages, mines, industries, and entire countries where the people inside are saying, ‘We can’t burn diesel anymore.’”

  • Cutting carbon emissions on the US power grid

    To help curb climate change, the United States is working to reduce carbon emissions from all sectors of the energy economy. Much of the current effort involves electrification — switching to electric cars for transportation, electric heat pumps for home heating, and so on. But in the United States, the electric power sector already generates about a quarter of all carbon emissions. “Unless we decarbonize our electric power grids, we’ll just be shifting carbon emissions from one source to another,” says Amanda Farnsworth, a PhD candidate in chemical engineering and research assistant at the MIT Energy Initiative (MITEI).

    But decarbonizing the nation’s electric power grids will be challenging. The availability of renewable energy resources such as solar and wind varies in different regions of the country. Likewise, patterns of energy demand differ from region to region. As a result, the least-cost pathway to a decarbonized grid will differ from one region to another.

    Over the past two years, Farnsworth and Emre Gençer, a principal research scientist at MITEI, developed a power system model that would allow them to investigate the importance of regional differences — and would enable experts and laypeople alike to explore their own regions and make informed decisions about the best way to decarbonize. “With this modeling capability you can really understand regional resources and patterns of demand, and use them to do a ‘bespoke’ analysis of the least-cost approach to decarbonizing the grid in your particular region,” says Gençer.

    To demonstrate the model’s capabilities, Gençer and Farnsworth performed a series of case studies. Their analyses confirmed that strategies must be designed for specific regions and that all the costs and carbon emissions associated with manufacturing and installing solar and wind generators must be included for accurate accounting. But the analyses also yielded some unexpected insights, including a correlation between a region’s wind energy and the ease of decarbonizing, and the important role of nuclear power in decarbonizing the California grid.

    A novel model

    For many decades, researchers have been developing “capacity expansion models” to help electric utility planners tackle the problem of designing power grids that are efficient, reliable, and low-cost. More recently, many of those models also factor in the goal of reducing or eliminating carbon emissions. While those models can provide interesting insights relating to decarbonization, Gençer and Farnsworth believe they leave some gaps that need to be addressed.

    For example, most focus on conditions and needs in a single U.S. region without highlighting the distinctive features of their chosen region. Hardly any consider the carbon emitted in fabricating and installing such “zero-carbon” technologies as wind turbines and solar panels. And finally, most of the models are challenging to use. Even experts in the field must search out and assemble various complex datasets in order to perform a study of interest.

    Gençer and Farnsworth’s capacity expansion model — called Ideal Grid, or IG — addresses those and other shortcomings. IG is built within the framework of MITEI’s Sustainable Energy System Analysis Modeling Environment (SESAME), an energy system modeling platform that Gençer and his colleagues at MITEI have been developing since 2017. SESAME models the levels of greenhouse gas emissions from multiple, interacting energy sectors in future scenarios.

    Importantly, SESAME includes both techno-economic analyses and life-cycle assessments of various electricity generation and storage technologies. It thus considers costs and emissions incurred at each stage of the life cycle (manufacture, installation, operation, and retirement) for all generators. Most capacity expansion models only account for emissions from operation of fossil fuel-powered generators. As Farnsworth notes, “While this is a good approximation for our current grid, emissions from the full life cycle of all generating technologies become non-negligible as we transition to a highly renewable grid.”

    Through its connection with SESAME, the IG model has access to data on costs and emissions associated with many technologies critical to power grid operation. To explore regional differences in the cost-optimized decarbonization strategies, the IG model also includes conditions within each region, notably details on demand profiles and resource availability.

    In one recent study, Gençer and Farnsworth selected nine of the standard North American Electric Reliability Corporation (NERC) regions. For each region, they incorporated hourly electricity demand into the IG model. Farnsworth also gathered meteorological data for the nine U.S. regions for seven years — 2007 to 2013 — and calculated hourly power output profiles for the renewable energy sources, including solar and wind, taking into account the geography-limited maximum capacity of each technology.

    The availability of wind and solar resources differs widely from region to region. To permit a quick comparison, the researchers use a measure called “annual capacity factor,” which is the ratio between the electricity produced by a generating unit in a year and the electricity that could have been produced if that unit operated continuously at full power for that year. Values for the capacity factors in the nine U.S. regions vary between 20 percent and 30 percent for solar power and between 25 percent and 45 percent for wind.
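
    The capacity factor defined above is a straightforward ratio. A minimal sketch, with illustrative numbers rather than the study's data:

```python
def annual_capacity_factor(energy_produced_mwh, nameplate_mw, hours=8760):
    """Ratio of actual annual output to the output if the unit had run
    continuously at full (nameplate) power for the whole year."""
    max_possible_mwh = nameplate_mw * hours
    return energy_produced_mwh / max_possible_mwh

# Illustrative example: a 100 MW wind farm producing 306,600 MWh in a year
cf = annual_capacity_factor(306_600, 100)
print(f"{cf:.0%}")  # 35%, within the 25-45 percent range cited for wind
```

    The same formula applies to any generator; solar's lower values simply reflect that panels sit idle at night and produce below nameplate in morning and evening hours.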

    Calculating optimized grids for different regions

    For their first case study, Gençer and Farnsworth used the IG model to calculate cost-optimized regional grids to meet defined caps on carbon dioxide (CO2) emissions. The analyses were based on cost and emissions data for 10 technologies: nuclear, wind, solar, three types of natural gas, three types of coal, and energy storage using lithium-ion batteries. Hydroelectric was not considered in this study because there was no comprehensive study outlining potential expansion sites with their respective costs and expected power output levels.

    To make region-to-region comparisons easy, the researchers used several simplifying assumptions. Their focus was on electricity generation, so the model calculations assume the same transmission and distribution costs and efficiencies for all regions. Also, the calculations did not consider the generator fleet currently in place. The goal was to investigate what happens if each region were to start from scratch and generate an “ideal” grid.

    To begin, Gençer and Farnsworth calculated the most economic combination of technologies for each region if it limits its total carbon emissions to 100, 50, and 25 grams of CO2 per kilowatt-hour (kWh) generated. For context, the current U.S. average emissions intensity is 386 grams of CO2 emissions per kWh.

    Given the wide variation in regional demand, the researchers needed to use a new metric to normalize their results and permit a one-to-one comparison between regions. Accordingly, the model calculates the required generating capacity divided by the average demand for each region. The required capacity accounts for both the variation in demand and the inability of generating systems — particularly solar and wind — to operate at full capacity all of the time.
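
    That normalization can be written out directly. A sketch with hypothetical numbers (not taken from the IG model):

```python
def normalized_capacity(installed_capacity_gw, hourly_demand_gw):
    """Required installed capacity divided by average demand.
    Values well above 1 reflect demand variability and the fact that
    solar and wind rarely operate at full capacity."""
    avg_demand = sum(hourly_demand_gw) / len(hourly_demand_gw)
    return installed_capacity_gw / avg_demand

# Hypothetical region: demand swings between 40 and 60 GW (average 50 GW),
# and the cost-optimized grid needs 150 GW of installed capacity
demand = [40, 60, 45, 55, 50, 50]
print(round(normalized_capacity(150, demand), 1))  # 3.0
```

    Dividing by average demand is what makes a small region and a large one directly comparable: both report capacity as a multiple of the load they serve.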

    The analysis was based on regional demand data for 2021 — the most recent data available. And for each region, the model calculated the cost-optimized power grid seven times, using weather data from seven years. This discussion focuses on mean values for cost and total capacity installed and also total values for coal and for natural gas, although the analysis considered three separate technologies for each fuel.

    The results of the analyses confirm that there’s a wide variation in the cost-optimized system from one region to another. Most notable is that some regions require a lot of energy storage while others don’t require any at all. The availability of wind resources turns out to play an important role, while the use of nuclear is limited: the carbon intensity of nuclear (including uranium mining and transportation) is lower than that of either solar or wind, but nuclear is the most expensive technology option, so it’s added only when necessary. Finally, the change in the CO2 emissions cap brings some interesting responses.

    Under the most lenient limit on emissions — 100 grams of CO2 per kWh — there’s no coal in the mix anywhere. It’s the first to go, in general being replaced by the lower-carbon-emitting natural gas. Texas, Central, and North Central — the regions with the most wind — don’t need energy storage, while the other six regions do. The regions with the least wind — California and the Southwest — have the highest energy storage requirements. Unlike the other regions modeled, California begins installing nuclear, even at the most lenient limit.

    As the model plays out, under the moderate cap — 50 grams of CO2 per kWh — most regions bring in nuclear power. California and the Southeast — regions with low wind capacity factors — rely on nuclear the most. In contrast, wind-rich Texas, Central, and North Central don’t incorporate nuclear yet but instead add energy storage — a less-expensive option — to their mix. There’s still a bit of natural gas everywhere, in spite of its CO2 emissions.

    Under the most restrictive cap — 25 grams of CO2 per kWh — nuclear is in the mix everywhere. The highest use of nuclear is again correlated with low wind capacity factor. Central and North Central depend on nuclear the least. All regions continue to rely on a little natural gas to keep prices from skyrocketing due to the necessary but costly nuclear component. With nuclear in the mix, the need for storage declines in most regions.

    Results of the cost analysis are also interesting. Texas, Central, and North Central all have abundant wind resources, and they can delay incorporating the costly nuclear option, so the cost of their optimized system tends to be lower than costs for the other regions. In addition, their total capacity deployment — including all sources — tends to be lower than for the other regions. California and the Southwest both rely heavily on solar, and in both regions, costs and total deployment are relatively high.

    Lessons learned

    One unexpected result is the benefit of combining solar and wind resources. The problem with relying on solar alone is obvious: “Solar energy is available only five or six hours a day, so you need to build a lot of other generating sources and abundant storage capacity,” says Gençer. But an analysis of unit-by-unit operations at an hourly resolution yielded a less-intuitive trend: While solar installations only produce power in the midday hours, wind turbines generate the most power in the nighttime hours. As a result, solar and wind power are complementary. Having both resources available is far more valuable than having either one or the other. And having both impacts the need for storage, says Gençer: “Storage really plays a role either when you’re targeting a very low carbon intensity or where your resources are mostly solar and they’re not complemented by wind.”

    Gençer notes that the target for the U.S. electricity grid is to reach net zero by 2035. But the analysis showed that reaching just 100 grams of CO2 per kWh would require at least 50 percent of system capacity to be wind and solar. “And we’re nowhere near that yet,” he says.

    Indeed, Gençer and Farnsworth’s analysis doesn’t even include a zero emissions case. Why not? As Gençer says, “We cannot reach zero.” Wind and solar are usually considered to be net zero, but that’s not true. Wind, solar, and even storage have embedded carbon emissions due to materials, manufacturing, and so on. “To go to true net zero, you’d need negative emission technologies,” explains Gençer, referring to techniques that remove carbon from the air or ocean. That observation confirms the importance of performing life-cycle assessments.

    Farnsworth voices another concern: Coal quickly disappears in all regions because natural gas is an easy substitute for coal and has lower carbon emissions. “People say they’ve decreased their carbon emissions by a lot, but most have done it by transitioning from coal to natural gas power plants,” says Farnsworth. “But with that pathway for decarbonization, you hit a wall. Once you’ve transitioned from coal to natural gas, you’ve got to do something else. You need a new strategy — a new trajectory to actually reach your decarbonization target, which most likely will involve replacing the newly installed natural gas plants.”

    Gençer makes one final point: The availability of cheap nuclear — whether fission or fusion — would completely change the picture. When the tighter caps require the use of nuclear, the cost of electricity goes up. “The impact is quite significant,” says Gençer. “When we go from 100 grams down to 25 grams of CO2 per kWh, we see a 20 percent to 30 percent increase in the cost of electricity.” If it were available, a less-expensive nuclear option would likely be included in the technology mix under more lenient caps, significantly reducing the cost of decarbonizing power grids in all regions.

    The special case of California

    In another analysis, Gençer and Farnsworth took a closer look at California. In California, about 10 percent of total demand is now met with nuclear power. Yet current power plants are scheduled for retirement very soon, and a 1976 law forbids the construction of new nuclear plants. (The state recently extended the lifetime of one nuclear plant to prevent the grid from becoming unstable.) “California is very motivated to decarbonize their grid,” says Farnsworth. “So how difficult will that be without nuclear power?”

    To find out, the researchers performed a series of analyses to investigate the challenge of decarbonizing in California with nuclear power versus without it. At 200 grams of CO2 per kWh — about a 50 percent reduction — the optimized mix and cost look the same with and without nuclear. Nuclear doesn’t appear due to its high cost. At 100 grams of CO2 per kWh — about a 75 percent reduction — nuclear does appear in the cost-optimized system, reducing the total system capacity while having little impact on the cost.

    But at 50 grams of CO2 per kWh, the ban on nuclear makes a significant difference. “Without nuclear, there’s about a 45 percent increase in total system size, which is really quite substantial,” says Farnsworth. “It’s a vastly different system, and it’s more expensive.” Indeed, the cost of electricity would increase by 7 percent.

    Going one step further, the researchers performed an analysis to determine the most decarbonized system possible in California. Without nuclear, the state could reach 40 grams of CO2 per kWh. “But when you allow for nuclear, you can get all the way down to 16 grams of CO2 per kWh,” says Farnsworth. “We found that California needs nuclear more than any other region due to its poor wind resources.”

    Impacts of a carbon tax

    One more case study examined a policy approach to incentivizing decarbonization. Instead of imposing a ceiling on carbon emissions, this strategy would tax every ton of carbon that’s emitted. Proposed taxes range from zero to $100 per ton.

    To investigate the effectiveness of different levels of carbon tax, Farnsworth and Gençer used the IG model to calculate the minimum-cost system for each region, assuming a certain cost for emitting each ton of carbon. The analyses show that a low carbon tax — just $10 per ton — significantly reduces emissions in all regions by phasing out all coal generation. In the Northwest region, for example, a carbon tax of $10 per ton decreases system emissions by 65 percent while increasing system cost by just 2.8 percent (relative to an untaxed system).
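
    The mechanism behind that coal phase-out can be sketched as a toy least-cost comparison: a per-ton tax raises each generator's effective cost in proportion to its emissions intensity, so coal's cost advantage over gas erodes quickly. The costs and intensities below are illustrative placeholders, not IG model data:

```python
# Toy illustration of how a carbon tax reorders generator costs.
GENERATORS = {
    #            ($/MWh, tons CO2/MWh) -- illustrative, not study data
    "coal":        (30.0, 1.00),
    "natural gas": (35.0, 0.45),
    "wind":        (42.0, 0.00),
}

def effective_cost(name, tax_per_ton):
    """Operating cost plus the carbon tax charged on each MWh's emissions."""
    cost, intensity = GENERATORS[name]
    return cost + tax_per_ton * intensity

def cheapest(tax_per_ton):
    return min(GENERATORS, key=lambda g: effective_cost(g, tax_per_ton))

print(cheapest(0))    # coal: cheapest with no tax
print(cheapest(10))   # natural gas: 35 + 4.5 = 39.5 beats coal at 40
print(cheapest(100))  # wind: zero-emission once the tax dominates
```

    Because coal carries roughly twice the emissions intensity of gas, even a modest tax flips the ordering; pushing gas below zero-carbon sources takes a much larger tax, matching the slow, steady decline the study observes after coal exits.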

    After coal has been phased out of all regions, every increase in the carbon tax brings a slow but steady linear decrease in emissions and a linear increase in cost. But the rates of those changes vary from region to region. For example, the rate of decrease in emissions for each added tax dollar is far lower in the Central region than in the Northwest, largely due to the Central region’s already low emissions intensity without a carbon tax. Indeed, the Central region without a carbon tax has a lower emissions intensity than the Northwest region with a tax of $100 per ton.

    As Farnsworth summarizes, “A low carbon tax — just $10 per ton — is very effective in quickly incentivizing the replacement of coal with natural gas. After that, it really just incentivizes the replacement of natural gas technologies with more renewables and more energy storage.” She concludes, “If you’re looking to get rid of coal, I would recommend a carbon tax.”

    Future extensions of IG

    The researchers have already added hydroelectric to the generating options in the IG model, and they are now planning further extensions. For example, they will include additional regions for analysis, add other long-term energy storage options, and make changes that allow analyses to take into account the generating infrastructure that already exists. Also, they will use the model to examine the cost and value of interregional transmission to take advantage of the diversity of available renewable resources.

    Farnsworth emphasizes that the analyses reported here are just samples of what’s possible using the IG model. The model is a web-based tool that includes embedded data covering the whole United States, and the output from an analysis includes an easy-to-understand display of the required installations, hourly operation, and overall techno-economic analysis and life-cycle assessment results. “The user is able to go in and explore a vast number of scenarios with no data collection or pre-processing,” she says. “There’s no barrier to begin using the tool. You can just hop on and start exploring your options so you can make an informed decision about the best path forward.”

    This work was supported by the International Energy Agency Gas and Oil Technology Collaboration Program and the MIT Energy Initiative Low-Carbon Energy Centers.

    This article appears in the Winter 2024 issue of Energy Futures, the magazine of the MIT Energy Initiative.


    Power when the sun doesn’t shine

    In 2016, at the huge Houston energy conference CERAWeek, MIT materials scientist Yet-Ming Chiang found himself talking to a Tesla executive about a thorny problem: how to store the output of solar panels and wind turbines for long durations.        

    Chiang, the Kyocera Professor of Materials Science and Engineering, and Mateo Jaramillo, a vice president at Tesla, knew that utilities lacked a cost-effective way to store renewable energy to cover peak levels of demand and to bridge the gaps during windless and cloudy days. They also knew that the scarcity of raw materials used in conventional energy storage devices needed to be addressed if renewables were ever going to displace fossil fuels on the grid at scale.

    Energy storage technologies can facilitate access to renewable energy sources, boost the stability and reliability of power grids, and ultimately accelerate grid decarbonization. The global market for these systems — essentially large batteries — is expected to grow tremendously in the coming years. A study by the nonprofit LDES (Long Duration Energy Storage) Council pegs the long-duration energy storage market at between 80 and 140 terawatt-hours by 2040. “That’s a really big number,” Chiang notes. “Every 10 people on the planet will need access to the equivalent of one EV [electric vehicle] battery to support their energy needs.”

    In 2017, one year after they met in Houston, Chiang and Jaramillo joined forces to co-found Form Energy in Somerville, Massachusetts, with MIT graduates Marco Ferrara SM ’06, PhD ’08 and William Woodford PhD ’13, and energy storage veteran Ted Wiley.

    “There is a burgeoning market for electrical energy storage because we want to achieve decarbonization as fast and as cost-effectively as possible,” says Ferrara, Form’s senior vice president in charge of software and analytics.

    Investors agreed. Over the next six years, Form Energy would raise more than $800 million in venture capital.

    Bridging gaps

    The simplest battery consists of an anode, a cathode, and an electrolyte. During discharge, ions travel through the electrolyte while electrons flow through the external circuit from the negative anode to the positive cathode. During charge, an external voltage reverses the process: the anode becomes the positive terminal, the cathode becomes the negative terminal, and electrons move back to where they started. Materials used for the anode, cathode, and electrolyte determine the battery’s weight, power, and cost “entitlement,” which is the total cost at the component level.

    During the 1980s and 1990s, the use of lithium revolutionized batteries, making them smaller, lighter, and able to hold a charge for longer. The storage devices Form Energy has devised are rechargeable batteries based on iron, which has several advantages over lithium. A big one is cost.

    Chiang once declared to the MIT Club of Northern California, “I love lithium-ion.” Two of the four MIT spinoffs Chiang founded center on innovative lithium-ion batteries. But at hundreds of dollars a kilowatt-hour (kWh) and with a storage capacity typically measured in hours, lithium-ion was ill-suited for the use he now had in mind.

    The approach Chiang envisioned had to be cost-effective enough to boost the attractiveness of renewables. Making solar and wind energy reliable enough for millions of customers meant storing it long enough to fill the gaps created by extreme weather, grid outages, lulls in the wind, and stretches of cloudy days.

    To be competitive with legacy power plants, Chiang’s method had to come in at around $20 per kilowatt-hour of stored energy — one-tenth the cost of lithium-ion battery storage.

    But how to transition from expensive batteries that store and discharge over a couple of hours to some as-yet-undefined, cheap, longer-duration technology?

    “One big ball of iron”

    That’s where Ferrara comes in. Ferrara has a PhD in nuclear engineering from MIT and a PhD in electrical engineering and computer science from the University of L’Aquila in his native Italy. In 2017, as a research affiliate at the MIT Department of Materials Science and Engineering, he worked with Chiang to model the grid’s need to manage renewables’ intermittency.

    How intermittent depends on where you are. In the United States, for instance, there’s the windy Great Plains; the sun-drenched, relatively low-wind deserts of Arizona, New Mexico, and Nevada; and the often-cloudy Pacific Northwest.

    Ferrara, in collaboration with Professor Jessika Trancik of MIT’s Institute for Data, Systems, and Society and her MIT team, modeled four representative locations in the United States and concluded that energy storage with capacity costs below roughly $20/kWh and discharge durations of multiple days would allow a wind-solar mix to provide cost-competitive, firm electricity in resource-abundant locations.

    Now that they had cost and duration targets, they turned their attention to materials. At the price point Form Energy was aiming for, lithium was out of the question. Chiang looked at plentiful and cheap sulfur, but a battery combining sulfur, sodium, water, and air posed technical challenges.

    Thomas Edison once used iron as an electrode, and iron-air batteries were first studied in the 1960s. They were too heavy to make good transportation batteries. But this time, Chiang and team were looking at a battery that sat on the ground, so weight didn’t matter. Their priorities were cost and availability.

    “Iron is produced, mined, and processed on every continent,” Chiang says. “The Earth is one big ball of iron. We wouldn’t ever have to worry about even the most ambitious projections of how much storage the world might use by mid-century.” If Form ever moves into the residential market, “it’ll be the safest battery you’ve ever parked at your house,” Chiang laughs. “Just iron, air, and water.”

    Scientists call it reversible rusting. While discharging, the battery takes in oxygen and converts iron to rust. Applying an electrical current converts the rusty pellets back to iron, and the battery “breathes out” oxygen as it charges. “In chemical terms, you have iron, and it becomes iron hydroxide,” Chiang says. “That means electrons were extracted. You get those electrons to go through the external circuit, and now you have a battery.”
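In textbook terms, the alkaline iron-air chemistry Chiang describes is often written as the following half-reactions. This is a simplified sketch of the commonly cited iron-air system; Form's exact cell chemistry may differ in its details.

```latex
% Discharge ("rusting"); charging runs these reactions in reverse.
\begin{align*}
\text{Anode:}\quad & \mathrm{Fe} + 2\,\mathrm{OH^-} \rightarrow \mathrm{Fe(OH)_2} + 2\,e^- \\
\text{Cathode:}\quad & \mathrm{O_2} + 2\,\mathrm{H_2O} + 4\,e^- \rightarrow 4\,\mathrm{OH^-} \\
\text{Overall:}\quad & 2\,\mathrm{Fe} + \mathrm{O_2} + 2\,\mathrm{H_2O} \rightarrow 2\,\mathrm{Fe(OH)_2}
\end{align*}
```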

    Form Energy’s battery modules are approximately the size of a washer-and-dryer unit. They are stacked in 40-foot containers, and several containers are electrically connected with power conversion systems to build storage plants that can cover several acres.

    The right place at the right time

    The modules don’t look or act like anything utilities have contracted for before.

    That’s one of Form’s key challenges. “There is not widespread knowledge of needing these new tools for decarbonized grids,” Ferrara says. “That’s not the way utilities have typically planned. They’re looking at all the tools in the toolkit that exist today, which may not contemplate a multi-day energy storage asset.”

    Form Energy’s customers are largely traditional power companies seeking to expand their portfolios of renewable electricity. Some are in the process of decommissioning coal plants and shifting to renewables.

    Ferrara’s research pinpointing the need for very low-cost multi-day storage provides key data for power suppliers seeking to determine the most cost-effective way to integrate more renewable energy.

    Using the same modeling techniques, Ferrara and team show potential customers how the technology fits in with their existing system, how it competes with other technologies, and how, in some cases, it can operate synergistically with other storage technologies.

    “They may need a portfolio of storage technologies to fully balance renewables on different timescales of intermittency,” he says. But other than the technology developed at Form, “there isn’t much out there, certainly not within the cost entitlement of what we’re bringing to market.”  Thanks to Chiang and Jaramillo’s chance encounter in Houston, Form has a several-year lead on other companies working to address this challenge. 

    In June 2023, Form Energy closed its biggest deal to date for a single project: Georgia Power’s order for a 15-megawatt/1,500-megawatt-hour system. That order brings Form’s total amount of energy storage under contracts with utility customers to 40 megawatts/4 gigawatt-hours. To meet the demand, Form is building a new commercial-scale battery manufacturing facility in West Virginia.
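The duration implied by that order follows from simple arithmetic: energy capacity divided by power rating.

```python
# Storage duration = energy capacity / power rating.
# Figures from the Georgia Power order cited above.

power_mw = 15        # megawatts
energy_mwh = 1500    # megawatt-hours

duration_hours = energy_mwh / power_mw
duration_days = duration_hours / 24

print(duration_hours)           # 100.0 hours at full power
print(round(duration_days, 1))  # just over four days, i.e., multi-day storage
```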

    The fact that Form Energy is creating jobs in an area that lost more than 10,000 steel jobs over the past decade is not lost on Chiang. “And these new jobs are in clean tech. It’s super exciting to me personally to be doing something that benefits communities outside of our traditional technology centers.

    “This is the right time for so many reasons,” Chiang says. He says he and his Form Energy co-founders feel “tremendous urgency to get these batteries out into the world.”

    This article appears in the Winter 2024 issue of Energy Futures, the magazine of the MIT Energy Initiative.


    Cobalt-free batteries could power cars of the future

    Many electric vehicles are powered by batteries that contain cobalt — a metal that carries high financial, environmental, and social costs.

    MIT researchers have now designed a battery material that could offer a more sustainable way to power electric cars. The new lithium-ion battery includes a cathode based on organic materials, instead of cobalt or nickel (another metal often used in lithium-ion batteries).

    In a new study, the researchers showed that this material, which could be produced at much lower cost than cobalt-containing batteries, can conduct electricity at similar rates as cobalt batteries. The new battery also has comparable storage capacity and can be charged up faster than cobalt batteries, the researchers report.

    “I think this material could have a big impact because it works really well,” says Mircea Dincă, the W.M. Keck Professor of Energy at MIT. “It is already competitive with incumbent technologies, and it can save a lot of the cost and pain and environmental issues related to mining the metals that currently go into batteries.”

    Dincă is the senior author of the study, which appears today in the journal ACS Central Science. Tianyang Chen PhD ’23 and Harish Banda, a former MIT postdoc, are the lead authors of the paper. Other authors include Jiande Wang, an MIT postdoc; Julius Oppenheim, an MIT graduate student; and Alessandro Franceschi, a research fellow at the University of Bologna.

    Alternatives to cobalt

    Most electric cars are powered by lithium-ion batteries, a type of battery that is recharged when lithium ions flow from a positively charged electrode, called a cathode, to a negatively charged electrode, called an anode. In most lithium-ion batteries, the cathode contains cobalt, a metal that offers high stability and energy density.

    However, cobalt has significant downsides. A scarce metal, its price can fluctuate dramatically, and much of the world’s cobalt deposits are located in politically unstable countries. Cobalt extraction creates hazardous working conditions and generates toxic waste that contaminates land, air, and water surrounding the mines.

    “Cobalt batteries can store a lot of energy, and they have all of the features that people care about in terms of performance, but they have the issue of not being widely available, and the cost fluctuates broadly with commodity prices. And, as you transition to a much higher proportion of electrified vehicles in the consumer market, it’s certainly going to get more expensive,” Dincă says.

    Because of the many drawbacks to cobalt, a great deal of research has gone into trying to develop alternative battery materials. One such material is lithium-iron-phosphate (LFP), which some car manufacturers are beginning to use in electric vehicles. While practical, LFP has only about half the energy density of cobalt and nickel batteries.

    Organic materials are another appealing option, but so far most of them have not been able to match the conductivity, storage capacity, and lifetime of cobalt-containing batteries. Because of their low conductivity, such materials typically need to be mixed with binders such as polymers, which help them maintain a conductive network. These binders, which make up at least 50 percent of the overall material, bring down the battery’s storage capacity.

    About six years ago, Dincă’s lab began working on a project, funded by Lamborghini, to develop an organic battery that could be used to power electric cars. While working on porous materials that were partly organic and partly inorganic, Dincă and his students realized that a fully organic material they had made appeared to be a strong conductor.

    This material consists of many layers of TAQ (bis-tetraaminobenzoquinone), an organic small molecule that contains three fused hexagonal rings. These layers can extend outward in every direction, forming a structure similar to graphite. Within the molecules are chemical groups called quinones, which are the electron reservoirs, and amines, which help the material to form strong hydrogen bonds.

    Those hydrogen bonds make the material highly stable and also very insoluble. That insolubility is important because it prevents the material from dissolving into the battery electrolyte, as some organic battery materials do, thereby extending its lifetime.

    “One of the main methods of degradation for organic materials is that they simply dissolve into the battery electrolyte and cross over to the other side of the battery, essentially creating a short circuit. If you make the material completely insoluble, that process doesn’t happen, so we can go to over 2,000 charge cycles with minimal degradation,” Dincă says.

    Strong performance

    Tests of this material showed that its conductivity and storage capacity were comparable to that of traditional cobalt-containing batteries. Also, batteries with a TAQ cathode can be charged and discharged faster than existing batteries, which could speed up the charging rate for electric vehicles.

    To stabilize the organic material and increase its ability to adhere to the battery’s current collector, which is made of copper or aluminum, the researchers added filler materials such as cellulose and rubber. These fillers make up less than one-tenth of the overall cathode composite, so they don’t significantly reduce the battery’s storage capacity.

    These fillers also extend the lifetime of the battery cathode by preventing it from cracking when lithium ions flow into the cathode as the battery charges.

    The primary materials needed to manufacture this type of cathode are a quinone precursor and an amine precursor, which are already commercially available and produced in large quantities as commodity chemicals. The researchers estimate that the material cost of assembling these organic batteries could be about one-third to one-half the cost of cobalt batteries.

    Lamborghini has licensed the patent on the technology. Dincă’s lab plans to continue developing alternative battery materials and is exploring possible replacement of lithium with sodium or magnesium, which are cheaper and more abundant than lithium.


    Study reveals a reaction at the heart of many renewable energy technologies

    A key chemical reaction — in which the movement of protons between the surface of an electrode and an electrolyte drives an electric current — is a critical step in many energy technologies, including fuel cells and the electrolyzers used to produce hydrogen gas.

    For the first time, MIT chemists have mapped out in detail how these proton-coupled electron transfers happen at an electrode surface. Their results could help researchers design more efficient fuel cells, batteries, or other energy technologies.

    “Our advance in this paper was studying and understanding the nature of how these electrons and protons couple at a surface site, which is relevant for catalytic reactions that are important in the context of energy conversion devices or catalytic reactions,” says Yogesh Surendranath, a professor of chemistry and chemical engineering at MIT and the senior author of the study.

    Among their findings, the researchers were able to trace exactly how changes in the pH of the electrolyte solution surrounding an electrode affect the rate of proton motion and electron flow within the electrode.

    MIT graduate student Noah Lewis is the lead author of the paper, which appears today in Nature Chemistry. Ryan Bisbey, a former MIT postdoc; Karl Westendorff, an MIT graduate student; and Alexander Soudackov, a research scientist at Yale University, are also authors of the paper.

    Passing protons

    Proton-coupled electron transfer occurs when a molecule, often water or an acid, transfers a proton to another molecule or to an electrode surface, which stimulates the proton acceptor to also take up an electron. This kind of reaction has been harnessed for many energy applications.

    “These proton-coupled electron transfer reactions are ubiquitous. They are often key steps in catalytic mechanisms, and are particularly important for energy conversion processes such as hydrogen generation or fuel cell catalysis,” Surendranath says.

    In a hydrogen-generating electrolyzer, this approach is used to remove protons from water and add electrons to the protons to form hydrogen gas. In a fuel cell, electricity is generated when protons and electrons are removed from hydrogen gas and added to oxygen to form water.
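The overall reactions in the two devices are each other's reverse, a simplified summary of the paragraph above (the proton-coupled electron transfer steps the study examines are intermediates within these net reactions):

```latex
\begin{align*}
\text{Electrolyzer:}\quad & 2\,\mathrm{H_2O} \rightarrow 2\,\mathrm{H_2} + \mathrm{O_2} \\
\text{Fuel cell:}\quad & 2\,\mathrm{H_2} + \mathrm{O_2} \rightarrow 2\,\mathrm{H_2O}
\end{align*}
```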

    Proton-coupled electron transfer is common in many other types of chemical reactions, for example, carbon dioxide reduction (the conversion of carbon dioxide into chemical fuels by adding electrons and protons). Scientists have learned a great deal about how these reactions occur when the proton acceptors are molecules, because they can precisely control the structure of each molecule and observe how electrons and protons pass between them. However, when proton-coupled electron transfer occurs at the surface of an electrode, the process is much more difficult to study because electrode surfaces are usually very heterogeneous, with many different sites that a proton could potentially bind to.

    To overcome that obstacle, the MIT team developed a way to design electrode surfaces that gives them much more precise control over the composition of the electrode surface. Their electrodes consist of sheets of graphene with organic, ring-containing compounds attached to the surface. At the end of each of these organic molecules is a negatively charged oxygen ion that can accept protons from the surrounding solution, which causes an electron to flow from the circuit into the graphitic surface.

    “We can create an electrode that doesn’t consist of a wide diversity of sites but is a uniform array of a single type of very well-defined sites that can each bind a proton with the same affinity,” Surendranath says. “Since we have these very well-defined sites, what this allowed us to do was really unravel the kinetics of these processes.”

    Using this system, the researchers were able to measure the flow of electrical current to the electrodes, which allowed them to calculate the rate of proton transfer to the oxygen ion at the surface at equilibrium — the state when the rates of proton donation to the surface and proton transfer back to solution from the surface are equal. They found that the pH of the surrounding solution has a significant effect on this rate: The highest rates occurred at the extreme ends of the pH scale — pH 0, the most acidic, and pH 14, the most basic.

    To explain these results, the researchers developed a model based on two possible reactions that can occur at the electrode. In the first, hydronium ions (H3O+), which are in high concentration in strongly acidic solutions, deliver protons to the surface oxygen ions, generating water. In the second, water delivers protons to the surface oxygen ions, generating hydroxide ions (OH-), which are in high concentration in strongly basic solutions.
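Writing the surface oxygen site as R–O⁻ (a notation assumed here for illustration, not the paper's own), the two pathways can be sketched as:

```latex
\begin{align*}
\text{Acidic:}\quad & \mathrm{H_3O^+} + \mathrm{R{-}O^-} \rightarrow \mathrm{H_2O} + \mathrm{R{-}OH} \\
\text{Basic:}\quad & \mathrm{H_2O} + \mathrm{R{-}O^-} \rightarrow \mathrm{OH^-} + \mathrm{R{-}OH}
\end{align*}
```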

    However, the rate at pH 0 is about four times faster than the rate at pH 14, in part because hydronium gives up protons at a faster rate than water.

    A reaction to reconsider

    The researchers also discovered, to their surprise, that the two reactions have equal rates not at neutral pH 7, where hydronium and hydroxide concentrations are equal, but at pH 10, where the concentration of hydroxide ions is 1 million times that of hydronium. The model suggests this is because the forward reaction involving proton donation from hydronium or water contributes more to the overall rate than the backward reaction involving proton removal by water or hydroxide.

    Existing models of how these reactions occur at electrode surfaces assume that the forward and backward reactions contribute equally to the overall rate, so the new findings suggest that those models may need to be reconsidered, the researchers say.

    “That’s the default assumption, that the forward and reverse reactions contribute equally to the reaction rate,” Surendranath says. “Our finding is really eye-opening because it means that the assumption that people are using to analyze everything from fuel cell catalysis to hydrogen evolution may be something we need to revisit.”

    The researchers are now using their experimental setup to study how adding different types of ions to the electrolyte solution surrounding the electrode may speed up or slow down the rate of proton-coupled electron flow.

    “With our system, we know that our sites are constant and not affecting each other, so we can read out what the change in the solution is doing to the reaction at the surface,” Lewis says.

    The research was funded by the U.S. Department of Energy Office of Basic Energy Sciences.


    A green hydrogen innovation for clean energy

    Renewable energy today — mainly derived from the sun or wind — depends on batteries for storage. While costs have dropped in recent years, the pursuit of more efficient means of storing renewable power continues.

    “All of these technologies, unfortunately, have a long way to go,” said Sossina Haile SB ’86, PhD ’92, the Walter P. Murphy Professor of Materials Science and Engineering at Northwestern University, at a recent talk at MIT. She was the speaker at the fall 2023 Wulff Lecture, an event hosted by the Department of Materials Science and Engineering (DMSE) to ignite enthusiasm for the discipline.

    To add to the renewable energy mix — and help quicken the pace to a sustainable future — Haile is working on an approach based on hydrogen fuel cells, particularly as an eco-friendly way to power cars. Fuel cells, like batteries, produce electricity from chemical reactions, but unlike batteries they don’t run down so long as fuel is supplied.

    To generate power, the hydrogen must be pure — not attached to another molecule. Most methods of producing hydrogen today require burning fossil fuel, which generates planet-heating carbon emissions. Haile proposes a “green” process using renewable electricity to extract the hydrogen from steam.

    When hydrogen is used in a fuel cell, “you have water as the product, and that’s the beautiful zero emissions,” Haile said, referring to the renewable energy production cycle that is set in motion.

    Ammonia fuels hydrogen’s potential

    Hydrogen is not yet widely used as a fuel because it’s difficult to transport. For one, it has low energy density, meaning a large volume of hydrogen gas is needed to store a large amount of energy. And storing it is challenging because hydrogen’s tiny molecules can infiltrate metal tanks or pipes, causing cracks and gas leakage.

    Haile’s solution for transporting hydrogen is using ammonia to “carry” it. Ammonia is three parts hydrogen and one part nitrogen, so the hydrogen needs to be separated from the nitrogen before it can be used in the kind of fuel cells that can power cars.
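The stoichiometry of that split is simply the reverse of ammonia synthesis: every two ammonia molecules yield three molecules of hydrogen and one of nitrogen.

```latex
% Ammonia cracking: two NH3 yield one N2 and three H2.
2\,\mathrm{NH_3} \rightarrow \mathrm{N_2} + 3\,\mathrm{H_2}
```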

    Ammonia has some advantages, including using existing pipelines and a high transmission capacity, Haile said — so more power can be transmitted at any given time.

    To extract the hydrogen from ammonia, Haile has built devices that look a lot like fuel cells, with cesium dihydrogen phosphate as an electrolyte. The “superprotonic” material displays high proton conductivity — it allows protons, or positively charged particles, to move through it. This is important for hydrogen, which has just a proton and an electron. By letting only protons through the electrolyte, the device strips hydrogen from the ammonia, leaving behind the nitrogen.

    The material has other benefits, too, Haile said: “It’s inexpensive, nontoxic, earth-abundant — all these good things that you want to have when you think about a sustainable energy technology.”


    2023 Fall Wulff Lecture. Video: Department of Materials Science and Engineering

    Sparking interest — and hope

    Haile’s talk piqued interest in the audience, which nearly filled MIT’s 6-120 auditorium, a room that seats about 150 people.

    Materials science and engineering major Nikhita Law heard hope in Haile’s talk for a more sustainable future.

    “A major problem in making our energy system sustainable is finding ways to store energy from renewables,” Law says. Even if hydrogen-powered cars are not as wide-scale as lithium-battery-powered electric cars, “a permanent energy storage station where we convert electricity into hydrogen and convert it back seems like it makes more sense than mining more lithium.”

    Another DMSE student, senior Daniel Tong, learned about the challenges involved in transporting hydrogen at another seminar and was curious to learn more. “This was something I hadn’t thought of: Can you carry hydrogen more effectively in a different form? That’s really cool.”

    He adds that talks like the Wulff Lecture are helpful in keeping people up to date in a wide-ranging, interdisciplinary field such as materials science and engineering, which spans chemistry, physics, engineering, and other disciplines. “This is a really good way to get exposed to different parts of materials science. There are so many more facets than you know of.”

    In her talk, Haile encouraged audience members to get involved in sustainability research.

    “There’s lots of room for further insight and materials discovery,” she said.

    Haile concluded by underscoring the challenges faced by developing countries in dealing with climate change impacts, particularly those near the equator where there isn’t adequate infrastructure to deal with big swings in precipitation and temperature. For the people who aren’t driven to solve problems that affect people on the other side of the world, Haile offered some extra motivation.

    “I’m sure many of you enjoy coffee. This is going to put the coffee crops in jeopardy as well,” she said.


    A civil discourse on climate change

    A new MIT initiative designed to encourage open dialogue on campus kicked off with a conversation focused on how to address challenges related to climate change.

    “Climate Change: Existential Threat or Bump in the Road” featured Steve Koonin, theoretical physicist and former U.S. undersecretary for science during the Obama administration, and Kerry Emanuel, professor emeritus of atmospheric science at MIT. A crowd of roughly 130 students, staff, and faculty gathered in an MIT lecture hall for the discussion on Tuesday, Oct. 24. 

    “The bump is strongly favored,” Koonin said when the talk began, referring to his contention that climate change was a “bump in the road” rather than an existential threat. After proposing a future in which we could potentially expect continued growth in America’s gross domestic product despite transportation and infrastructure challenges related to climate change, he concluded that investments in nuclear energy and capacity increases related to storing wind- and solar-generated energy could help mitigate climate-related phenomena. 

    Emanuel, while mostly agreeing with Koonin’s assessment of climate challenges and potential solutions, cautioned against underselling the threat of human-aided climate change.

    “Humanity’s adaptation to climate stability hasn’t prepared us to effectively manage massive increases in temperature and associated effects,” he argued. “We’re poorly adapted to less-frequent events like those we’re observing now.”

    Decarbonization, Emanuel noted, can help mitigate global conflicts related to fossil fuel usage. “Carbonization kills between 8 and 9 million people annually,” he said.

    The conversation on climate change is one of several planned on campus this academic year. The speaker series is one part of “Civil Discourse in the Classroom and Beyond,” an initiative being led by MIT philosophers Alex Byrne and Brad Skow. The two-year project is meant to encourage the open exchange of ideas inside and outside college and university classrooms. 

    The speaker series pairs external thought leaders with MIT faculty to encourage the interrogation and debate of all kinds of ideas.

    Finding common ground

    At the talk on climate change, both Koonin and Emanuel recommended a slow and steady approach to mitigation efforts, reminding attendees that, for example, developing nations can’t afford to take a developed world approach to climate change. 

    “These people have immediate needs to meet,” Koonin reminded the audience, “which can include fossil fuel use.”

    Both Koonin and Emanuel recommended a series of steps to assist with both climate change mitigation and effective messaging:

    Sustain and improve climate science — continue to investigate and report findings.
    Improve climate communications for non-experts — tell an easy-to-understand and cohesive story.
    Focus on reliability and affordability before mitigation — don’t undertake massive efforts that may disrupt existing energy transmission infrastructure.
    Adopt a “graceful” approach to decarbonization — consider impacts as broadly as possible.
    Don’t constrain energy supply in the developing world.
    Increase focus on developing and delivering alternative responses — consider the potential to scale power generation and delivery methods, such as nuclear energy.

    Mitigating climate risk, both concluded, requires political will, careful consideration, and an improved technical approach to energy policy.

    “We have to learn to deal rationally with climate risk in a polarized society,” Koonin offered.

    The audience asked both speakers questions about impacts on nonhuman species (“We don’t know but we should,” both shared); nuclear fusion (“There isn’t enough tritium to effectively scale the widespread development of fusion-based energy; perhaps in 30 to 40 years,” Koonin suggested); and the planetary boundaries framework (“There’s good science underway in this space and I’m curious to see where it’s headed,” said Emanuel).

    “The event was a great success,” Byrne said afterward. “The audience was engaged, and there was a good mix of faculty and students.”

    “One surprising thing,” Skow added, “was that both Koonin and Emanuel were down on wind and solar power, [especially since] the idea that we need to transition to both is certainly in the air.”

    More conversations

    A second speaker series event, held earlier this month, was “Has Feminism Made Progress?” with Mary Harrington, author of “Feminism Against Progress,” and Anne McCants, MIT professor of history. An additional discussion planned for spring 2024 will cover the public health response to Covid-19.

    Discussions from the speaker series will appear as special episodes on “The Good Fight,” a podcast hosted by Johns Hopkins University political scientist Yascha Mounk.

    The Civil Discourse project is made possible in part by funding from the Arthur Vining Davis Foundations and a collaboration between the MIT History Section and Concourse, a program featuring an integrated, cross-disciplinary approach to investigating some of humanity’s most interesting questions.

    The Civil Discourse initiative includes two components: the speaker series open to the MIT community, and seminars where students can discuss freedom of expression and develop skills for successfully engaging in civil discourse.