More stories

  • Making the case for hydrogen in a zero-carbon economy

    As the United States races to achieve its goal of zero-carbon electricity generation by 2035, energy providers are swiftly ramping up renewable resources such as solar and wind. But because these technologies churn out electrons only when the sun shines and the wind blows, they need backup from other energy sources, especially during seasons of high electric demand. Currently, plants burning fossil fuels, primarily natural gas, fill in the gaps.

    “As we move to more and more renewable penetration, this intermittency will make a greater impact on the electric power system,” says Emre Gençer, a research scientist at the MIT Energy Initiative (MITEI). That’s because grid operators will increasingly resort to fossil-fuel-based “peaker” plants that compensate for the intermittency of the variable renewable energy (VRE) sources of sun and wind. “If we’re to achieve zero-carbon electricity, we must replace all greenhouse gas-emitting sources,” Gençer says.

    Low- and zero-carbon alternatives to greenhouse-gas-emitting peaker plants are in development, such as arrays of lithium-ion batteries and hydrogen power generation. But each of these evolving technologies comes with its own set of advantages and constraints, and it has proven difficult to frame the debate about these options in a way that’s useful for policymakers, investors, and utilities engaged in the clean energy transition.

    Now, Gençer and Drake D. Hernandez SM ’21 have come up with a model that makes it possible to pin down the pros and cons of these peaker-plant alternatives with greater precision. Their hybrid technological and economic analysis, based on a detailed inventory of California’s power system, was published online last month in Applied Energy. While their work focuses on the most cost-effective solutions for replacing peaker power plants, it also contains insights intended to contribute to the larger conversation about transforming energy systems.

    “Our study’s essential takeaway is that hydrogen-fired power generation can be the more economical option when compared to lithium-ion batteries — even today, when the costs of hydrogen production, transmission, and storage are very high,” says Hernandez, who worked on the study while a graduate research assistant for MITEI. Adds Gençer, “If there is a place for hydrogen in the cases we analyzed, that suggests there is a promising role for hydrogen to play in the energy transition.”

    Adding up the costs

    California serves as a prime example of a swiftly shifting power system. The state draws more than 20 percent of its electricity from solar and approximately 7 percent from wind, with more VRE coming online rapidly. This means its peaker plants already play a pivotal role, coming online each evening when the sun goes down or when events such as heat waves drive up electricity use for days at a time.

    “We looked at all the peaker plants in California,” recounts Gençer. “We wanted to know the cost of electricity if we replaced them with hydrogen-fired turbines or with lithium-ion batteries.” The researchers used a core metric called the levelized cost of electricity (LCOE) as a way of comparing the costs of different technologies to each other. LCOE measures the average total cost of building and operating a particular energy-generating asset per unit of total electricity generated over the hypothetical lifetime of that asset.

    Selecting 2019 as their base study year, the team looked at the costs of running natural gas-fired peaker plants, which they defined as plants operating 15 percent of the year in response to gaps in intermittent renewable electricity. In addition, they determined the amount of carbon dioxide released by these plants and the expense of abating these emissions. Much of this information was publicly available.
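    To make the LCOE comparison concrete, here is a minimal sketch of the calculation; the capital cost, operating cost, discount rate, and plant size below are illustrative placeholders, not figures from the study.

    ```python
    # Minimal LCOE sketch: discounted lifetime costs divided by discounted lifetime output.
    # All numeric inputs are illustrative placeholders, not values from the study.

    def lcoe(capex, annual_opex, annual_mwh, lifetime_years, discount_rate):
        """Levelized cost of electricity in dollars per MWh."""
        discounted_cost = capex
        discounted_energy = 0.0
        for year in range(1, lifetime_years + 1):
            factor = (1 + discount_rate) ** -year
            discounted_cost += annual_opex * factor
            discounted_energy += annual_mwh * factor
        return discounted_cost / discounted_energy

    # A hypothetical 100 MW peaker replacement running 15 percent of the hours in a year:
    annual_mwh = 100 * 8760 * 0.15
    print(f"LCOE: ${lcoe(150e6, 5e6, annual_mwh, 20, 0.07):,.0f}/MWh")
    ```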

    Coming up with prices for replacing peaker plants with massive arrays of lithium-ion batteries was also relatively straightforward: “There are no technical limitations to lithium-ion, so you can build as many as you want; but they are super expensive in terms of their footprint for energy storage and the mining required to manufacture them,” says Gençer.

    But then came the hard part: nailing down the costs of hydrogen-fired electricity generation. “The most difficult thing is finding cost assumptions for new technologies,” says Hernandez. “You can’t do this through a literature review, so we had many conversations with equipment manufacturers and plant operators.”

    The team considered two different forms of hydrogen fuel to replace natural gas: one produced through electrolyzer facilities that convert water and electricity into hydrogen, and another produced by reforming natural gas, which yields hydrogen and carbon waste that can be captured to reduce emissions. They also ran the numbers on retrofitting natural gas plants to burn hydrogen as opposed to building entirely new facilities. Their model also identifies likely locations throughout the state and the expenses involved in constructing these facilities.

    The researchers spent months compiling a giant dataset before setting out on the task of analysis. The results from their modeling were clear: “Hydrogen can be a more cost-effective alternative to lithium-ion batteries for peaking operations on a power grid,” says Hernandez. In addition, notes Gençer, “While certain technologies worked better in particular locations, we found that on average, reformed hydrogen rather than electrolytic hydrogen turned out to be the cheapest option for replacing peaker plants.”

    A tool for energy investors

    When he began this project, Gençer admits he “wasn’t hopeful” about hydrogen replacing natural gas in peaker plants. “It was kind of shocking to see in our different scenarios that there was a place for hydrogen.” That’s because the overall price tag for converting a fossil-fuel-based plant to one based on hydrogen is very high, and such conversions likely won’t take place until more sectors of the economy embrace hydrogen, whether as a fuel for transportation or for varied manufacturing and industrial purposes.

    A nascent hydrogen production infrastructure does exist, mainly in the production of ammonia for fertilizer. But enormous investments will be necessary to expand this framework to meet grid-scale needs, driven by purposeful incentives. “With any of the climate solutions proposed today, we will need a carbon tax or carbon pricing; otherwise nobody will switch to new technologies,” says Gençer.

    The researchers believe studies like theirs could help key energy stakeholders make better-informed decisions. To that end, they have integrated their analysis into SESAME, a life cycle and techno-economic assessment tool for a range of energy systems that was developed by MIT researchers. Users can leverage this sophisticated modeling environment to compare costs of energy storage and emissions from different technologies, for instance, or to determine whether it is cost-efficient to replace a natural gas-powered plant with one powered by hydrogen.

    “As utilities, industry, and investors look to decarbonize and achieve zero-emissions targets, they have to weigh the costs of investing in low-carbon technologies today against the potential impacts of climate change moving forward,” says Hernandez, who is currently a senior associate in the energy practice at Charles River Associates. Hydrogen, he believes, will become increasingly cost-competitive as its production costs decline and markets expand.

    A study group member of MITEI’s soon-to-be published Future of Storage study, Gençer knows that hydrogen alone will not usher in a zero-carbon future. But, he says, “Our research shows we need to seriously consider hydrogen in the energy transition, start thinking about key areas where hydrogen should be used, and start making the massive investments necessary.”

    Funding for this research was provided by MITEI’s Low-Carbon Energy Centers and Future of Storage study.

  • Countering climate change with cool pavements

    Pavements are an abundant urban surface, covering around 40 percent of American cities. But in addition to carrying traffic, they can also emit heat.

    Due to what’s called the urban heat island effect, densely built, impermeable surfaces like pavements can absorb solar radiation and warm up their surroundings by re-emitting that radiation as heat. This phenomenon poses a serious threat to cities. It increases air temperatures by as much as 7 degrees Fahrenheit and contributes to health and environmental risks — risks that climate change will magnify.

    In response, researchers at the MIT Concrete Sustainability Hub (MIT CSHub) are studying how a surface that ordinarily heightens urban heat islands can instead lessen their intensity. Their research focuses on “cool pavements,” which reflect more solar radiation and emit less heat than conventional paving surfaces.

    A recent study by a team of current and former MIT CSHub researchers in the journal Environmental Science & Technology outlines cool pavements and their implementation. The study found that they could lower air temperatures in Boston and Phoenix by up to 1.7 degrees Celsius (3 F) and 2.1 C (3.7 F), respectively. They would also reduce greenhouse gas emissions, cutting total emissions by up to 3 percent in Boston and 6 percent in Phoenix. Achieving these savings, however, requires that cool pavement strategies be selected according to the climate, traffic, and building configurations of each neighborhood.

    Cities like Los Angeles and Phoenix have already conducted sizeable experiments with cool pavements, but the technology is still not widely implemented. The CSHub team hopes their research can guide future cool paving projects to help cities cope with a changing climate.

    Scratching the surface

    It’s well known that darker surfaces get hotter in sunlight than lighter ones. Climate scientists use a metric called “albedo” to help describe this phenomenon.

    “Albedo is a measure of surface reflectivity,” explains Hessam AzariJafari, the paper’s lead author and a postdoc at the MIT CSHub. “Surfaces with low albedo absorb more light and tend to be darker, while high-albedo surfaces are brighter and reflect more light.”

    Albedo is central to cool pavements. Typical paving surfaces, like conventional asphalt, have a low albedo, absorbing more radiation and emitting more heat. Cool pavements, however, are made with brighter materials that reflect more than three times as much radiation and, consequently, re-emit far less heat.
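    As a back-of-the-envelope illustration of what albedo means in practice (the albedo values and irradiance below are typical textbook figures, not numbers from the study):

    ```python
    # How albedo splits incoming sunlight into reflected and absorbed power.
    # Albedo values and irradiance are typical textbook figures, not study data.
    SOLAR_IRRADIANCE = 1000.0  # W/m^2, rough peak value on a clear day

    def reflected_and_absorbed(albedo, irradiance=SOLAR_IRRADIANCE):
        reflected = albedo * irradiance
        return reflected, irradiance - reflected

    for name, albedo in [("conventional asphalt", 0.10), ("cool pavement", 0.35)]:
        r, a = reflected_and_absorbed(albedo)
        print(f"{name}: reflects {r:.0f} W/m^2, absorbs {a:.0f} W/m^2")
    ```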

    “We can build cool pavements in many different ways,” says Randolph Kirchain, a researcher in the Materials Science Laboratory and co-director of the Concrete Sustainability Hub. “Brighter materials like concrete and lighter-colored aggregates offer higher albedo, while existing asphalt pavements can be made ‘cool’ through reflective coatings.”

    CSHub researchers examined several such options in a study of Boston and Phoenix. Their analysis considered different outcomes when concrete, reflective asphalt, and reflective concrete replaced conventional asphalt pavements — which make up more than 95 percent of pavements worldwide.

    Situational awareness

    For a comprehensive understanding of the environmental benefits of cool pavements in Boston and Phoenix, researchers had to look beyond just paving materials. That’s because in addition to lowering air temperatures, cool pavements exert direct and indirect impacts on climate change.  

    “The one direct impact is radiative forcing,” notes AzariJafari. “By reflecting radiation back into the atmosphere, cool pavements exert a radiative forcing, meaning that they change the Earth’s energy balance by sending more energy out of the atmosphere — similar to the polar ice caps.”

    Cool pavements also exert complex, indirect climate change impacts by altering energy use in adjacent buildings.

    “On the one hand, by lowering temperatures, cool pavements can reduce some need for AC [air conditioning] in the summer while increasing heating demand in the winter,” says AzariJafari. “Conversely, by reflecting light — called incident radiation — onto nearby buildings, cool pavements can warm structures up, which can increase AC usage in the summer and lower heating demand in the winter.”

    What’s more, albedo effects are only a portion of the overall life cycle impacts of a cool pavement. In fact, impacts from construction and materials extraction (referred to together as embodied impacts) and the use of the pavement both dominate the life cycle. The primary use phase impact of a pavement — apart from albedo effects  — is excess fuel consumption: Pavements with smooth surfaces and stiff structures cause less excess fuel consumption in the vehicles that drive on them.
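    In other words, the albedo effects sit inside a larger ledger. A schematic tally of that bookkeeping is sketched below; the category magnitudes are invented placeholders, and only the accounting idea reflects the study.

    ```python
    # Schematic pavement life-cycle ledger: embodied impacts plus use-phase terms.
    # Magnitudes are invented placeholders; negative values denote a net climate benefit.
    life_cycle_gwp = {
        "embodied (materials + construction)": +40.0,   # t CO2e per lane-km (assumed)
        "excess vehicle fuel consumption":     +25.0,
        "radiative forcing (albedo)":          -30.0,   # reflection sends energy back to space
        "building energy demand change":        +5.0,   # sign depends on the neighborhood
    }

    net = sum(life_cycle_gwp.values())
    print(f"net life-cycle impact: {net:+.1f} t CO2e per lane-km (placeholder units)")
    ```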

    Assessing the climate-change impacts of cool pavements, then, is an intricate process — one involving many trade-offs. In their study, the researchers sought to analyze and measure them.

    A full reflection

    To determine the ideal implementation of cool pavements in Boston and Phoenix, researchers investigated the life cycle impacts of shifting from conventional asphalt pavements to three cool pavement options: reflective asphalt, concrete, and reflective concrete.

    To do this, they used coupled physical simulations to model buildings in thousands of hypothetical neighborhoods. Using this data, they then trained a neural network model to predict impacts based on building and neighborhood characteristics. With this tool in place, it was possible to estimate the impact of cool pavements for each of the thousands of roads and hundreds of thousands of buildings in Boston and Phoenix.
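    A minimal sketch of that surrogate-modeling workflow is below; the feature set, network size, and use of scikit-learn are assumptions for illustration, and the random arrays stand in for the physics-simulation outputs.

    ```python
    # Surrogate modeling sketch: fit a small neural network to simulated neighborhoods,
    # then predict impacts for many real buildings quickly. All data here are placeholders.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)

    # Stand-ins for coupled physical simulations of hypothetical neighborhoods:
    # columns = [building height, building density, pavement albedo, climate index]
    X_sim = rng.uniform(size=(5000, 4))
    y_sim = rng.normal(size=5000)   # e.g., change in building energy demand

    surrogate = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0),
    )
    surrogate.fit(X_sim, y_sim)

    # Apply the trained surrogate across a whole city's building stock.
    X_city = rng.uniform(size=(200_000, 4))
    predicted_impact = surrogate.predict(X_city)
    print(predicted_impact[:5])
    ```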

    In addition to albedo effects, they also looked at the embodied impacts for all pavement types and the effect of pavement type on vehicle excess fuel consumption due to surface qualities, stiffness, and deterioration rate.

    After assessing the life cycle impacts of each cool pavement type, the researchers calculated which material — conventional asphalt, reflective asphalt, concrete, and reflective concrete — benefited each neighborhood most. They found that while cool pavements were advantageous in Boston and Phoenix overall, the ideal materials varied greatly within and between both cities.

    “One benefit that was universal across neighborhood type and paving material was the impact of radiative forcing,” notes AzariJafari. “This was particularly the case in areas with shorter, less-dense buildings, where the effect was most pronounced.”

    Unlike radiative forcing, however, changes to building energy demand differed by location. In Boston, cool pavements reduced energy demand as often as they increased it across all neighborhoods. In Phoenix, cool pavements had a negative impact on energy demand in most census tracts due to incident radiation. When factoring in radiative forcing, though, cool pavements ultimately had a net benefit.

    Only after considering embodied emissions and impacts on fuel consumption did the ideal pavement type emerge for each neighborhood. After factoring in uncertainty over the life cycle, the researchers found that reflective concrete pavements had the best results, proving optimal in 53 percent and 73 percent of the neighborhoods in Boston and Phoenix, respectively.

    Once again, uncertainties and variations were identified. In Boston, replacing conventional asphalt pavements with a cool option was always preferred, while in Phoenix concrete pavements — reflective or not — had better outcomes due to their rigidity at high temperatures, which minimized vehicle fuel consumption. And despite the dominance of concrete in Phoenix, in 17 percent of its neighborhoods all reflective paving options proved roughly equally effective, while in 1 percent of cases conventional pavements were actually superior.

    “Though the climate change impacts we studied have proven numerous and often at odds with each other, our conclusions are unambiguous: Cool pavements could offer immense climate change mitigation benefits for both cities,” says Kirchain.

    The improvements to air temperatures would be noticeable: the team found that cool pavements would lower peak summer air temperatures in Boston by 1.7 C (3 F) and in Phoenix by 2.1 C (3.7 F). The carbon dioxide emissions reductions would likewise be impressive. Boston would decrease its carbon dioxide emissions by as much as 3 percent over 50 years while reductions in Phoenix would reach 6 percent over the same period.

    This analysis is one of the most comprehensive studies of cool pavements to date — but there’s more to investigate. Just as with pavements, it’s also possible to adjust building albedo, which may result in changes to building energy demand. Intensive grid decarbonization and the introduction of low-carbon concrete mixtures may also alter the emissions generated by cool pavements.

    There’s still lots of ground to cover for the CSHub team. But by studying cool pavements, they’ve elevated a brilliant climate change solution and opened avenues for further research and future mitigation.

    The MIT Concrete Sustainability Hub is a team of researchers from several departments across MIT working on concrete and infrastructure science, engineering, and economics. Its research is supported by the Portland Cement Association and the Ready Mixed Concrete Research and Education Foundation.

  • A peculiar state of matter in layers of semiconductors

    Scientists around the world are developing new hardware for quantum computers, a new type of device that could accelerate drug design, financial modeling, and weather prediction. These computers rely on qubits, bits of matter that can represent some combination of 1 and 0 simultaneously. The problem is that qubits are fickle, degrading into regular bits when interactions with surrounding matter interfere. But new research at MIT suggests a way to protect their states, using a phenomenon called many-body localization (MBL).

    MBL is a peculiar phase of matter, proposed decades ago, that is unlike solid or liquid. Typically, matter comes to thermal equilibrium with its environment. That’s why soup cools and ice cubes melt. But in MBL, an object consisting of many strongly interacting bodies, such as atoms, never reaches such equilibrium. Heat, like sound, consists of collective atomic vibrations and can travel in waves; an object always has such heat waves internally. But when there’s enough disorder and enough interaction in the way its atoms are arranged, the waves can become trapped, thus preventing the object from reaching equilibrium.

    MBL had been demonstrated in “optical lattices,” arrangements of atoms at very cold temperatures held in place using lasers. But such setups are impractical. MBL had also arguably been shown in solid systems, but only with very slow temporal dynamics, in which the phase’s existence is hard to prove because equilibrium might be reached if researchers could wait long enough. The MIT research found signatures of MBL in a “solid-state” system — one made of semiconductors — that would otherwise have reached equilibrium in the time it was watched.

    “It could open a new chapter in the study of quantum dynamics,” says Rahul Nandkishore, a physicist at the University of Colorado at Boulder, who was not involved in the work.

    Mingda Li, the Norman C. Rasmussen Assistant Professor of Nuclear Science and Engineering at MIT, led the new study, published in a recent issue of Nano Letters. The researchers built a system containing alternating semiconductor layers, creating a microscopic lasagna — aluminum arsenide, followed by gallium arsenide, and so on, for 600 layers, each 3 nanometers (millionths of a millimeter) thick. Between the layers they dispersed “nanodots,” 2-nanometer particles of erbium arsenide, to create disorder. The lasagna, or “superlattice,” came in three recipes: one with no nanodots, one in which nanodots covered 8 percent of each layer’s area, and one in which they covered 25 percent.

    According to Li, the team used layers of material, instead of a bulk material, to simplify the system so dissipation of heat across the planes was essentially one-dimensional. And they used nanodots, instead of mere chemical impurities, to crank up the disorder.

    To measure whether these disordered systems had reached equilibrium, the researchers probed them with X-rays. Using the Advanced Photon Source at Argonne National Lab, they shot beams of radiation at an energy of more than 20,000 electron volts and resolved the energy difference between each incoming X-ray and the X-ray after its reflection off the sample’s surface, with an energy resolution of less than one one-thousandth of an electron volt. To avoid penetrating the superlattice and hitting the underlying substrate, they shot the beam at an angle of just half a degree from parallel.

    Just as light can be measured as waves or particles, so too can heat. The quantized unit of collective atomic vibration that carries heat is called a phonon. X-rays interact with these phonons, and by measuring how X-rays reflect off the sample, the experimenters can determine whether it is in equilibrium.

    The researchers found that when the superlattice was cold — 30 kelvin, about -400 degrees Fahrenheit — and it contained nanodots, its phonons at certain frequencies remained out of equilibrium.

    More work remains to prove conclusively that MBL has been achieved, but “this new quantum phase can open up a whole new platform to explore quantum phenomena,” Li says, “with many potential applications, from thermal storage to quantum computing.”

    To create qubits, some quantum computers employ specks of matter called quantum dots. Li says quantum dots similar to his nanodots could act as qubits. Magnets could read or write their quantum states, while the many-body localization would keep them insulated from heat and other environmental factors.

    In terms of thermal storage, such a superlattice might switch in and out of an MBL phase by magnetically controlling the nanodots. It could insulate computer parts from heat at one moment, then allow parts to disperse heat when it won’t cause damage. Or it could allow heat to build up and be harnessed later for generating electricity.

    Conveniently, superlattices with nanodots can be constructed using traditional techniques for fabricating semiconductors, alongside other elements of computer chips. According to Li, “It’s a much larger design space than with chemical doping, and there are numerous applications.”

    “I am excited to see that signatures of MBL can now also be found in real material systems,” says Immanuel Bloch, scientific director at the Max-Planck-Institute of Quantum Optics, of the new work. “I believe this will help us to better understand the conditions under which MBL can be observed in different quantum many-body systems and how possible coupling to the environment affects the stability of the system. These are fundamental and important questions and the MIT experiment is an important step helping us to answer them.”

    Funding was provided by the U.S. Department of Energy’s Basic Energy Sciences program’s Neutron Scattering Program.

  • Designing better batteries for electric vehicles

    The urgent need to cut carbon emissions is prompting a rapid move toward electrified mobility and expanded deployment of solar and wind on the electric grid. If those trends escalate as expected, the need for better methods of storing electrical energy will intensify.

    “We need all the strategies we can get to address the threat of climate change,” says Elsa Olivetti PhD ’07, the Esther and Harold E. Edgerton Associate Professor in Materials Science and Engineering. “Obviously, developing technologies for grid-based storage at a large scale is critical. But for mobile applications — in particular, transportation — much research is focusing on adapting today’s lithium-ion battery to make versions that are safer, smaller, and can store more energy for their size and weight.”

    Traditional lithium-ion batteries continue to improve, but they have limitations that persist, in part because of their structure. A lithium-ion battery consists of two electrodes — one positive and one negative — sandwiched around an organic (carbon-containing) liquid. As the battery is charged and discharged, electrically charged particles (or ions) of lithium pass from one electrode to the other through the liquid electrolyte.

    One problem with that design is that at certain voltages and temperatures, the liquid electrolyte can become volatile and catch fire. “Batteries are generally safe under normal usage, but the risk is still there,” says Kevin Huang PhD ’15, a research scientist in Olivetti’s group.

    Another problem is that lithium-ion batteries are not well-suited for use in vehicles. Large, heavy battery packs take up space and increase a vehicle’s overall weight, reducing fuel efficiency. But it’s proving difficult to make today’s lithium-ion batteries smaller and lighter while maintaining their energy density — that is, the amount of energy they store per gram of weight.

    To solve those problems, researchers are changing key features of the lithium-ion battery to make an all-solid, or “solid-state,” version. They replace the liquid electrolyte in the middle with a thin, solid electrolyte that’s stable at a wide range of voltages and temperatures. With that solid electrolyte, they use a high-capacity positive electrode and a high-capacity, lithium metal negative electrode that’s far thinner than the usual layer of porous carbon. Those changes make it possible to shrink the overall battery considerably while maintaining its energy-storage capacity, thereby achieving a higher energy density.

    “Those features — enhanced safety and greater energy density — are probably the two most-often-touted advantages of a potential solid-state battery,” says Huang. He then quickly clarifies that “all of these things are prospective, hoped-for, and not necessarily realized.” Nevertheless, the possibility has many researchers scrambling to find materials and designs that can deliver on that promise.

    Thinking beyond the lab

    Researchers have come up with many intriguing options that look promising — in the lab. But Olivetti and Huang believe that additional practical considerations may be important, given the urgency of the climate change challenge. “There are always metrics that we researchers use in the lab to evaluate possible materials and processes,” says Olivetti. Examples might include energy-storage capacity and charge/discharge rate. When performing basic research — which she deems both necessary and important — those metrics are appropriate. “But if the aim is implementation, we suggest adding a few metrics that specifically address the potential for rapid scaling,” she says.

    Based on industry’s experience with current lithium-ion batteries, the MIT researchers and their colleague Gerbrand Ceder, the Daniel M. Tellep Distinguished Professor of Engineering at the University of California at Berkeley, suggest three broad questions that can help identify potential constraints on future scale-up as a result of materials selection. First, with this battery design, could materials availability, supply chains, or price volatility become a problem as production scales up? (Note that the environmental and other concerns raised by expanded mining are outside the scope of this study.) Second, will fabricating batteries from these materials involve difficult manufacturing steps during which parts are likely to fail? And third, do manufacturing measures needed to ensure a high-performance product based on these materials ultimately lower or raise the cost of the batteries produced?

    To demonstrate their approach, Olivetti, Ceder, and Huang examined some of the electrolyte chemistries and battery structures now being investigated by researchers. To select their examples, they turned to previous work in which they and their collaborators used text- and data-mining techniques to gather information on materials and processing details reported in the literature. From that database, they selected a few frequently reported options that represent a range of possibilities.

    Materials and availability

    In the world of solid inorganic electrolytes, there are two main classes of materials — the oxides, which contain oxygen, and the sulfides, which contain sulfur. Olivetti, Ceder, and Huang focused on one promising electrolyte option in each class and examined key elements of concern for each of them.

    The sulfide they considered was LGPS, which combines lithium, germanium, phosphorus, and sulfur. Based on availability considerations, they focused on the germanium, an element that raises concerns in part because it’s not generally mined on its own. Instead, it’s a byproduct produced during the mining of coal and zinc.

    To investigate its availability, the researchers looked at how much germanium was produced annually in the past six decades during coal and zinc mining and then at how much could have been produced. The outcome suggested that 100 times more germanium could have been produced, even in recent years. Given that supply potential, the availability of germanium is not likely to constrain the scale-up of a solid-state battery based on an LGPS electrolyte.

    The situation looked less promising with the researchers’ selected oxide, LLZO, which consists of lithium, lanthanum, zirconium, and oxygen. Extraction and processing of lanthanum are largely concentrated in China, and there’s limited data available, so the researchers didn’t try to analyze its availability. The other three elements are abundantly available. However, in practice, a small quantity of another element — called a dopant — must be added to make LLZO easy to process. So the team focused on tantalum, the most frequently used dopant, as the main element of concern for LLZO.

    Tantalum is produced as a byproduct of tin and niobium mining. Historical data show that the amount of tantalum produced during tin and niobium mining was much closer to the potential maximum than was the case with germanium. So the availability of tantalum is more of a concern for the possible scale-up of an LLZO-based battery.

    But knowing the availability of an element in the ground doesn’t address the steps required to get it to a manufacturer. So the researchers investigated a follow-on question concerning the supply chains for critical elements — mining, processing, refining, shipping, and so on. Assuming that abundant supplies are available, can the supply chains that deliver those materials expand quickly enough to meet the growing demand for batteries?

    In sample analyses, they looked at how much supply chains for germanium and tantalum would need to grow year to year to provide batteries for a projected fleet of electric vehicles in 2030. As an example, an electric vehicle fleet often cited as a goal for 2030 would require production of enough batteries to deliver a total of 100 gigawatt hours of energy. To meet that goal using just LGPS batteries, the supply chain for germanium would need to grow by 50 percent from year to year — a stretch, since the maximum growth rate in the past has been about 7 percent. Using just LLZO batteries, the supply chain for tantalum would need to grow by about 30 percent — a growth rate well above the historical high of about 10 percent.
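    The growth-rate figures above follow from simple compound-growth arithmetic; a minimal sketch is below, where the current and required supply values are placeholders rather than the study’s data.

    ```python
    # Compound annual growth rate needed to scale a supply chain by a target year.
    # The supply numbers are placeholders, not the study's data.

    def required_cagr(current_supply, needed_supply, years):
        """Year-over-year growth rate needed to go from current to needed supply."""
        return (needed_supply / current_supply) ** (1.0 / years) - 1.0

    # Hypothetical example: supply must grow 20-fold between 2021 and 2030 (9 years).
    print(f"required growth: {required_cagr(1.0, 20.0, 9):.1%} per year")
    ```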

    Those examples demonstrate the importance of considering both materials availability and supply chains when evaluating different solid electrolytes for their scale-up potential. “Even when the quantity of a material available isn’t a concern, as is the case with germanium, scaling all the steps in the supply chain to match the future production of electric vehicles may require a growth rate that’s literally unprecedented,” says Huang.

    Materials and processing

    In assessing the potential for scale-up of a battery design, another factor to consider is the difficulty of the manufacturing process and how it may impact cost. Fabricating a solid-state battery inevitably involves many steps, and a failure at any step raises the cost of each battery successfully produced. As Huang explains, “You’re not shipping those failed batteries; you’re throwing them away. But you’ve still spent money on the materials and time and processing.”

    As a proxy for manufacturing difficulty, Olivetti, Ceder, and Huang explored the impact of failure rate on overall cost for selected solid-state battery designs in their database. In one example, they focused on the oxide LLZO. LLZO is extremely brittle, and at the high temperatures involved in manufacturing, a large sheet that’s thin enough to use in a high-performance solid-state battery is likely to crack or warp.

    To determine the impact of such failures on cost, they modeled four key processing steps in assembling LLZO-based batteries. At each step, they calculated cost based on an assumed yield — that is, the fraction of total units that were successfully processed without failing. With the LLZO, the yield was far lower than with the other designs they examined; and, as the yield went down, the cost of each kilowatt-hour (kWh) of battery energy went up significantly. For example, when 5 percent more units failed during the final cathode heating step, cost increased by about $30/kWh — a nontrivial change considering that a commonly accepted target cost for such batteries is $100/kWh. Clearly, manufacturing difficulties can have a profound impact on the viability of a design for large-scale adoption.
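    A minimal sketch of how per-step yields compound into cost per kilowatt-hour follows; the step costs, yields, and cell capacity are invented for illustration and are not the paper’s modeling inputs.

    ```python
    # Effective $/kWh when each processing step has a yield below 1: failed units still
    # consume materials and processing time. All numbers are invented for illustration.

    def cost_per_good_kwh(step_costs, step_yields, kwh_per_cell):
        cost_per_good_unit = 0.0
        for i, step_cost in enumerate(step_costs):
            downstream_yield = 1.0
            for y in step_yields[i:]:
                downstream_yield *= y
            # Units that must enter step i to end up with one good cell at the end:
            cost_per_good_unit += step_cost / downstream_yield
        return cost_per_good_unit / kwh_per_cell

    step_costs = [20.0, 15.0, 10.0, 25.0]   # $ per unit entering each of four steps (assumed)
    print(cost_per_good_kwh(step_costs, [0.98, 0.95, 0.95, 0.90], kwh_per_cell=1.0))
    print(cost_per_good_kwh(step_costs, [0.98, 0.95, 0.95, 0.85], kwh_per_cell=1.0))  # more failures in the last step
    ```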

    Materials and performance

    One of the main challenges in designing an all-solid battery comes from “interfaces” — that is, where one component meets another. During manufacturing or operation, materials at those interfaces can become unstable. “Atoms start going places that they shouldn’t, and battery performance declines,” says Huang.

    As a result, much research is devoted to coming up with methods of stabilizing interfaces in different battery designs. Many of the methods proposed do increase performance; and as a result, the cost of the battery in dollars per kWh goes down. But implementing such solutions generally involves added materials and time, increasing the cost per kWh during large-scale manufacturing.

    To illustrate that trade-off, the researchers first examined their oxide, LLZO. Here, the goal is to stabilize the interface between the LLZO electrolyte and the negative electrode by inserting a thin layer of tin between the two. They analyzed the impacts — both positive and negative — on cost of implementing that solution. They found that adding the tin separator increases energy-storage capacity and improves performance, which reduces the unit cost in dollars/kWh. But the cost of including the tin layer exceeds the savings so that the final cost is higher than the original cost.

    In another analysis, they looked at a sulfide electrolyte called LPSCl, which consists of lithium, phosphorus, and sulfur with a bit of added chlorine. In this case, the positive electrode incorporates particles of the electrolyte material — a method of ensuring that the lithium ions can find a pathway through the electrolyte to the other electrode. However, the added electrolyte particles are not compatible with other particles in the positive electrode — another interface problem. In this case, a standard solution is to add a “binder,” another material that makes the particles stick together.

    Their analysis confirmed that without the binder, performance is poor, and the cost of the LPSCl-based battery is more than $500/kWh. Adding the binder improves performance significantly, and the cost drops by almost $300/kWh. In this case, the cost of adding the binder during manufacturing is so low that essentially all of the cost decrease from adding the binder is realized. Here, the method implemented to solve the interface problem pays off in lower costs.
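    In both cases the question reduces to whether the added manufacturing cost outweighs the added energy per cell; here is a hedged sketch of that comparison with invented numbers.

    ```python
    # Whether an interface fix "pays off" depends on added cost versus added energy per cell.
    # All numbers below are invented for illustration, not the study's figures.

    def dollars_per_kwh(cell_cost, cell_kwh):
        return cell_cost / cell_kwh

    baseline = dollars_per_kwh(cell_cost=50.0, cell_kwh=0.10)          # hypothetical cell
    with_fix = dollars_per_kwh(cell_cost=50.0 + 8.0, cell_kwh=0.13)    # fix adds cost and capacity

    print(f"baseline: ${baseline:.0f}/kWh, with interface fix: ${with_fix:.0f}/kWh")
    # If the added cost outweighs the capacity gain (as with the tin layer), $/kWh rises;
    # if the fix is cheap relative to the gain (as with the binder), $/kWh falls.
    ```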

    The researchers performed similar studies of other promising solid-state batteries reported in the literature, and their results were consistent: The choice of battery materials and processes can affect not only near-term outcomes in the lab but also the feasibility and cost of manufacturing the proposed solid-state battery at the scale needed to meet future demand. The results also showed that considering all three factors together — availability, processing needs, and battery performance — is important because there may be collective effects and trade-offs involved.

    Olivetti is proud of the range of concerns the team’s approach can probe. But she stresses that it’s not meant to replace traditional metrics used to guide materials and processing choices in the lab. “Instead, it’s meant to complement those metrics by also looking broadly at the sorts of things that could get in the way of scaling” — an important consideration given what Huang calls “the urgent ticking clock” of clean energy and climate change.

    This research was supported by the Seed Fund Program of the MIT Energy Initiative (MITEI) Low-Carbon Energy Center for Energy Storage; by Shell, a founding member of MITEI; and by the U.S. Department of Energy’s Office of Energy Efficiency and Renewable Energy, Vehicle Technologies Office, under the Advanced Battery Materials Research Program. The text mining work was supported by the National Science Foundation, the Office of Naval Research, and MITEI.

    This article appears in the Spring 2021 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • Why boiling droplets can race across hot oily surfaces

    When you’re frying something in a skillet and some droplets of water fall into the pan, you may have noticed those droplets skittering around on top of the film of hot oil. Now, that seemingly trivial phenomenon has been analyzed and understood for the first time by researchers at MIT — and may have important implications for microfluidic devices, heat transfer systems, and other useful functions.

    A droplet of boiling water on a hot surface will sometimes levitate on a thin vapor film, a well-studied phenomenon called the Leidenfrost effect. Because it is suspended on a cushion of vapor, the droplet can move across the surface with little friction. If the surface is coated with hot oil, which has much greater friction than the vapor film under a Leidenfrost droplet, the hot droplet should be expected to move much more slowly. But, counterintuitively, the series of experiments at MIT has shown that the opposite happens: The droplet on oil zooms away much more rapidly than on bare metal.

    This effect, which propels droplets across a heated oily surface 10 to 100 times faster than on bare metal, could potentially be used for self-cleaning or de-icing systems, or to propel tiny amounts of liquid through the tiny tubing of microfluidic devices used for biomedical and chemical research and testing. The findings are described today in a paper in the journal Physical Review Letters, written by graduate student Victor Julio Leon and professor of mechanical engineering Kripa Varanasi.

    In previous research, Varanasi and his team showed that it would be possible to harness this phenomenon for some of these potential applications, but the new work, producing such high velocities (approximately 50 times faster), could open up even more new uses, Varanasi says.

    After long and painstaking analysis, Leon and Varanasi were able to determine the reason for the rapid ejection of these droplets from the hot surface. Under the right conditions of high temperature, oil viscosity, and oil thickness, the oil will form a kind of thin cloak coating the outside of each water droplet. As the droplet heats up, tiny bubbles of vapor form along the interface between the droplet and the oil. Because these minuscule bubbles accumulate randomly along the droplet’s base, asymmetries develop, and the lowered friction under the bubble loosens the droplet’s attachment to the surface and propels it away.

    The oily film acts almost like the rubber of a balloon, and when the tiny vapor bubbles burst through, they impart a force and “the balloon just flies off because the air is going out one side, creating a momentum transfer,” Varanasi says. Without the oil cloak, the vapor bubbles would just flow out of the droplet in all directions, preventing self-propulsion, but the cloaking effect holds them in like the skin of the balloon.

    The phenomenon sounds simple, but it turns out to depend on a complex interplay between events happening at different timescales.

    This newly analyzed self-ejection phenomenon depends on a number of factors, including the droplet size, the thickness and viscosity of the oil film, the thermal conductivity of the surface, the surface tension of the different liquids in the system, the type of oil, and the texture of the surface.

    In their experiments, even the least viscous of the several oils they tested was about 100 times more viscous than the surrounding air, so the droplets would have been expected to move much more slowly than on the air cushion of the Leidenfrost effect. “That gives an idea of how surprising it is that this droplet is moving faster,” Leon says.

    As boiling starts, bubbles will randomly form at some nucleation site that is not right at the droplet’s center. Bubble formation will increase on that side, propelling the droplet off in one direction. So far, the researchers have not been able to control the direction of that randomly induced propulsion, but they are now working on possible ways to control the directionality in the future. “We have ideas of how to trigger the propulsion in controlled directions,” Leon says.

    Remarkably, the tests showed that even though the oil film on the surface, a silicon wafer, was only 10 to 100 microns thick — about the thickness of a human hair — its behavior didn’t match the equations for a thin film. Instead, because of the vaporization, the film was actually behaving like an infinitely deep pool of oil. “We were kind of astounded” by that finding, Leon says. While a thin film should have caused it to stick, the virtually infinite pool gave the droplet much lower friction, allowing it to move more rapidly than expected, Leon says.

    The effect depends on the fact that the formation of the tiny bubbles is a much more rapid process than the transfer of heat through the oil film, about a thousand times faster, leaving plenty of time for the asymmetries within the droplet to accumulate. When the bubbles of vapor initially form at the oil-water interface, they are much more insulating than the liquid of the droplet, leading to significant thermal disturbances in the oil film. These disturbances cause the droplet to vibrate, reducing friction and increasing the vaporization rate.

    It took extreme high-speed photography to reveal the details of this rapid effect, Leon says, using a 100,000 frames per second video camera. “You can actually see the fluctuations on the surface,” Leon says.

    Initially, Varanasi says, “we were stumped at multiple levels as to what was going on, because the effect was so unexpected. … It’s a fairly complex answer to what may look seemingly simple, but it really creates this fast propulsion.”

    In practice, the effect means that in certain situations, a simple heating of a surface, by the right amount and with the right kind of oily coating, could cause corrosive scaling drops to be cleared from a surface. Further down the line, once the researchers have more control over directionality, the system could potentially substitute for some high-tech pumps in microfluidic devices to propel droplets through the right tubes at the right time. This might be especially useful in microgravity situations, where ordinary pumps don’t function as usual.

    It may also be possible to attach a payload to the droplets, creating a kind of microscale robotic delivery system, Varanasi says. And while their tests focused on water droplets, potentially it could apply to many different kinds of liquids and sublimating solids, he says.

    The work was supported by the National Science Foundation.

  • Using aluminum and water to make clean hydrogen fuel — when and where it’s needed

    As the world works to move away from fossil fuels, many researchers are investigating whether clean hydrogen fuel can play an expanded role in sectors from transportation and industry to buildings and power generation. It could be used in fuel cell vehicles, heat-producing boilers, electricity-generating gas turbines, systems for storing renewable energy, and more.

    But while using hydrogen doesn’t generate carbon emissions, making it typically does. Today, almost all hydrogen is produced using fossil fuel-based processes that together generate more than 2 percent of all global greenhouse gas emissions. In addition, hydrogen is often produced in one location and consumed in another, which means its use also presents logistical challenges.

    A promising reaction

    Another option for producing hydrogen comes from a perhaps surprising source: reacting aluminum with water. Aluminum metal will readily react with water at room temperature to form aluminum hydroxide and hydrogen. That reaction doesn’t typically take place because a layer of aluminum oxide naturally coats the raw metal, preventing it from coming directly into contact with water.

    Using the aluminum-water reaction to generate hydrogen doesn’t produce any greenhouse gas emissions, and it promises to solve the transportation problem for any location with available water. Simply move the aluminum and then react it with water on-site. “Fundamentally, the aluminum becomes a mechanism for storing hydrogen — and a very effective one,” says Douglas P. Hart, professor of mechanical engineering at MIT. “Using aluminum as our source, we can ‘store’ hydrogen at a density that’s 10 times greater than if we just store it as a compressed gas.”
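    For context, a quick stoichiometric sketch of the reaction (2 Al + 6 H2O → 2 Al(OH)3 + 3 H2) is below; the complete-reaction assumption is idealized, since real yields depend on activation and alloy content, as discussed later in the article.

    ```python
    # Stoichiometry of the aluminum-water reaction: 2 Al + 6 H2O -> 2 Al(OH)3 + 3 H2.
    # Quick estimate of how much hydrogen a kilogram of aluminum can release,
    # assuming the idealized case of complete reaction.

    M_AL = 26.98   # g/mol
    M_H2 = 2.016   # g/mol

    def h2_mass_per_kg_al(kg_al=1.0):
        """Kilograms of H2 released per kg of aluminum, assuming complete reaction."""
        mol_al = kg_al * 1000.0 / M_AL
        mol_h2 = mol_al * 3.0 / 2.0          # 3 mol H2 per 2 mol Al
        return mol_h2 * M_H2 / 1000.0

    print(f"{h2_mass_per_kg_al():.3f} kg H2 per kg Al")  # roughly 0.11 kg
    ```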

    Two problems have kept aluminum from being employed as a safe, economical source for hydrogen generation. The first problem is ensuring that the aluminum surface is clean and available to react with water. To that end, a practical system must include a means of first modifying the oxide layer and then keeping it from re-forming as the reaction proceeds.

    The second problem is that pure aluminum is energy-intensive to mine and produce, so any practical approach needs to use scrap aluminum from various sources. But scrap aluminum is not an easy starting material. It typically occurs in an alloyed form, meaning that it contains other elements that are added to change the properties or characteristics of the aluminum for different uses. For example, adding magnesium increases strength and corrosion-resistance, adding silicon lowers the melting point, and adding a little of both makes an alloy that’s moderately strong and corrosion-resistant.

    Despite considerable research on aluminum as a source of hydrogen, two key questions remain: What’s the best way to prevent the adherence of an oxide layer on the aluminum surface, and how do alloying elements in a piece of scrap aluminum affect the total amount of hydrogen generated and the rate at which it is generated?

    “If we’re going to use scrap aluminum for hydrogen generation in a practical application, we need to be able to better predict what hydrogen generation characteristics we’re going to observe from the aluminum-water reaction,” says Laureen Meroueh PhD ’20, who earned her doctorate in mechanical engineering.

    Since the fundamental steps in the reaction aren’t well understood, it’s been hard to predict the rate and volume at which hydrogen forms from scrap aluminum, which can contain varying types and concentrations of alloying elements. So Hart, Meroueh, and Thomas W. Eagar, a professor of materials engineering and engineering management in the MIT Department of Materials Science and Engineering, decided to examine — in a systematic fashion — the impacts of those alloying elements on the aluminum-water reaction and on a promising technique for preventing the formation of the interfering oxide layer.

    To prepare, they had experts at Novelis Inc. fabricate samples of pure aluminum and of specific aluminum alloys made of commercially pure aluminum combined with either 0.6 percent silicon (by weight), 1 percent magnesium, or both — compositions that are typical of scrap aluminum from a variety of sources. Using those samples, the MIT researchers performed a series of tests to explore different aspects of the aluminum-water reaction.

    Pre-treating the aluminum

    The first step was to demonstrate an effective means of penetrating the oxide layer that forms on aluminum in the air. Solid aluminum is made up of tiny grains that are packed together with occasional boundaries where they don’t line up perfectly. To maximize hydrogen production, researchers would need to prevent the formation of the oxide layer on all those interior grain surfaces.

    Research groups have already tried various ways of keeping the aluminum grains “activated” for reaction with water. Some have crushed scrap samples into particles so tiny that the oxide layer doesn’t adhere. But aluminum powders are dangerous, as they can react with humidity and explode. Another approach calls for grinding up scrap samples and adding liquid metals to prevent oxide deposition. But grinding is a costly and energy-intensive process.

    To Hart, Meroueh, and Eagar, the most promising approach — first introduced by Jonathan Slocum ScD ’18 while he was working in Hart’s research group — involved pre-treating the solid aluminum by painting liquid metals on top and allowing them to permeate through the grain boundaries.

    To determine the effectiveness of that approach, the researchers needed to confirm that the liquid metals would reach the internal grain surfaces, with and without alloying elements present. And they had to establish how long it would take for the liquid metal to coat all of the grains in pure aluminum and its alloys.

    They started by combining two metals — gallium and indium — in specific proportions to create a “eutectic” mixture; that is, a mixture that would remain in liquid form at room temperature. They coated their samples with the eutectic and allowed it to penetrate for time periods ranging from 48 to 96 hours. They then exposed the samples to water and monitored the hydrogen yield (the amount formed) and flow rate for 250 minutes. After 48 hours, they also took high-magnification scanning electron microscope (SEM) images so they could observe the boundaries between adjacent aluminum grains.

    Based on the hydrogen yield measurements and the SEM images, the MIT team concluded that the gallium-indium eutectic does naturally permeate and reach the interior grain surfaces. However, the rate and extent of penetration vary with the alloy. The permeation rate was the same in silicon-doped aluminum samples as in pure aluminum samples but slower in magnesium-doped samples.

    Perhaps most interesting were the results from samples doped with both silicon and magnesium — an aluminum alloy often found in recycling streams. Silicon and magnesium chemically bond to form magnesium silicide, which occurs as solid deposits on the internal grain surfaces. Meroueh hypothesized that when both silicon and magnesium are present in scrap aluminum, those deposits can act as barriers that impede the flow of the gallium-indium eutectic.

    The experiments and images confirmed her hypothesis: The solid deposits did act as barriers, and images of samples pre-treated for 48 hours showed that permeation wasn’t complete. Clearly, a lengthy pre-treatment period would be critical for maximizing the hydrogen yield from scraps of aluminum containing both silicon and magnesium.

    Meroueh cites several benefits to the process they used. “You don’t have to apply any energy for the gallium-indium eutectic to work its magic on aluminum and get rid of that oxide layer,” she says. “Once you’ve activated your aluminum, you can drop it in water, and it’ll generate hydrogen — no energy input required.” Even better, the eutectic doesn’t chemically react with the aluminum. “It just physically moves around in between the grains,” she says. “At the end of the process, I could recover all of the gallium and indium I put in and use it again” — a valuable feature as gallium and (especially) indium are costly and in relatively short supply.

    Impacts of alloying elements on hydrogen generation

    The researchers next investigated how the presence of alloying elements affects hydrogen generation. They tested samples that had been treated with the eutectic for 96 hours; by then, the hydrogen yield and flow rates had leveled off in all the samples.

    The presence of 0.6 percent silicon increased the hydrogen yield for a given weight of aluminum by 20 percent compared to pure aluminum — even though the silicon-containing sample had less aluminum than the pure aluminum sample. In contrast, the presence of 1 percent magnesium produced far less hydrogen, while adding both silicon and magnesium pushed the yield up, but not to the level of pure aluminum.

    The presence of silicon also greatly accelerated the reaction rate, producing a far higher peak in the flow rate but cutting short the duration of hydrogen output. The presence of magnesium produced a lower flow rate but allowed the hydrogen output to remain fairly steady over time. And once again, aluminum with both alloying elements produced a flow rate between that of magnesium-doped and pure aluminum.

    Those results provide practical guidance on how to adjust the hydrogen output to match the operating needs of a hydrogen-consuming device. If the starting material is commercially pure aluminum, adding small amounts of carefully selected alloying elements can tailor the hydrogen yield and flow rate. If the starting material is scrap aluminum, careful choice of the source can be key. For high, brief bursts of hydrogen, pieces of silicon-containing aluminum from an auto junkyard could work well. For lower but longer flows, magnesium-containing scraps from the frame of a demolished building might be better. For results somewhere in between, aluminum containing both silicon and magnesium should work well; such material is abundantly available from scrapped cars and motorcycles, yachts, bicycle frames, and even smartphone cases.

    It should also be possible to combine scraps of different aluminum alloys to tune the outcome, notes Meroueh. “If I have a sample of activated aluminum that contains just silicon and another sample that contains just magnesium, I can put them both into a container of water and let them react,” she says. “So I get the fast ramp-up in hydrogen production from the silicon and then the magnesium takes over and has that steady output.”
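    A toy illustration of that blending idea follows; the flow-rate curves are made-up shapes rather than measured data, and the combined output is simply modeled as the sum of the two samples’ flows.

    ```python
    # Toy model of blending activated aluminum samples: total hydrogen flow is the sum
    # of each sample's flow-rate profile. The curve shapes are invented for illustration.
    import numpy as np

    t = np.linspace(0, 250, 500)                 # minutes

    # Hypothetical profiles: silicon-doped = high, short burst; magnesium-doped = low, steady.
    flow_si = 10.0 * np.exp(-t / 20.0)           # fast decay after a large peak
    flow_mg = 2.0 * (1 - np.exp(-t / 30.0))      # slow rise to a steady plateau

    flow_combined = flow_si + flow_mg            # both samples reacting in one container

    print(f"peak combined flow: {flow_combined.max():.1f} (arbitrary units)")
    print(f"flow at 250 min:    {flow_combined[-1]:.1f} (arbitrary units)")
    ```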

    Another opportunity for tuning: Reducing grain size

    Another practical way to affect hydrogen production could be to reduce the size of the aluminum grains — a change that should increase the total surface area available for reactions to occur.

    To investigate that approach, the researchers requested specially customized samples from their supplier. Using standard industrial procedures, the Novelis experts first fed each sample through two rollers, squeezing it from the top and bottom so that the internal grains were flattened. They then heated each sample until the long, flat grains had reorganized and shrunk to a targeted size.

    In a series of carefully designed experiments, the MIT team found that reducing the grain size increased the efficiency and decreased the duration of the reaction to varying degrees in the different samples. Again, the presence of particular alloying elements had a major effect on the outcome.

    Needed: A revised theory that explains observations

    Throughout their experiments, the researchers encountered some unexpected results. For example, standard corrosion theory predicts that pure aluminum will generate more hydrogen than silicon-doped aluminum will — the opposite of what they observed in their experiments.

    To shed light on the underlying chemical reactions, Hart, Meroueh, and Eagar investigated hydrogen “flux,” that is, the volume of hydrogen generated over time on each square centimeter of aluminum surface, including the interior grains. They examined three grain sizes for each of their four compositions and collected thousands of data points measuring hydrogen flux.

    Their results show that reducing grain size has significant effects. It increases the peak hydrogen flux from silicon-doped aluminum by as much as 100 times and from the other three compositions by 10 times. With both pure aluminum and silicon-containing aluminum, reducing grain size also decreases the delay before the peak flux and increases the rate of decline afterward. With magnesium-containing aluminum, reducing the grain size brings about an increase in peak hydrogen flux and results in a slightly faster decline in the rate of hydrogen output. With both silicon and magnesium present, the hydrogen flux over time resembles that of magnesium-containing aluminum when the grain size is not manipulated. When the grain size is reduced, the hydrogen output characteristics begin to resemble behavior observed in silicon-containing aluminum. That outcome was unexpected because when silicon and magnesium are both present, they react to form magnesium silicide, resulting in a new type of aluminum alloy with its own properties.
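    In outline, the flux calculation is simple to reproduce: differentiate the cumulative hydrogen volume with respect to time and divide by the reacting surface area. The sketch below does this with placeholder arrays; the measurement values and the surface-area figure are hypothetical, standing in for the thousands of data points collected in the study.

    ```python
    import numpy as np

    # Placeholder measurements (NOT the study's data): elapsed time in minutes
    # and cumulative hydrogen volume in milliliters for a single sample.
    time_min  = np.array([0, 5, 10, 20, 40, 60, 90, 120], dtype=float)
    cum_h2_ml = np.array([0, 12, 30, 70, 130, 170, 195, 205], dtype=float)

    # Total reacting surface area in cm^2, including interior grain boundaries
    # opened up by the eutectic treatment (hypothetical value).
    surface_area_cm2 = 250.0

    flow_rate = np.gradient(cum_h2_ml, time_min)   # mL per minute
    flux = flow_rate / surface_area_cm2            # mL per minute per cm^2

    peak = int(np.argmax(flux))
    print(f"Peak flux ~{flux[peak]:.4f} mL/(min*cm^2) at t = {time_min[peak]:.0f} min")
    ```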

    The researchers stress the benefits of developing a better fundamental understanding of the underlying chemical reactions involved. In addition to guiding the design of practical systems, it might help them find a replacement for the expensive indium in their pre-treatment mixture. Other work has shown that gallium will naturally permeate through the grain boundaries of aluminum. “At this point, we know that the indium in our eutectic is important, but we don’t really understand what it does, so we don’t know how to replace it,” says Hart.

    But already Hart, Meroueh, and Eagar have demonstrated two practical ways of tuning the hydrogen reaction rate: by adding certain elements to the aluminum and by manipulating the size of the interior aluminum grains. In combination, those approaches can deliver significant results. “If you go from magnesium-containing aluminum with the largest grain size to silicon-containing aluminum with the smallest grain size, you get a hydrogen reaction rate that differs by two orders of magnitude,” says Meroueh. “That’s huge if you’re trying to design a real system that would use this reaction.”

    This research was supported through the MIT Energy Initiative by ExxonMobil-MIT Energy Fellowships awarded to Laureen Meroueh PhD ’20 from 2018 to 2020.

    This article appears in the Spring 2021 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • in

    Global warming begets more warming, new paleoclimate study finds

    It is increasingly clear that the prolonged drought conditions, record-breaking heat, sustained wildfires, and frequent, more extreme storms experienced in recent years are a direct result of rising global temperatures brought on by humans’ addition of carbon dioxide to the atmosphere. And a new MIT study on extreme climate events in Earth’s ancient history suggests that today’s planet may become more volatile as it continues to warm.

    The study, appearing today in Science Advances, examines the paleoclimate record of the last 66 million years, during the Cenozoic era, which began shortly after the extinction of the dinosaurs. The scientists found that during this period, fluctuations in the Earth’s climate exhibited a surprising “warming bias.” In other words, there were far more warming events — periods of prolonged global warming, lasting thousands to tens of thousands of years — than cooling events. What’s more, warming events tended to be more extreme, with greater shifts in temperature, than cooling events.

    The researchers say a possible explanation for this warming bias may lie in a “multiplier effect,” whereby a modest degree of warming — for instance from volcanoes releasing carbon dioxide into the atmosphere — naturally speeds up certain biological and chemical processes that enhance these fluctuations, leading, on average, to still more warming.

    Interestingly, the team observed that this warming bias disappeared about 5 million years ago, around the time when ice sheets started forming in the Northern Hemisphere. It’s unclear what effect the ice has had on the Earth’s response to climate shifts. But as today’s Arctic ice recedes, the new study suggests that a multiplier effect may kick back in, and the result may be a further amplification of human-induced global warming.

    “The Northern Hemisphere’s ice sheets are shrinking, and could potentially disappear as a long-term consequence of human actions,” says the study’s lead author Constantin Arnscheidt, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “Our research suggests that this may make the Earth’s climate fundamentally more susceptible to extreme, long-term global warming events such as those seen in the geologic past.”

    Arnscheidt’s study co-author is Daniel Rothman, professor of geophysics at MIT and co-founder and co-director of MIT’s Lorenz Center.

    A volatile push

    For their analysis, the team consulted large databases of sediments containing deep-sea benthic foraminifera — single-celled organisms that have been around for hundreds of millions of years and whose hard shells are preserved in sediments. The composition of these shells is affected by the ocean temperature as the organisms grow; the shells are therefore considered a reliable proxy for the Earth’s ancient temperatures.

    For decades, scientists have analyzed the composition of these shells, collected from all over the world and dated to various time periods, to track how the Earth’s temperature has fluctuated over millions of years. 

    “When using these data to study extreme climate events, most studies have focused on individual large spikes in temperature, typically of a few degrees Celsius warming,” Arnscheidt says. “Instead, we tried to look at the overall statistics and consider all the fluctuations involved, rather than picking out the big ones.”

    The team first carried out a statistical analysis of the data and observed that, over the last 66 million years, the distribution of global temperature fluctuations didn’t resemble a standard bell curve, with symmetric tails representing an equal probability of extreme warm and extreme cool fluctuations. Instead, the curve was noticeably lopsided, skewed toward more warm than cool events. The curve also exhibited a markedly longer warm tail, representing warm events that were more extreme, or of higher temperature, than the most extreme cold events.
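    That lopsidedness can be quantified with standard summary statistics: skewness for the asymmetry of the distribution, and a comparison of extreme quantiles for the relative lengths of the warm and cool tails. The sketch below applies those measures to a synthetic series of fluctuations; the random data are placeholders standing in for the foraminifera-derived record.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-in for detrended temperature fluctuations (degrees C).
    # A mean-centered lognormal sample is right-skewed, mimicking a "warming
    # bias"; it is illustrative only, not the paleoclimate record.
    fluct = rng.lognormal(mean=0.0, sigma=0.6, size=20_000)
    fluct -= fluct.mean()

    # Skewness: zero for a symmetric bell curve, positive for a long warm tail.
    skewness = np.mean(fluct**3) / np.std(fluct)**3

    # Tail comparison: how far the warmest 1 percent extends versus the coolest 1 percent.
    warm_tail = np.quantile(fluct, 0.99)
    cool_tail = -np.quantile(fluct, 0.01)

    print(f"skewness     : {skewness:.2f}")
    print(f"warm 1% tail : {warm_tail:.2f}")
    print(f"cool 1% tail : {cool_tail:.2f}")
    ```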

    “This indicates there’s some sort of amplification relative to what you would otherwise have expected,” Arnscheidt says. “Everything’s pointing to something fundamental that’s causing this push, or bias toward warming events.”

    “It’s fair to say that the Earth system becomes more volatile, in a warming sense,” Rothman adds.

    A warming multiplier

    The team wondered whether this warming bias might have been a result of “multiplicative noise” in the climate-carbon cycle. Scientists have long understood that higher temperatures, up to a point, tend to speed up biological and chemical processes. Because the carbon cycle, which is a key driver of long-term climate fluctuations, is itself composed of such processes, increases in temperature may lead to larger fluctuations, biasing the system towards extreme warming events.

    In mathematics, there exists a set of equations that describes such general amplifying, or multiplicative, effects. The researchers applied this multiplicative theory to their analysis to see whether the equations could predict the asymmetrical distribution, including the degree of its skew and the length of its tails.
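    To make the idea concrete, here is a minimal sketch of a generic multiplicative-noise process, integrated with the Euler-Maruyama method, in which the noise amplitude grows with temperature. The equation and parameter values are illustrative assumptions for this article, not the equations used in the paper; with the multiplicative term switched off the simulated distribution is symmetric, and with it on the distribution skews warm.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Generic relaxation process with state-dependent (multiplicative) noise:
    #   dT = -(T / tau) dt + sigma * (1 + beta * T) dW
    # beta = 0 gives purely additive noise and a symmetric distribution;
    # beta > 0 amplifies warm excursions and skews the distribution warm.
    tau, sigma, beta = 10.0, 0.3, 0.8     # illustrative parameters only
    dt, n_steps = 0.1, 200_000

    T = np.zeros(n_steps)
    for i in range(1, n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))               # Wiener increment
        drift = -(T[i - 1] / tau) * dt                  # relaxation toward zero
        noise = sigma * (1.0 + beta * T[i - 1]) * dW    # state-dependent noise
        T[i] = T[i - 1] + drift + noise

    skewness = np.mean((T - T.mean())**3) / T.std()**3
    print(f"Skewness with beta = {beta}: {skewness:.2f}  (positive = warming bias)")
    ```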

    In the end, they found that the data, and the observed bias toward warming, could be explained by the multiplicative theory. In other words, it’s very likely that, over the last 66 million years, periods of modest warming were on average further enhanced by multiplier effects, such as the response of biological and chemical processes that further warmed the planet.

    As part of the study, the researchers also looked at the correlation between past warming events and changes in Earth’s orbit. Over hundreds of thousands of years, Earth’s orbit around the sun regularly becomes more or less elliptical. But scientists have wondered why many past warming events appeared to coincide with these changes, and why these events feature outsized warming compared with what the change in Earth’s orbit could have wrought on its own.

    So, Arnscheidt and Rothman incorporated the Earth’s orbital changes into the multiplicative model and their analysis of Earth’s temperature changes, and found that multiplier effects could predictably amplify, on average, the modest temperature rises due to changes in Earth’s orbit.

    “Climate warms and cools in synchrony with orbital changes, but the orbital cycles themselves would predict only modest changes in climate,” Rothman says. “But if we consider a multiplicative model, then modest warming, paired with this multiplier effect, can result in extreme events that tend to occur at the same time as these orbital changes.”

    “Humans are forcing the system in a new way,” Arnscheidt adds. “And this study is showing that, when we increase temperature, we’re likely going to interact with these natural, amplifying effects.”

    This research was supported, in part, by MIT’s School of Science.

  • in

    Electrifying cars and light trucks to meet Paris climate goals

    On Aug. 5, the White House announced that it seeks to ensure that 50 percent of all new passenger vehicles sold in the United States by 2030 are powered by electricity. The purpose of this target is to enable the U.S. to remain competitive with China in the growing electric vehicle (EV) market and meet its international climate commitments. Setting ambitious EV sales targets and transitioning to zero-carbon power sources in the United States and other nations could lead to significant reductions in carbon dioxide and other greenhouse gas emissions in the transportation sector and move the world closer to achieving the Paris Agreement’s long-term goal of keeping global warming well below 2 degrees Celsius relative to preindustrial levels.

    At this time, electrification of the transportation sector is occurring primarily in private light-duty vehicles (LDVs). In 2020, the global EV fleet exceeded 10 million, but that’s a tiny fraction of the cars and light trucks on the road. How much of the LDV fleet will need to go electric to keep the Paris climate goal in play? 

    To help answer that question, researchers at the MIT Joint Program on the Science and Policy of Global Change and MIT Energy Initiative have assessed the potential impacts of global efforts to reduce carbon dioxide emissions on the evolution of LDV fleets over the next three decades.

    Using an enhanced version of the multi-region, multi-sector MIT Economic Projection and Policy Analysis (EPPA) model that includes a representation of the household transportation sector, they projected changes for the 2020-50 period in LDV fleet composition, carbon dioxide emissions, and related impacts for 18 different regions. Projections were generated under four increasingly ambitious climate mitigation scenarios: a “Reference” scenario based on current market trends and fuel efficiency policies, a “Paris Forever” scenario in which current Paris Agreement commitments (Nationally Determined Contributions, or NDCs) are maintained but not strengthened after 2030, a “Paris to 2 C” scenario in which decarbonization actions are enhanced to be consistent with capping global warming at 2 C, and an “Accelerated Actions” scenario that caps global warming at 1.5 C through much more aggressive emissions targets than the current NDCs.

    Based on projections spanning the first three scenarios, the researchers found that the global EV fleet will likely grow to about 95-105 million EVs by 2030, and 585-823 million EVs by 2050. In the Accelerated Actions scenario, global EV stock reaches more than 200 million vehicles in 2030, and more than 1 billion in 2050, accounting for two-thirds of the global LDV fleet. The research team also determined that EV uptake will likely grow but vary across regions over the 30-year study time frame, with China, the United States, and Europe remaining the largest markets. Finally, the researchers found that while EVs play a role in reducing oil use, a more substantial reduction in oil consumption comes from economy-wide carbon pricing. The results appear in a study in the journal Economics of Energy & Environmental Policy.
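    The headline figures can be laid out and sanity-checked with a few lines of arithmetic. In the sketch below, the scenario labels and EV-stock ranges come from the numbers reported above, while the implied size of the total 2050 LDV fleet is a back-of-the-envelope inference from the “two-thirds” share.

    ```python
    # Projected global EV stock in millions of vehicles, as summarized above.
    # Values are (low, high); high is None where only a lower bound is reported.
    ev_stock = {
        "Reference / Paris Forever / Paris to 2 C": {"2030": (95, 105), "2050": (585, 823)},
        "Accelerated Actions":                      {"2030": (200, None), "2050": (1000, None)},
    }

    for scenario, by_year in ev_stock.items():
        for year, (low, high) in by_year.items():
            label = f"{low}-{high}" if high is not None else f">{low}"
            print(f"{scenario:<42} {year}: {label} million EVs")

    # If >1 billion EVs is about two-thirds of the 2050 LDV fleet in the
    # Accelerated Actions scenario, the total fleet is roughly 1.5 billion vehicles.
    implied_fleet_2050_millions = 1000 / (2 / 3)
    print(f"Implied 2050 global LDV fleet: ~{implied_fleet_2050_millions:.0f} million vehicles")
    ```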

    “Our study shows that EVs can contribute significantly to reducing global carbon emissions at a manageable cost,” says MIT Joint Program Deputy Director and MIT Energy Initiative Senior Research Scientist Sergey Paltsev, the lead author. “We hope that our findings will help decision-makers to design efficient pathways to reduce emissions.”  

    To boost the EV share of the global LDV fleet, the study’s co-authors recommend more ambitious policies to mitigate climate change and decarbonize the electric grid. They also envision an “integrated system approach” to transportation that emphasizes improving the efficiency of internal combustion engine vehicles, a long-term shift to low- and net-zero carbon fuels, and systemic efficiency improvements through digitalization, smart pricing, and multi-modal integration. While the study focuses on EV deployment, the authors also stress the need for investment in all possible decarbonization options related to transportation, including enhancing public transportation, avoiding urban sprawl through strategic land-use planning, and reducing the use of private motorized transport by switching to walking, biking, and mass transit.

    This research is an extension of the authors’ contribution to the MIT Mobility of the Future study.