More stories

  • 3 Questions: Daniel Cohn on the benefits of high-efficiency, flexible-fuel engines for heavy-duty trucking

    The California Air Resources Board has adopted a regulation that requires truck and engine manufacturers to reduce the nitrogen oxide (NOx) emissions from new heavy-duty trucks by 90 percent starting in 2027. NOx from heavy-duty trucks is one of the main sources of air pollution, creating smog and threatening respiratory health. This regulation requires the largest air pollution cuts in California in more than a decade. How can manufacturers achieve this aggressive goal efficiently and affordably?

    Daniel Cohn, a research scientist at the MIT Energy Initiative, and Leslie Bromberg, a principal research scientist at the MIT Plasma Science and Fusion Center, have been working on a high-efficiency, gasoline-ethanol engine that is cleaner and more cost-effective than existing diesel engine technologies. Here, Cohn explains the flexible-fuel engine approach and why it may be the most realistic solution — in the near term — to help California meet its stringent vehicle emission reduction goals. The research was sponsored by the Arthur Samberg MIT Energy Innovation Fund.

    Q. How does your high-efficiency, flexible-fuel gasoline engine technology work?

    A. Our goal is to provide an affordable solution for heavy-duty vehicle (HDV) engines: nitrogen oxide (NOx) emissions low enough to meet California’s regulations, combined with a quick start on cutting fuel consumption in a substantial fraction of the HDV fleet.

    Presently, large trucks and other HDVs generally use diesel engines, mainly because of their high efficiency, which reduces fuel cost — a key factor for commercial trucks (especially long-haul trucks) because of the large number of miles that are driven. However, the NOx emissions from these diesel-powered vehicles are around 10 times greater than those from spark-ignition engines powered by gasoline or ethanol.

    Spark-ignition gasoline engines are primarily used in cars and light trucks (light-duty vehicles), which employ a three-way catalyst exhaust treatment system (generally referred to as a catalytic converter) that reduces vehicle NOx emissions by at least 98 percent and at a modest cost. The use of this highly effective exhaust treatment system is enabled by the capability of spark-ignition engines to be operated at a stoichiometric air/fuel ratio (where the amount of air matches what is needed for complete combustion of the fuel).
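
    To make the stoichiometric condition concrete, here is a back-of-the-envelope calculation (an illustration, not from the article) using octane as a stand-in for gasoline:

    ```latex
    % Complete combustion of octane in air (about 3.76 mol N2 per mol O2):
    \mathrm{C_8H_{18}} + 12.5\,(\mathrm{O_2} + 3.76\,\mathrm{N_2})
      \rightarrow 8\,\mathrm{CO_2} + 9\,\mathrm{H_2O} + 47\,\mathrm{N_2}
    % Stoichiometric air/fuel ratio by mass:
    \mathrm{AFR} = \frac{12.5\,(32 + 3.76 \times 28)}{114} \approx 15
    ```

    Pump gasoline is usually quoted at roughly 14.7:1; a three-way catalyst works only when the engine holds the mixture very close to this ratio, which is why it pairs naturally with spark-ignition engines.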

    Diesel engines do not operate with stoichiometric air/fuel ratios, making it much more difficult to reduce NOx emissions. Their state-of-the-art exhaust treatment system is much more complex and expensive than catalytic converters, and even with it, vehicles produce NOx emissions around 10 times higher than spark-ignition engine vehicles. Consequently, it is very challenging for diesel engines to further reduce their NOx emissions to meet the new California regulations.

    Our approach uses spark-ignition engines that can be powered by gasoline, ethanol, or mixtures of gasoline and ethanol as a substitute for diesel engines in HDVs. Gasoline has the attractive features of being widely available and costing about the same as, or less than, diesel fuel. In addition, presently available ethanol in the U.S. produces up to 40 percent less greenhouse gas (GHG) emissions than diesel fuel or gasoline and has a widely available distribution system.

    To make gasoline- and/or ethanol-powered spark-ignition engine HDVs attractive for widespread HDV applications, we developed ways to make spark-ignition engines more efficient, so their fuel costs are more palatable to owners of heavy-duty trucks. Our approach provides diesel-like high efficiency and high power in gasoline-powered engines by using various methods to prevent engine knock (unwanted self-ignition that can damage the engine) in spark-ignition gasoline engines. This enables greater levels of turbocharging and use of higher engine compression ratios. These features provide high efficiency, comparable to that provided by diesel engines. Plus, when the engine is powered by ethanol, the required knock resistance is provided by the intrinsic high knock resistance of the fuel itself. 

    Q. What are the major challenges to implementing your technology in California?

    A. California has always been the pioneer in air pollutant control, with states such as Washington, Oregon, and New York often following suit. As the most populous state, California has a lot of sway — it’s a trendsetter. What happens in California has an impact on the rest of the United States.

    The main challenge to implementation of our technology is the argument that a better internal combustion engine technology is not needed because battery-powered HDVs — particularly long-haul trucks — can play the required role in reducing NOx and GHG emissions by 2035. We think that substantial market penetration of battery electric vehicles (BEVs) in this vehicle sector will take considerably longer. In contrast to light-duty vehicles, there has been very little penetration of battery power into the HDV fleet, especially in long-haul trucks, which are the largest users of diesel fuel. One reason is that long-haul trucks using battery power face reduced cargo capacity due to substantial battery weight. Another is that BEVs take substantially longer to recharge than today’s HDVs take to refuel.

    Hydrogen-powered trucks using fuel cells have also been proposed as an alternative to BEV trucks, which might limit interest in adopting improved internal combustion engines. However, hydrogen-powered trucks face the formidable challenges of producing zero-GHG hydrogen at affordable cost, along with the costs of storing and transporting hydrogen. At present, the high-purity hydrogen needed for fuel cells is generally very expensive.

    Q. How does your idea compare overall to battery-powered and hydrogen-powered HDVs? And how will you persuade people that it is an attractive pathway to follow?

    A. Our design uses existing propulsion systems and can operate on existing liquid fuels, and for these reasons, in the near term, it will be economically attractive to the operators of long-haul trucks. In fact, it can even be a lower-cost option than diesel power because of the significantly less-expensive exhaust treatment and the smaller engine size needed for the same power and torque. This economic attractiveness could enable the large-scale market penetration that is needed to have a substantial impact on reducing air pollution. By contrast, we think it could take at least 20 years longer for BEVs or hydrogen-powered vehicles to reach the same level of market penetration.

    Our approach also uses existing corn-based ethanol, which can provide a greater near-term GHG reduction benefit than battery- or hydrogen-powered long-haul trucks. While the GHG reduction from using existing ethanol would initially be in the 20 to 40 percent range, near-term market penetration could be much greater than for BEV or hydrogen-powered vehicle technology, so the overall impact in reducing GHGs could be considerably greater.

    Moreover, we see a migration path beyond 2030 in which further reductions in GHG emissions from corn ethanol become possible through carbon capture and sequestration of the carbon dioxide (CO2) produced during ethanol production. In this case, overall CO2 reductions could potentially reach 80 percent or more. Technologies for producing ethanol (and methanol, another alcohol fuel) from waste at attractive costs are emerging and can provide fuel with zero or negative GHG emissions. One pathway to a negative GHG impact is finding alternatives to landfilling for waste disposal, since landfills emit methane, a potent greenhouse gas. A negative GHG impact could also be obtained by converting biomass waste into clean fuel, since the biomass waste can be carbon-neutral and the CO2 produced during fuel production can be captured and sequestered.

    In addition, our flex-fuel engine technology could be used synergistically as a range extender in plug-in hybrid HDVs, which use a limited battery capacity and thereby avoid the cargo-capacity and refueling disadvantages of long-haul trucks powered by batteries alone.

    With the growing threats from air pollution and global warming, our HDV solution is an increasingly important option for near-term reduction of air pollution and offers a faster start in reducing heavy-duty fleet GHG emissions. It also provides an attractive migration path for longer-term, larger GHG reductions from the HDV sector.

  • Researchers design sensors to rapidly detect plant hormones

    Researchers from the Disruptive and Sustainable Technologies for Agricultural Precision (DiSTAP) interdisciplinary research group of the Singapore-MIT Alliance for Research and Technology (SMART), MIT’s research enterprise in Singapore, and their local collaborators from Temasek Life Sciences Laboratory (TLL) and Nanyang Technological University (NTU), have developed the first-ever nanosensor to enable rapid testing of synthetic auxin plant hormones. The novel nanosensors are safer and less tedious than existing techniques for testing plants’ response to compounds such as herbicide, and can be transformative in improving agricultural production and our understanding of plant growth.

    The scientists designed sensors for two plant hormones — 1-naphthalene acetic acid (NAA) and 2,4-dichlorophenoxyacetic acid (2,4-D) — which are used extensively in the farming industry for regulating plant growth and as herbicides, respectively. Current methods to detect NAA and 2,4-D cause damage to plants, and are unable to provide real-time in vivo monitoring and information.

    Based on the concept of corona phase molecular recognition (CoPhMoRe) pioneered by the Strano Lab at SMART DiSTAP and MIT, the new sensors are able to detect the presence of NAA and 2,4-D in living plants quickly and in real time, without causing any harm. The team has successfully tested both sensors on a number of everyday crops, including pak choi, spinach, and rice, across various planting mediums such as soil, hydroponics, and plant tissue culture.

    Described in a paper titled “Nanosensor Detection of Synthetic Auxins In Planta using Corona Phase Molecular Recognition,” published in the journal ACS Sensors, the research can facilitate more efficient use of synthetic auxins in agriculture and holds tremendous potential to advance the study of plant biology.

    “Our CoPhMoRe technique has previously been used to detect compounds such as hydrogen peroxide and heavy-metal pollutants like arsenic — but this is the first successful case of CoPhMoRe sensors developed for detecting plant phytohormones that regulate plant growth and physiology, such as sprays to prevent premature flowering and dropping of fruits,” says DiSTAP co-lead principal investigator Michael Strano, the Carbon P. Dubbs Professor of Chemical Engineering at MIT. “This technology can replace current state-of-the-art sensing methods which are laborious, destructive, and unsafe.”

    Of the two sensors developed by the research team, the 2,4-D nanosensor also showed the ability to detect herbicide susceptibility, enabling farmers and agricultural scientists to quickly find out how vulnerable or resistant different plants are to herbicides without the need to monitor crop or weed growth over days. “This could be incredibly beneficial in revealing the mechanism behind how 2,4-D works within plants and why crops develop herbicide resistance,” says DiSTAP and TLL Principal Investigator Rajani Sarojam.

    “Our research can help the industry gain a better understanding of plant growth dynamics and has the potential to completely change how the industry screens for herbicide resistance, eliminating the need to monitor crop or weed growth over days,” says Mervin Chun-Yi Ang, a research scientist at DiSTAP. “It can be applied across a variety of plant species and planting mediums, and could easily be used in commercial setups for rapid herbicide susceptibility testing, such as urban farms.”

    NTU Professor Mary Chan-Park Bee Eng says, “Using nanosensors for in planta detection eliminates the need for extensive extraction and purification processes, which saves time and money. They also use very low-cost electronics, which makes them easily adaptable for commercial setups.”

    The team says their research can lead to future development of real-time nanosensors for other dynamic plant hormones and metabolites in living plants as well.

    The development of the nanosensor, optical detection system, and image processing algorithms for this study was done by SMART, NTU, and MIT, while TLL validated the nanosensors and provided knowledge of plant biology and plant signaling mechanisms. The research is carried out by SMART and supported by the National Research Foundation (NRF) Singapore under its Campus for Research Excellence And Technological Enterprise (CREATE) program.

    DiSTAP is one of the five interdisciplinary research groups in SMART. The DiSTAP program addresses deep problems in food production in Singapore and the world by developing a suite of impactful and novel analytical, genetic, and biosynthetic technologies. The goal is to fundamentally change how plant biosynthetic pathways are discovered, monitored, engineered, and ultimately translated to meet the global demand for food and nutrients.

    Scientists from MIT, TLL, NTU, and the National University of Singapore (NUS) are collaboratively developing new tools for the continuous measurement of important plant metabolites and hormones, enabling novel discovery and a deeper understanding and control of plant biosynthetic pathways, especially in green leafy vegetables. They are leveraging these techniques to engineer plants with highly desirable properties for global food security, including high-yield density production, drought and pathogen resistance, and the biosynthesis of high-value commercial products. The group is also developing tools for producing hydrophobic food components in industry-relevant microbes, developing novel microbial and enzymatic technologies to produce volatile organic compounds that can protect and/or promote the growth of leafy vegetables, and applying these technologies to improve urban farming.

    DiSTAP is led by Michael Strano and Singapore co-lead principal investigator Professor Chua Nam Hai.

    SMART was established by MIT, in partnership with the NRF, in 2007. SMART, the first entity in CREATE, serves as an intellectual and innovation hub for research interactions between MIT and Singapore, undertaking cutting-edge research projects in areas of interest to both. SMART currently comprises an Innovation Center and five interdisciplinary research groups: Antimicrobial Resistance (AMR), Critical Analytics for Manufacturing Personalized-Medicine (CAMP), DiSTAP, Future Urban Mobility (FM), and Low Energy Electronic Systems (LEES). SMART is funded by the NRF.

  • MIT-designed project achieves major advance toward fusion energy

    It was a moment three years in the making, based on intensive research and design work: On Sept. 5, for the first time, a large high-temperature superconducting electromagnet was ramped up to a field strength of 20 tesla, the most powerful magnetic field of its kind ever created on Earth. That successful demonstration helps resolve the greatest uncertainty in the quest to build the world’s first fusion power plant that can produce more power than it consumes, according to the project’s leaders at MIT and startup company Commonwealth Fusion Systems (CFS).

    That advance paves the way, they say, for the long-sought creation of practical, inexpensive, carbon-free power plants that could make a major contribution to limiting the effects of global climate change.

    “Fusion in a lot of ways is the ultimate clean energy source,” says Maria Zuber, MIT’s vice president for research and E. A. Griswold Professor of Geophysics. “The amount of power that is available is really game-changing.” The fuel used to create fusion energy comes from water, and “the Earth is full of water — it’s a nearly unlimited resource. We just have to figure out how to utilize it.”

    Developing the new magnet is seen as the greatest technological hurdle to making that happen; its successful operation now opens the door to demonstrating fusion in a lab on Earth, which has been pursued for decades with limited progress. With the magnet technology now successfully demonstrated, the MIT-CFS collaboration is on track to build the world’s first fusion device that can create and confine a plasma that produces more energy than it consumes. That demonstration device, called SPARC, is targeted for completion in 2025.

    “The challenges of making fusion happen are both technical and scientific,” says Dennis Whyte, director of MIT’s Plasma Science and Fusion Center, which is working with CFS to develop SPARC. But once the technology is proven, he says, “it’s an inexhaustible, carbon-free source of energy that you can deploy anywhere and at any time. It’s really a fundamentally new energy source.”

    Whyte, who is the Hitachi America Professor of Engineering, says this week’s demonstration represents a major milestone, addressing the biggest questions remaining about the feasibility of the SPARC design. “It’s really a watershed moment, I believe, in fusion science and technology,” he says.

    The sun in a bottle

    Fusion is the process that powers the sun: the merger of two small atoms to make a larger one, releasing prodigious amounts of energy. But the process requires temperatures far beyond what any solid material could withstand. To capture the sun’s power source here on Earth, what’s needed is a way of capturing and containing something that hot — 100,000,000 degrees or more — by suspending it in a way that prevents it from coming into contact with anything solid.

    That’s done through intense magnetic fields, which form a kind of invisible bottle to contain the hot swirling soup of protons and electrons, called a plasma. Because the particles have an electric charge, they are strongly controlled by the magnetic fields, and the most widely used configuration for containing them is a donut-shaped device called a tokamak. Most of these devices have produced their magnetic fields using conventional electromagnets made of copper, but the latest and largest version under construction in France, called ITER, uses what are known as low-temperature superconductors.

    The major innovation in the MIT-CFS fusion design is the use of high-temperature superconductors, which enable a much stronger magnetic field in a smaller space. This design was made possible by a new kind of superconducting material that became commercially available a few years ago. The idea initially arose as a class project in a nuclear engineering class taught by Whyte. The idea seemed so promising that it continued to be developed over the next few iterations of that class, leading to the ARC power plant design concept in early 2015. SPARC, designed to be about half the size of ARC, is a testbed to prove the concept before construction of the full-size, power-producing plant.

    Until now, the only way to achieve the colossally powerful magnetic fields needed to create a magnetic “bottle” capable of containing plasma heated up to hundreds of millions of degrees was to make them larger and larger. But the new high-temperature superconductor material, made in the form of a flat, ribbon-like tape, makes it possible to achieve a higher magnetic field in a smaller device, equaling the performance that would be achieved in an apparatus 40 times larger in volume using conventional low-temperature superconducting magnets. That leap in power versus size is the key element in ARC’s revolutionary design.
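
    The size advantage can be made quantitative with a commonly cited tokamak scaling (a rough design rule, not a derivation given in the article): at fixed plasma conditions and geometry, fusion power scales roughly as

    ```latex
    P_{\mathrm{fusion}} \;\propto\; B^{4}\,V
    ```

    where B is the magnetic field strength and V the plasma volume. Under this scaling, raising the field by a factor of about 2.5 allows the volume to shrink by roughly 2.5^4 ≈ 40 at comparable performance, which is the “40 times larger in volume” comparison quoted above.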

    The use of the new high-temperature superconducting magnets makes it possible to apply decades of experimental knowledge gained from the operation of tokamak experiments, including MIT’s own Alcator series. The new approach, led by Zach Hartwig, the MIT principal investigator and the Robert N. Noyce Career Development Assistant Professor of Nuclear Science and Engineering, uses a well-known design but scales everything down to about half the linear size and still achieves the same operational conditions because of the higher magnetic field.

    A series of scientific papers published last year outlined the physical basis and, by simulation, confirmed the viability of the new fusion device. The papers showed that, if the magnets worked as expected, the whole fusion system should indeed produce net power output, for the first time in decades of fusion research.

    Martin Greenwald, deputy director and senior research scientist at the PSFC, says unlike some other designs for fusion experiments, “the niche that we were filling was to use conventional plasma physics, and conventional tokamak designs and engineering, but bring to it this new magnet technology. So, we weren’t requiring innovation in a half-dozen different areas. We would just innovate on the magnet, and then apply the knowledge base of what’s been learned over the last decades.”

    That combination of scientifically established design principles and game-changing magnetic field strength is what makes it possible to achieve a plant that could be economically viable and developed on a fast track. “It’s a big moment,” says Bob Mumgaard, CEO of CFS. “We now have a platform that is both scientifically very well-advanced, because of the decades of research on these machines, and also commercially very interesting. What it does is allow us to build devices faster, smaller, and at less cost,” he says of the successful magnet demonstration. 

    Proof of the concept

    Bringing that new magnet concept to reality required three years of intensive work on design, establishing supply chains, and working out manufacturing methods for magnets that may eventually need to be produced by the thousands.

    “We built a first-of-a-kind, superconducting magnet. It required a lot of work to create unique manufacturing processes and equipment. As a result, we are now well prepared to ramp up for SPARC production,” says Joy Dunn, head of operations at CFS. “We started with a physics model and a CAD design, and worked through lots of development and prototypes to turn a design on paper into this actual physical magnet.” That entailed building manufacturing capabilities and testing facilities, including an iterative process with multiple suppliers of the superconducting tape, to help them reach the ability to produce material that met the needed specifications — and for which CFS is now overwhelmingly the world’s biggest user.

    They worked with two possible magnet designs in parallel, both of which ended up meeting the design requirements, she says. “It really came down to which one would revolutionize the way that we make superconducting magnets, and which one was easier to build.” The design they adopted clearly stood out in that regard, she says.

    In this test, the new magnet was gradually powered up in a series of steps until reaching the goal of a 20 tesla magnetic field — the highest field strength ever for a high-temperature superconducting fusion magnet. The magnet is composed of 16 plates stacked together, each one of which by itself would be the most powerful high-temperature superconducting magnet in the world.

    “Three years ago we announced a plan,” says Mumgaard, “to build a 20-tesla magnet, which is what we will need for future fusion machines.” That goal has now been achieved, right on schedule, even with the pandemic, he says.

    Citing the series of physics papers published last year, Brandon Sorbom, the chief science officer at CFS, says “basically the papers conclude that if we build the magnet, all of the physics will work in SPARC. So, this demonstration answers the question: Can they build the magnet? It’s a very exciting time! It’s a huge milestone.”

    The next step will be building SPARC, a smaller-scale version of the planned ARC power plant. The successful operation of SPARC will demonstrate that a full-scale commercial fusion power plant is practical, clearing the way for the rapid design and construction of that pioneering device to proceed at full speed.

    Zuber says that “I now am genuinely optimistic that SPARC can achieve net positive energy, based on the demonstrated performance of the magnets. The next step is to scale up, to build an actual power plant. There are still many challenges ahead, not the least of which is developing a design that allows for reliable, sustained operation. And realizing that the goal here is commercialization, another major challenge will be economic. How do you design these power plants so it will be cost-effective to build and deploy them?”

    Someday in a hoped-for future, when there may be thousands of fusion plants powering clean electric grids around the world, Zuber says, “I think we’re going to look back and think about how we got there, and I think the demonstration of the magnet technology, for me, is the time when I believed that, wow, we can really do this.”

    The successful creation of a power-producing fusion device would be a tremendous scientific achievement, Zuber notes. But that’s not the main point. “None of us are trying to win trophies at this point. We’re trying to keep the planet livable.”

  • Making catalytic surfaces more active to help decarbonize fuels and chemicals

    Electrochemical reactions that are accelerated using catalysts lie at the heart of many processes for making and using fuels, chemicals, and materials — including storing electricity from renewable energy sources in chemical bonds, an important capability for decarbonizing transportation fuels. Now, research at MIT could open the door to ways of making certain catalysts more active, and thus enhancing the efficiency of such processes.

    A new production process yielded catalysts that increased the efficiency of the chemical reactions by fivefold, potentially enabling useful new processes in biochemistry, organic chemistry, environmental chemistry, and electrochemistry. The findings are described today in the journal Nature Catalysis, in a paper by Yang Shao-Horn, an MIT professor of mechanical engineering and of materials science and engineering and a member of the Research Laboratory of Electronics (RLE); Tao Wang, a postdoc in RLE; Yirui Zhang, a graduate student in the Department of Mechanical Engineering; and five others.

    The process involves adding a layer of what’s called an ionic liquid in between a gold or platinum catalyst and a chemical feedstock. Catalysts produced with this method could potentially enable much more efficient conversion of hydrogen fuel to power devices such as fuel cells, or more efficient conversion of carbon dioxide into fuels.

    “There is an urgent need to decarbonize how we power transportation beyond light-duty vehicles, how we make fuels, and how we make materials and chemicals,” says Shao-Horn, emphasizing the pressing call to reduce carbon emissions highlighted in the latest IPCC report on climate change. This new approach to enhancing catalytic activity could provide an important step in that direction, she says.

    Using hydrogen in electrochemical devices such as fuel cells is one promising approach to decarbonizing fields such as aviation and heavy-duty vehicles, and the new process may help to make such uses practical. At present, the oxygen reduction reaction that powers such fuel cells is limited by its inefficiency. Previous attempts to improve that efficiency have focused on choosing different catalyst materials or modifying their surface compositions and structure.

    In this research, however, instead of modifying the solid surfaces, the team added a thin layer in between the catalyst and the electrolyte, the active material that participates in the chemical reaction. The ionic liquid layer, they found, regulates the activity of protons that help to increase the rate of the chemical reactions taking place on the interface.

    Because there is a great variety of such ionic liquids to choose from, it’s possible to “tune” proton activity and the reaction rates to match the energetics needed for processes involving proton transfer, which can be used to make fuels and chemicals through reactions with oxygen.

    “The proton activity and the barrier for proton transfer is governed by the ionic liquid layer, and so there’s great tunability in terms of catalytic activity for reactions involving proton and electron transfer,” Shao-Horn says. And the effect is produced by a vanishingly thin layer of the liquid, just a few nanometers thick, above which is a much thicker layer of the liquid that is to undergo the reaction.

    “I think this concept is novel and important,” says Wang, the paper’s first author, “because people know the proton activity is important in many electrochemistry reactions, but it’s very challenging to study.” That’s because in a water environment, there are so many interactions between neighboring water molecules involved that it’s very difficult to separate out which reactions are taking place. By using an ionic liquid, whose ions can each only form a single bond with the intermediate material, it became possible to study the reactions in detail, using infrared spectroscopy.

    As a result, Wang says, “Our finding highlights the critical role that interfacial electrolytes, in particular the intermolecular hydrogen bonding, can play in enhancing the activity of the electro-catalytic process. It also provides fundamental insights into proton transfer mechanisms at a quantum mechanical level, which can push the frontiers of knowing how protons and electrons interact at catalytic interfaces.”

    “The work is also exciting because it gives people a design principle for how they can tune the catalysts,” says Zhang. “We need some species right at a ‘sweet spot’ — not too active or too inert — to enhance the reaction rate.”

    With some of these techniques, says Reshma Rao, a recent doctoral graduate from MIT and now a postdoc at Imperial College, London, who is also a co-author of the paper, “we see up to a five-times increase in activity. I think the most exciting part of this research is the way it opens up a whole new dimension in the way we think about catalysis.” The field had hit “a kind of roadblock,” she says, in finding ways to design better materials. By focusing on the liquid layer rather than the surface of the material, “that’s kind of a whole different way of looking at this problem, and opens up a whole new dimension, a whole new axis along which we can change things and optimize some of these reaction rates.”

    The team also included Botao Huang, Bin Cai, and Livia Giordano in MIT’s Research Laboratory of Electronics, and Shi-Gang Sun at Xiamen University in China. The work was supported by the Toyota Research Institute and used the National Science Foundation’s Extreme Science and Engineering Discovery Environment (XSEDE).

  • Making the case for hydrogen in a zero-carbon economy

    As the United States races to achieve its goal of zero-carbon electricity generation by 2035, energy providers are swiftly ramping up renewable resources such as solar and wind. But because these technologies churn out electrons only when the sun shines and the wind blows, they need backup from other energy sources, especially during seasons of high electric demand. Currently, plants burning fossil fuels, primarily natural gas, fill in the gaps.

    “As we move to more and more renewable penetration, this intermittency will make a greater impact on the electric power system,” says Emre Gençer, a research scientist at the MIT Energy Initiative (MITEI). That’s because grid operators will increasingly resort to fossil-fuel-based “peaker” plants that compensate for the intermittency of the variable renewable energy (VRE) sources of sun and wind. “If we’re to achieve zero-carbon electricity, we must replace all greenhouse gas-emitting sources,” Gençer says.

    Low- and zero-carbon alternatives to greenhouse-gas emitting peaker plants are in development, such as arrays of lithium-ion batteries and hydrogen power generation. But each of these evolving technologies comes with its own set of advantages and constraints, and it has proven difficult to frame the debate about these options in a way that’s useful for policymakers, investors, and utilities engaged in the clean energy transition.

    Now, Gençer and Drake D. Hernandez SM ’21 have come up with a model that makes it possible to pin down the pros and cons of these peaker-plant alternatives with greater precision. Their hybrid technological and economic analysis, based on a detailed inventory of California’s power system, was published online last month in Applied Energy. While their work focuses on the most cost-effective solutions for replacing peaker power plants, it also contains insights intended to contribute to the larger conversation about transforming energy systems.

    “Our study’s essential takeaway is that hydrogen-fired power generation can be the more economical option when compared to lithium-ion batteries — even today, when the costs of hydrogen production, transmission, and storage are very high,” says Hernandez, who worked on the study while a graduate research assistant for MITEI. Adds Gençer, “If there is a place for hydrogen in the cases we analyzed, that suggests there is a promising role for hydrogen to play in the energy transition.”

    Adding up the costs

    California serves as a stellar paradigm for a swiftly shifting power system. The state draws more than 20 percent of its electricity from solar and approximately 7 percent from wind, with more VRE coming online rapidly. This means its peaker plants already play a pivotal role, coming online each evening when the sun goes down or when events such as heat waves drive up electricity use for days at a time.

    “We looked at all the peaker plants in California,” recounts Gençer. “We wanted to know the cost of electricity if we replaced them with hydrogen-fired turbines or with lithium-ion batteries.” The researchers used a core metric called the levelized cost of electricity (LCOE) as a way of comparing the costs of different technologies to each other. LCOE measures the average total cost of building and operating a particular energy-generating asset per unit of total electricity generated over the hypothetical lifetime of that asset.
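
    In its standard form (the article defines LCOE in words; the formula below is the conventional textbook definition, not taken from the paper), LCOE discounts both lifetime costs and lifetime generation:

    ```latex
    \mathrm{LCOE} \;=\;
      \frac{\displaystyle\sum_{t=0}^{T} \frac{I_t + O_t + F_t}{(1+r)^{t}}}
           {\displaystyle\sum_{t=0}^{T} \frac{E_t}{(1+r)^{t}}}
    ```

    where I_t, O_t, and F_t are the capital, operating, and fuel costs incurred in year t, E_t is the electricity generated that year, r is the discount rate, and T is the asset’s lifetime in years.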

    Selecting 2019 as their base study year, the team looked at the costs of running natural gas-fired peaker plants, which they defined as plants operating 15 percent of the year in response to gaps in intermittent renewable electricity. In addition, they determined the amount of carbon dioxide released by these plants and the expense of abating these emissions. Much of this information was publicly available.

    Coming up with prices for replacing peaker plants with massive arrays of lithium-ion batteries was also relatively straightforward: “There are no technical limitations to lithium-ion, so you can build as many as you want; but they are super expensive in terms of their footprint for energy storage and the mining required to manufacture them,” says Gençer.

    But then came the hard part: nailing down the costs of hydrogen-fired electricity generation. “The most difficult thing is finding cost assumptions for new technologies,” says Hernandez. “You can’t do this through a literature review, so we had many conversations with equipment manufacturers and plant operators.”

    The team considered two different forms of hydrogen fuel to replace natural gas, one produced through electrolyzer facilities that convert water and electricity into hydrogen, and another that reforms natural gas, yielding hydrogen and carbon waste that can be captured to reduce emissions. They also ran the numbers on retrofitting natural gas plants to burn hydrogen as opposed to building entirely new facilities. Their model includes identification of likely locations throughout the state and expenses involved in constructing these facilities.

    The researchers spent months compiling a giant dataset before setting out on the task of analysis. The results from their modeling were clear: “Hydrogen can be a more cost-effective alternative to lithium-ion batteries for peaking operations on a power grid,” says Hernandez. In addition, notes Gençer, “While certain technologies worked better in particular locations, we found that on average, reforming hydrogen rather than electrolytic hydrogen turned out to be the cheapest option for replacing peaker plants.”

    A tool for energy investors

    When he began this project, Gençer admits he “wasn’t hopeful” about hydrogen replacing natural gas in peaker plants. “It was kind of shocking to see in our different scenarios that there was a place for hydrogen.” That’s because the overall price tag for converting a fossil-fuel based plant to one based on hydrogen is very high, and such conversions likely won’t take place until more sectors of the economy embrace hydrogen, whether as a fuel for transportation or for varied manufacturing and industrial purposes.

    A nascent hydrogen production infrastructure does exist, mainly in the production of ammonia for fertilizer. But enormous investments will be necessary to expand this framework to meet grid-scale needs, driven by purposeful incentives. “With any of the climate solutions proposed today, we will need a carbon tax or carbon pricing; otherwise nobody will switch to new technologies,” says Gençer.

    The researchers believe studies like theirs could help key energy stakeholders make better-informed decisions. To that end, they have integrated their analysis into SESAME, a life cycle and techno-economic assessment tool for a range of energy systems that was developed by MIT researchers. Users can leverage this sophisticated modeling environment to compare costs of energy storage and emissions from different technologies, for instance, or to determine whether it is cost-efficient to replace a natural gas-powered plant with one powered by hydrogen.

    “As utilities, industry, and investors look to decarbonize and achieve zero-emissions targets, they have to weigh the costs of investing in low-carbon technologies today against the potential impacts of climate change moving forward,” says Hernandez, who is currently a senior associate in the energy practice at Charles River Associates. Hydrogen, he believes, will become increasingly cost-competitive as its production costs decline and markets expand.

    A study group member of MITEI’s soon-to-be published Future of Storage study, Gençer knows that hydrogen alone will not usher in a zero-carbon future. But, he says, “Our research shows we need to seriously consider hydrogen in the energy transition, start thinking about key areas where hydrogen should be used, and start making the massive investments necessary.”

    Funding for this research was provided by MITEI’s Low-Carbon Energy Centers and Future of Storage study.

  • Countering climate change with cool pavements

    Pavements are an abundant urban surface, covering around 40 percent of American cities. But in addition to carrying traffic, they can also emit heat.

    Due to what’s called the urban heat island effect, densely built, impermeable surfaces like pavements can absorb solar radiation and warm up their surroundings by re-emitting that radiation as heat. This phenomenon poses a serious threat to cities. It can raise air temperatures by as much as 7 degrees Fahrenheit and contributes to health and environmental risks — risks that climate change will magnify.

    In response, researchers at the MIT Concrete Sustainability Hub (MIT CSHub) are studying how a surface that ordinarily heightens urban heat islands can instead lessen their intensity. Their research focuses on “cool pavements,” which reflect more solar radiation and emit less heat than conventional paving surfaces.

    A recent study in the journal Environmental Science & Technology by a team of current and former MIT CSHub researchers outlines cool pavements and their implementation. The study found that they could lower air temperatures in Boston and Phoenix by up to 1.7 degrees Celsius (3 F) and 2.1 C (3.7 F), respectively. They would also reduce greenhouse gas emissions, cutting total emissions by up to 3 percent in Boston and 6 percent in Phoenix. Achieving these savings, however, requires that cool pavement strategies be selected according to the climate, traffic, and building configurations of each neighborhood.

    Cities like Los Angeles and Phoenix have already conducted sizeable experiments with cool pavements, but the technology is still not widely implemented. The CSHub team hopes their research can guide future cool paving projects to help cities cope with a changing climate.

    Scratching the surface

    It’s well known that darker surfaces get hotter in sunlight than lighter ones. Climate scientists use a metric called “albedo” to help describe this phenomenon.

    “Albedo is a measure of surface reflectivity,” explains Hessam AzariJafari, the paper’s lead author and a postdoc at the MIT CSHub. “Surfaces with low albedo absorb more light and tend to be darker, while high-albedo surfaces are brighter and reflect more light.”

    Albedo is central to cool pavements. Typical paving surfaces, like conventional asphalt, possess a low albedo and absorb more radiation and emit more heat. Cool pavements, however, have brighter materials that reflect more than three times as much radiation and, consequently, re-emit far less heat.
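
    As a rough illustration of why albedo matters (the albedo values below are typical figures assumed for illustration, not numbers from the study), the solar power a pavement absorbs scales with one minus its albedo:

    ```python
    # Illustrative only: solar power absorbed by a pavement scales with (1 - albedo).
    PEAK_SOLAR_W_PER_M2 = 1000.0  # typical peak solar irradiance (assumed)

    def absorbed_power(albedo: float, incident: float = PEAK_SOLAR_W_PER_M2) -> float:
        """Watts absorbed per square meter by a surface with the given albedo."""
        return (1.0 - albedo) * incident

    # Assumed, representative albedo values:
    for surface, albedo in [("conventional asphalt", 0.10), ("reflective concrete", 0.35)]:
        print(f"{surface} (albedo {albedo}): absorbs {absorbed_power(albedo):.0f} W/m^2")
    ```

    Even this crude estimate shows a reflective surface absorbing a few hundred watts per square meter less at midday, energy that would otherwise be re-emitted as heat into the urban environment.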

    “We can build cool pavements in many different ways,” says Randolph Kirchain, a researcher in the Materials Science Laboratory and co-director of the Concrete Sustainability Hub. “Brighter materials like concrete and lighter-colored aggregates offer higher albedo, while existing asphalt pavements can be made ‘cool’ through reflective coatings.”

    CSHub researchers examined several such options in a study of Boston and Phoenix. Their analysis considered the outcomes when concrete, reflective asphalt, and reflective concrete replaced conventional asphalt pavements — which make up more than 95 percent of pavements worldwide.

    Situational awareness

    For a comprehensive understanding of the environmental benefits of cool pavements in Boston and Phoenix, researchers had to look beyond just paving materials. That’s because in addition to lowering air temperatures, cool pavements exert direct and indirect impacts on climate change.  

    “The one direct impact is radiative forcing,” notes AzariJafari. “By reflecting radiation back into the atmosphere, cool pavements exert a radiative forcing, meaning that they change the Earth’s energy balance by sending more energy out of the atmosphere — similar to the polar ice caps.”

    Cool pavements also exert complex, indirect climate change impacts by altering energy use in adjacent buildings.

    “On the one hand, by lowering temperatures, cool pavements can reduce some need for AC [air conditioning] in the summer while increasing heating demand in the winter,” says AzariJafari. “Conversely, by reflecting light — called incident radiation — onto nearby buildings, cool pavements can warm structures up, which can increase AC usage in the summer and lower heating demand in the winter.”

    What’s more, albedo effects are only a portion of the overall life cycle impacts of a cool pavement. In fact, impacts from construction and materials extraction (referred to together as embodied impacts) and from the use of the pavement dominate the life cycle. The primary use-phase impact of a pavement — apart from albedo effects — is excess fuel consumption: Pavements with smooth surfaces and stiff structures cause less excess fuel consumption in the vehicles that drive on them.

    Assessing the climate-change impacts of cool pavements, then, is an intricate process — one involving many trade-offs. In their study, the researchers sought to analyze and measure them.

    A full reflection

    To determine the ideal implementation of cool pavements in Boston and Phoenix, researchers investigated the life cycle impacts of shifting from conventional asphalt pavements to three cool pavement options: reflective asphalt, concrete, and reflective concrete.

    To do this, they used coupled physical simulations to model buildings in thousands of hypothetical neighborhoods. Using this data, they then trained a neural network model to predict impacts based on building and neighborhood characteristics. With this tool in place, it was possible to estimate the impact of cool pavements for each of the thousands of roads and hundreds of thousands of buildings in Boston and Phoenix.
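
    A minimal sketch of that surrogate-modeling workflow appears below. Everything in it (the feature names, the synthetic stand-in for the simulation data, and the network size) is hypothetical, not drawn from the CSHub study; it only illustrates the pattern of training a fast neural-network predictor on physics-simulation outputs.

    ```python
    # Sketch: train a neural-network surrogate on (synthetic) simulation outputs
    # so impacts can be predicted cheaply for many real neighborhoods.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    # Hypothetical per-neighborhood features: mean building height (m),
    # building plan-area density (fraction), pavement albedo.
    X = rng.uniform(low=[3.0, 0.05, 0.05], high=[60.0, 0.6, 0.45], size=(5000, 3))

    # Stand-in for a coupled physical simulation's output (e.g., change in
    # annual building energy demand); a real study would use simulator results.
    y = 2.0 * X[:, 2] * (1 - X[:, 1]) - 0.02 * X[:, 0] + rng.normal(0, 0.05, 5000)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
    surrogate.fit(X_train, y_train)
    print("held-out R^2:", surrogate.score(X_test, y_test))
    ```

    Once trained, a model like this can score hundreds of thousands of building-and-road combinations in seconds, which is what makes city-scale analysis tractable.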

    In addition to albedo effects, they also looked at the embodied impacts for all pavement types and the effect of pavement type on vehicle excess fuel consumption due to surface qualities, stiffness, and deterioration rate.

    After assessing the life cycle impacts of each cool pavement type, the researchers calculated which material — conventional asphalt, reflective asphalt, concrete, and reflective concrete — benefited each neighborhood most. They found that while cool pavements were advantageous in Boston and Phoenix overall, the ideal materials varied greatly within and between both cities.

    “One benefit that was universal across neighborhood type and paving material was the impact of radiative forcing,” notes AzariJafari. “This was particularly the case in areas with shorter, less-dense buildings, where the effect was most pronounced.”

    Unlike radiative forcing, however, changes to building energy demand differed by location. In Boston, cool pavements reduced energy demand as often as they increased it across all neighborhoods. In Phoenix, cool pavements had a negative impact on energy demand in most census tracts due to incident radiation. When factoring in radiative forcing, though, cool pavements ultimately had a net benefit.

    Only after considering embodied emissions and impacts on fuel consumption did the ideal pavement type manifest for each neighborhood. Once factoring in uncertainty over the life cycle, researchers found that reflective concrete pavements had the best results, proving optimal in 53 percent and 73 percent of the neighborhoods in Boston and Phoenix, respectively.

    Once again, uncertainties and variations were identified. In Boston, replacing conventional asphalt pavements with a cool option was always preferred, while in Phoenix concrete pavements — reflective or not — had better outcomes due to their rigidity at high temperatures, which minimized vehicle fuel consumption. And despite the dominance of concrete in Phoenix, in 17 percent of its neighborhoods all reflective paving options proved roughly equally effective, while in 1 percent of cases, conventional pavements were actually superior.

    “Though the climate change impacts we studied have proven numerous and often at odds with each other, our conclusions are unambiguous: Cool pavements could offer immense climate change mitigation benefits for both cities,” says Kirchain.

    The improvements to air temperatures would be noticeable: the team found that cool pavements would lower peak summer air temperatures in Boston by 1.7 C (3 F) and in Phoenix by 2.1 C (3.7 F). The carbon dioxide emissions reductions would likewise be impressive. Boston would decrease its carbon dioxide emissions by as much as 3 percent over 50 years while reductions in Phoenix would reach 6 percent over the same period.

    This analysis is one of the most comprehensive studies of cool pavements to date — but there’s more to investigate. Just as with pavements, it’s also possible to adjust building albedo, which may result in changes to building energy demand. Intensive grid decarbonization and the introduction of low-carbon concrete mixtures may also alter the emissions generated by cool pavements.

    There’s still lots of ground to cover for the CSHub team. But by studying cool pavements, they’ve elevated a brilliant climate change solution and opened avenues for further research and future mitigation.

    The MIT Concrete Sustainability Hub is a team of researchers from several departments across MIT working on concrete and infrastructure science, engineering, and economics. Its research is supported by the Portland Cement Association and the Ready Mixed Concrete Research and Education Foundation.

  • A peculiar state of matter in layers of semiconductors

    Scientists around the world are developing new hardware for quantum computers, a new type of device that could accelerate drug design, financial modeling, and weather prediction. These computers rely on qubits, bits of matter that can represent some combination of 1 and 0 simultaneously. The problem is that qubits are fickle, degrading into regular bits when interactions with surrounding matter interfere. But new research at MIT suggests a way to protect their states, using a phenomenon called many-body localization (MBL).

    MBL is a peculiar phase of matter, proposed decades ago, that is unlike solid or liquid. Typically, matter comes to thermal equilibrium with its environment. That’s why soup cools and ice cubes melt. But in MBL, an object consisting of many strongly interacting bodies, such as atoms, never reaches such equilibrium. Heat, like sound, consists of collective atomic vibrations and can travel in waves; an object always has such heat waves internally. But when there’s enough disorder and enough interaction in the way its atoms are arranged, the waves can become trapped, thus preventing the object from reaching equilibrium.

    MBL had been demonstrated in “optical lattices,” arrangements of atoms at very cold temperatures held in place using lasers. But such setups are impractical. MBL had also arguably been shown in solid systems, but only with very slow temporal dynamics, in which the phase’s existence is hard to prove because equilibrium might be reached if researchers could wait long enough. The MIT research found signatures of MBL in a “solid-state” system — one made of semiconductors — that would otherwise have reached equilibrium within the time it was observed.

    “It could open a new chapter in the study of quantum dynamics,” says Rahul Nandkishore, a physicist at the University of Colorado at Boulder, who was not involved in the work.

    Mingda Li, the Norman C. Rasmussen Assistant Professor of Nuclear Science and Engineering at MIT, led the new study, published in a recent issue of Nano Letters. The researchers built a system containing alternating semiconductor layers, creating a microscopic lasagna — aluminum arsenide, followed by gallium arsenide, and so on, for 600 layers, each 3 nanometers (millionths of a millimeter) thick. Between the layers they dispersed “nanodots,” 2-nanometer particles of erbium arsenide, to create disorder. The lasagna, or “superlattice,” came in three recipes: one with no nanodots, one in which nanodots covered 8 percent of each layer’s area, and one in which they covered 25 percent.
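
    For a sense of scale, here is a short sketch (the layer count and thickness come from the description above; the data structure itself is just an illustration):

    ```python
    # Illustrative model of the superlattice: 600 alternating AlAs/GaAs layers,
    # each 3 nm thick, with ErAs nanodots dispersed between layers at one of
    # the three coverages tested (0, 8, or 25 percent of layer area).
    LAYER_THICKNESS_NM = 3.0
    NUM_LAYERS = 600

    def build_stack(nanodot_coverage: float) -> list[tuple[str, float, float]]:
        """Return (material, thickness_nm, nanodot_area_fraction) per layer."""
        return [
            ("AlAs" if i % 2 == 0 else "GaAs", LAYER_THICKNESS_NM, nanodot_coverage)
            for i in range(NUM_LAYERS)
        ]

    for coverage in (0.0, 0.08, 0.25):  # the three "recipes"
        stack = build_stack(coverage)
        total_um = sum(thickness for _, thickness, _ in stack) / 1000.0
        print(f"coverage {coverage:.0%}: {len(stack)} layers, {total_um:.1f} um thick")
    ```

    The whole stack is therefore only about 1.8 micrometers thick, which is why the X-ray probe described below had to graze the surface at just half a degree to avoid the substrate.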

    According to Li, the team used layers of material, instead of a bulk material, to simplify the system so dissipation of heat across the planes was essentially one-dimensional. And they used nanodots, instead of mere chemical impurities, to crank up the disorder.

    To measure whether these disordered systems were still reaching equilibrium, the researchers probed them with X-rays. Using the Advanced Photon Source at Argonne National Lab, they shot beams of radiation at an energy of more than 20,000 electron volts and resolved the energy difference between the incoming X-ray and the one reflected off the sample’s surface to better than one one-thousandth of an electron volt. To avoid penetrating the superlattice and hitting the underlying substrate, they aimed the beam at an angle of just half a degree from parallel.

    Just as light can be measured as waves or particles, so too can heat. The heat-carrying unit of collective atomic vibration is called a phonon. X-rays interact with these phonons, and by measuring how X-rays reflect off the sample, the experimenters can determine whether it is in equilibrium.

    The researchers found that when the superlattice was cold — 30 kelvin, about -400 degrees Fahrenheit — and contained nanodots, its phonons at certain frequencies remained out of equilibrium.

    More work remains to prove conclusively that MBL has been achieved, but “this new quantum phase can open up a whole new platform to explore quantum phenomena,” Li says, “with many potential applications, from thermal storage to quantum computing.”

    To create qubits, some quantum computers employ specks of matter called quantum dots. Li says quantum dots similar to his team’s nanodots could act as qubits. Magnets could read or write their quantum states, while the many-body localization would keep them insulated from heat and other environmental factors.

    In terms of thermal storage, such a superlattice might switch in and out of an MBL phase by magnetically controlling the nanodots. It could insulate computer parts from heat at one moment, then allow parts to disperse heat when it won’t cause damage. Or it could allow heat to build up and be harnessed later for generating electricity.

    Conveniently, superlattices with nanodots can be constructed using traditional techniques for fabricating semiconductors, alongside other elements of computer chips. According to Li, “It’s a much larger design space than with chemical doping, and there are numerous applications.”

    “I am excited to see that signatures of MBL can now also be found in real material systems,” says Immanuel Bloch, scientific director at the Max-Planck-Institute of Quantum Optics, of the new work. “I believe this will help us to better understand the conditions under which MBL can be observed in different quantum many-body systems and how possible coupling to the environment affects the stability of the system. These are fundamental and important questions and the MIT experiment is an important step helping us to answer them.”

    Funding was provided by the U.S. Department of Energy’s Basic Energy Sciences program’s Neutron Scattering Program.

  • Designing better batteries for electric vehicles

    The urgent need to cut carbon emissions is prompting a rapid move toward electrified mobility and expanded deployment of solar and wind on the electric grid. If those trends escalate as expected, the need for better methods of storing electrical energy will intensify.

    “We need all the strategies we can get to address the threat of climate change,” says Elsa Olivetti PhD ’07, the Esther and Harold E. Edgerton Associate Professor in Materials Science and Engineering. “Obviously, developing technologies for grid-based storage at a large scale is critical. But for mobile applications — in particular, transportation — much research is focusing on adapting today’s lithium-ion battery to make versions that are safer, smaller, and can store more energy for their size and weight.”

    Traditional lithium-ion batteries continue to improve, but they have limitations that persist, in part because of their structure. A lithium-ion battery consists of two electrodes — one positive and one negative — sandwiched around an organic (carbon-containing) liquid. As the battery is charged and discharged, electrically charged particles (or ions) of lithium pass from one electrode to the other through the liquid electrolyte.

    One problem with that design is that at certain voltages and temperatures, the liquid electrolyte can become volatile and catch fire. “Batteries are generally safe under normal usage, but the risk is still there,” says Kevin Huang PhD ’15, a research scientist in Olivetti’s group.

    Another problem is that lithium-ion batteries are not well-suited for use in vehicles. Large, heavy battery packs take up space and increase a vehicle’s overall weight, reducing fuel efficiency. But it’s proving difficult to make today’s lithium-ion batteries smaller and lighter while maintaining their energy density — that is, the amount of energy they store per gram of weight.

    To solve those problems, researchers are changing key features of the lithium-ion battery to make an all-solid, or “solid-state,” version. They replace the liquid electrolyte in the middle with a thin, solid electrolyte that’s stable at a wide range of voltages and temperatures. With that solid electrolyte, they use a high-capacity positive electrode and a high-capacity, lithium metal negative electrode that’s far thinner than the usual layer of porous carbon. Those changes make it possible to shrink the overall battery considerably while maintaining its energy-storage capacity, thereby achieving a higher energy density.
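
    The payoff of that redesign is easy to quantify. The short Python sketch below uses invented masses and capacity (placeholders, not measured values) purely to show that storing the same energy in a lighter package yields more watt-hours per kilogram.

        # Illustrative only: why a smaller, lighter cell of equal capacity
        # has higher energy density. All figures are assumed placeholders.
        stored_wh = 300.0         # energy per cell, unchanged by the redesign
        mass_liquid_g = 1200.0    # conventional cell with liquid electrolyte
        mass_solid_g = 800.0      # solid-state cell: same energy, less mass

        for label, grams in (("liquid electrolyte", mass_liquid_g),
                             ("solid state", mass_solid_g)):
            print(f"{label}: {stored_wh / (grams / 1000.0):.0f} Wh/kg")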

    “Those features — enhanced safety and greater energy density — are probably the two most-often-touted advantages of a potential solid-state battery,” says Huang. He then quickly clarifies that “all of these things are prospective, hoped-for, and not necessarily realized.” Nevertheless, the possibility has many researchers scrambling to find materials and designs that can deliver on that promise.

    Thinking beyond the lab

    Researchers have come up with many intriguing options that look promising — in the lab. But Olivetti and Huang believe that additional practical considerations may be important, given the urgency of the climate change challenge. “There are always metrics that we researchers use in the lab to evaluate possible materials and processes,” says Olivetti. Examples might include energy-storage capacity and charge/discharge rate. When performing basic research — which she deems both necessary and important — those metrics are appropriate. “But if the aim is implementation, we suggest adding a few metrics that specifically address the potential for rapid scaling,” she says.

    Based on industry’s experience with current lithium-ion batteries, the MIT researchers and their colleague Gerbrand Ceder, the Daniel M. Tellep Distinguished Professor of Engineering at the University of California at Berkeley, suggest three broad questions that can help identify potential constraints on future scale-up as a result of materials selection. First, with this battery design, could materials availability, supply chains, or price volatility become a problem as production scales up? (Note that the environmental and other concerns raised by expanded mining are outside the scope of this study.) Second, will fabricating batteries from these materials involve difficult manufacturing steps during which parts are likely to fail? And third, do manufacturing measures needed to ensure a high-performance product based on these materials ultimately lower or raise the cost of the batteries produced?

    To demonstrate their approach, Olivetti, Ceder, and Huang examined some of the electrolyte chemistries and battery structures now being investigated by researchers. To select their examples, they turned to previous work in which they and their collaborators used text- and data-mining techniques to gather information on materials and processing details reported in the literature. From that database, they selected a few frequently reported options that represent a range of possibilities.

    Materials and availability

    In the world of solid inorganic electrolytes, there are two main classes of materials — the oxides, which contain oxygen, and the sulfides, which contain sulfur. Olivetti, Ceder, and Huang focused on one promising electrolyte option in each class and examined key elements of concern for each of them.

    The sulfide they considered was LGPS, which combines lithium, germanium, phosphorus, and sulfur. Based on availability considerations, they focused on germanium, an element that raises concerns in part because it’s not generally mined on its own. Instead, it’s a byproduct of coal and zinc mining.

    To investigate its availability, the researchers looked at how much germanium was actually produced during coal and zinc mining in each of the past six decades, and then at how much could have been produced. The outcome suggested that 100 times more germanium could have been produced, even in recent years. Given that supply potential, the availability of germanium is not likely to constrain the scale-up of a solid-state battery based on an LGPS electrolyte.
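
    The article doesn’t give the study’s inputs, but the shape of that availability screen is easy to sketch in Python. Every figure below is an illustrative placeholder: the idea is simply to estimate how much germanium the host streams could have yielded and compare it with what was actually recovered.

        # Availability screen, with purely illustrative placeholder figures.
        # Potential output = host throughput x byproduct content x recovery.
        HOSTS = {
            # host stream: (annual output, tonnes; Ge content, ppm; recovery)
            "zinc concentrate": (13e6, 100, 0.8),
            "coal":             (8e9,   2, 0.5),
        }
        ACTUAL_GE_TONNES = 130  # rough scale of recent annual germanium output

        potential = sum(tonnes * ppm * 1e-6 * recovery
                        for tonnes, ppm, recovery in HOSTS.values())
        print(f"potential: {potential:,.0f} t/yr, "
              f"headroom: {potential / ACTUAL_GE_TONNES:.0f}x actual output")

    The same screen applies to tantalum, discussed below; there, historical output sits much closer to the potential ceiling, so the headroom is far smaller.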

    The situation looked less promising with the researchers’ selected oxide, LLZO, which consists of lithium, lanthanum, zirconium, and oxygen. Extraction and processing of lanthanum are largely concentrated in China, and there’s limited data available, so the researchers didn’t try to analyze its availability. The other three elements are abundantly available. However, in practice, a small quantity of another element — called a dopant — must be added to make LLZO easy to process. So the team focused on tantalum, the most frequently used dopant, as the main element of concern for LLZO.

    Tantalum is produced as a byproduct of tin and niobium mining. Historical data show that the amount of tantalum produced during tin and niobium mining was much closer to the potential maximum than was the case with germanium. So the availability of tantalum is more of a concern for the possible scale-up of an LLZO-based battery.

    But knowing the availability of an element in the ground doesn’t address the steps required to get it to a manufacturer. So the researchers investigated a follow-on question concerning the supply chains for critical elements — mining, processing, refining, shipping, and so on. Assuming that abundant supplies are available, can the supply chains that deliver those materials expand quickly enough to meet the growing demand for batteries?

    In sample analyses, they looked at how much supply chains for germanium and tantalum would need to grow year to year to provide batteries for a projected fleet of electric vehicles in 2030. As an example, an electric vehicle fleet often cited as a goal for 2030 would require production of enough batteries to deliver a total of 100 gigawatt hours of energy. To meet that goal using just LGPS batteries, the supply chain for germanium would need to grow by 50 percent from year to year — a stretch, since the maximum growth rate in the past has been about 7 percent. Using just LLZO batteries, the supply chain for tantalum would need to grow by about 30 percent — a growth rate well above the historical high of about 10 percent.
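
    Those year-over-year figures are the output of a compound-growth calculation, sketched below in Python. The 100-gigawatt-hour fleet target comes from the article; the material intensity, current supply, and time horizon are assumptions chosen only to land in a plausible range.

        def required_cagr(current, target, years):
            """Constant year-over-year growth turning `current` into `target`."""
            return (target / current) ** (1.0 / years) - 1.0

        FLEET_DEMAND_KWH = 100e6  # 100 GWh of batteries, the 2030 goal cited above
        GE_KG_PER_KWH = 0.03      # germanium per kWh of LGPS cells (assumed)
        CURRENT_SUPPLY_T = 130    # current annual germanium supply, tonnes (assumed)
        YEARS = 8                 # e.g., 2022 to 2030 (assumed)

        target_t = FLEET_DEMAND_KWH * GE_KG_PER_KWH / 1000.0  # kg to tonnes
        growth = required_cagr(CURRENT_SUPPLY_T, target_t, YEARS)
        print(f"needed: {target_t:,.0f} t/yr, growth: {growth:.0%} per year")

    With these placeholder inputs, the required growth comes out near 50 percent per year, matching the scale of the mismatch the researchers describe.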

    Those examples demonstrate the importance of considering both materials availability and supply chains when evaluating different solid electrolytes for their scale-up potential. “Even when the quantity of a material available isn’t a concern, as is the case with germanium, scaling all the steps in the supply chain to match the future production of electric vehicles may require a growth rate that’s literally unprecedented,” says Huang.

    Materials and processing

    In assessing the potential for scale-up of a battery design, another factor to consider is the difficulty of the manufacturing process and how it may impact cost. Fabricating a solid-state battery inevitably involves many steps, and a failure at any step raises the cost of each battery successfully produced. As Huang explains, “You’re not shipping those failed batteries; you’re throwing them away. But you’ve still spent money on the materials and time and processing.”

    As a proxy for manufacturing difficulty, Olivetti, Ceder, and Huang explored the impact of failure rate on overall cost for selected solid-state battery designs in their database. In one example, they focused on the oxide LLZO. LLZO is extremely brittle, and at the high temperatures involved in manufacturing, a large sheet that’s thin enough to use in a high-performance solid-state battery is likely to crack or warp.

    To determine the impact of such failures on cost, they modeled four key processing steps in assembling LLZO-based batteries. At each step, they calculated cost based on an assumed yield — that is, the fraction of total units that were successfully processed without failing. With LLZO, the yield was far lower than with the other designs they examined, and as the yield went down, the cost of each kilowatt-hour (kWh) of battery energy rose significantly. For example, when 5 percent more units failed during the final cathode heating step, cost increased by about $30/kWh — a nontrivial change considering that a commonly accepted target cost for such batteries is $100/kWh. Clearly, manufacturing difficulties can have a profound impact on the viability of a design for large-scale adoption.
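
    A minimal version of that yield model fits in a few lines of Python. Units that fail at a step are discarded along with everything already spent on them, so the cost of each good cell scales inversely with the cumulative yield. The step costs, yields, and cell size below are assumed placeholders, not the study’s figures.

        def cost_per_kwh(step_costs, step_yields, kwh_per_cell):
            """Cost of one good cell, in $/kWh, given per-step costs and yields."""
            survivors, spend = 1.0, 0.0
            for cost, y in zip(step_costs, step_yields):
                spend += survivors * cost  # every unit entering a step pays its cost
                survivors *= y             # only this fraction moves on
            return spend / survivors / kwh_per_cell

        COSTS = [20.0, 15.0, 10.0, 25.0]  # $/cell at four processing steps (assumed)
        BASE  = [0.95, 0.95, 0.95, 0.90]  # assumed yields at each step
        WORSE = [0.95, 0.95, 0.95, 0.85]  # final heating step fails 5 points more often

        for label, yields in (("base", BASE), ("worse", WORSE)):
            print(f"{label}: ${cost_per_kwh(COSTS, yields, 1.0):.0f}/kWh")

    With these placeholder numbers, the 5-point yield loss adds only a few dollars per kWh; in the study’s LLZO model, where the final step is both costlier and more failure-prone, the same change added roughly $30/kWh.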

    Materials and performance

    One of the main challenges in designing an all-solid battery comes from “interfaces” — that is, where one component meets another. During manufacturing or operation, materials at those interfaces can become unstable. “Atoms start going places that they shouldn’t, and battery performance declines,” says Huang.

    As a result, much research is devoted to finding ways to stabilize the interfaces in different battery designs. Many of the proposed methods do improve performance, and as a result the cost of the battery in dollars per kWh goes down. But implementing such solutions generally involves added materials and processing time, increasing the cost per kWh during large-scale manufacturing.

    To illustrate that trade-off, the researchers first examined their oxide, LLZO. Here, the goal is to stabilize the interface between the LLZO electrolyte and the negative electrode by inserting a thin layer of tin between the two. They analyzed the impacts — both positive and negative — of that solution on cost. They found that adding the tin separator increases energy-storage capacity and improves performance, which reduces the unit cost in dollars/kWh. But the cost of including the tin layer outweighs those savings, so the final cost per kWh ends up higher than the original.

    In another analysis, they looked at a sulfide electrolyte called LPSCl, which consists of lithium, phosphorus, and sulfur with a bit of added chlorine. In this case, the positive electrode incorporates particles of the electrolyte material — a method of ensuring that the lithium ions can find a pathway through the electrolyte to the other electrode. However, the added electrolyte particles are not compatible with other particles in the positive electrode — another interface problem. In this case, a standard solution is to add a “binder,” another material that makes the particles stick together.

    Their analysis confirmed that without the binder, performance is poor, and the cost of the LPSCl-based battery is more than $500/kWh. Adding the binder improves performance significantly, and the cost drops by almost $300/kWh. In this case, the cost of adding the binder during manufacturing is so low that essentially all of the performance-driven cost decrease is realized. Here, the method implemented to solve the interface problem pays off in lower costs.
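
    Both interface analyses reduce to the same before-and-after comparison: divide the cell’s cost by its usable energy, with and without the fix. The figures in the Python sketch below are invented to mimic the qualitative outcomes described above; they are not the study’s values.

        def unit_cost(cell_cost_usd, cell_energy_kwh):
            """Dollars per kilowatt-hour for a single cell."""
            return cell_cost_usd / cell_energy_kwh

        # Tin-layer case (assumed figures): performance improves, but the
        # added layer costs more than the $/kWh it saves, so unit cost rises.
        print(f"tin: ${unit_cost(90.0, 0.60):.0f} -> ${unit_cost(101.0, 0.66):.0f} per kWh")

        # Binder case (assumed figures): a cheap additive unlocks a large
        # performance gain, so unit cost falls sharply.
        print(f"binder: ${unit_cost(85.0, 0.16):.0f} -> ${unit_cost(86.0, 0.40):.0f} per kWh")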

    The researchers performed similar studies of other promising solid-state batteries reported in the literature, and their results were consistent: The choice of battery materials and processes can affect not only near-term outcomes in the lab but also the feasibility and cost of manufacturing the proposed solid-state battery at the scale needed to meet future demand. The results also showed that considering all three factors together — availability, processing needs, and battery performance — is important because there may be collective effects and trade-offs involved.

    Olivetti is proud of the range of concerns the team’s approach can probe. But she stresses that it’s not meant to replace traditional metrics used to guide materials and processing choices in the lab. “Instead, it’s meant to complement those metrics by also looking broadly at the sorts of things that could get in the way of scaling” — an important consideration given what Huang calls “the urgent ticking clock” of clean energy and climate change.

    This research was supported by the Seed Fund Program of the MIT Energy Initiative (MITEI) Low-Carbon Energy Center for Energy Storage; by Shell, a founding member of MITEI; and by the U.S. Department of Energy’s Office of Energy Efficiency and Renewable Energy, Vehicle Technologies Office, under the Advanced Battery Materials Research Program. The text mining work was supported by the National Science Foundation, the Office of Naval Research, and MITEI.

    This article appears in the Spring 2021 issue of Energy Futures, the magazine of the MIT Energy Initiative.