More stories

  • Predicting building emissions across the US

    The United States is entering a building boom. Between 2017 and 2050, it will build the equivalent of New York City 20 times over. Yet, to meet climate targets, the nation must also significantly reduce the greenhouse gas (GHG) emissions of its buildings, which comprise 27 percent of the nation’s total emissions.

    A team of current and former MIT Concrete Sustainability Hub (CSHub) researchers is addressing these conflicting demands with the aim of giving policymakers the tools and information to act. They have detailed the results of their collaboration in a recent paper in the journal Applied Energy that projects emissions for all buildings across the United States under two GHG reduction scenarios.

    Their paper found that “embodied” emissions — those from materials production and construction — would represent around a quarter of emissions between 2016 and 2050 despite extensive construction.

    Further, many regions would have varying priorities for GHG reductions; some, like the West, would benefit most from reductions to embodied emissions, while others, like parts of the Midwest, would see the greatest payoff from interventions to emissions from energy consumption. If these regional priorities were addressed aggressively, building sector emissions could be reduced by around 30 percent between 2016 and 2050.

    Quantifying contradictions

    Modern buildings are far more complex — and efficient — than their predecessors. Due to new technologies and more stringent building codes, they can offer lower energy consumption and operational emissions. And yet, more-efficient materials and improved construction standards can also generate greater embodied emissions.

    Concrete, in many ways, epitomizes this tradeoff. Though its durability can minimize energy-intensive repairs over a building’s operational life, the scale of its production means that it contributes to a large proportion of the embodied impacts in the building sector.

    As such, the team centered GHG reductions for concrete in its analysis.

    “We took a bottom-up approach, developing reference designs based on a set of residential and commercial building models,” explains Ehsan Vahidi, an assistant professor at the University of Nevada at Reno and a former CSHub postdoc. “These designs were differentiated by roof and slab insulation, HVAC efficiency, and construction materials — chiefly concrete and wood.”

    After measuring the operational and embodied GHG emissions for each reference design, the team scaled up their results to the county level and then national level based on building stock forecasts. This allowed them to estimate the emissions of the entire building sector between 2016 and 2050.
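    A minimal sketch of that bottom-up scaling step, with hypothetical reference designs and stock numbers standing in for the paper’s far more detailed inputs:

    ```python
    # Bottom-up scaling sketch (illustrative only): per-building reference
    # designs are multiplied by a building-stock forecast to get totals.

    # One-time embodied and annual operational emissions, in tons CO2-equivalent.
    REFERENCE_DESIGNS = {
        "residential_wood":     {"embodied_t": 40.0,  "operational_t_per_yr": 5.0},
        "residential_concrete": {"embodied_t": 55.0,  "operational_t_per_yr": 4.5},
        "commercial_concrete":  {"embodied_t": 900.0, "operational_t_per_yr": 60.0},
    }

    # County-level forecast: (design, existing stock, new builds per year).
    COUNTY_FORECAST = [
        ("residential_wood",     12_000, 150),
        ("residential_concrete",  3_000,  80),
        ("commercial_concrete",     400,   6),
    ]

    def county_emissions(forecast, years=35):
        """Scale reference-design emissions up to a county total over `years`."""
        embodied = operational = 0.0
        for design, existing, new_per_yr in forecast:
            ref = REFERENCE_DESIGNS[design]
            # Embodied emissions accrue only when something new is built.
            embodied += ref["embodied_t"] * new_per_yr * years
            # Operational emissions accrue for existing stock plus the growing
            # new stock (added uniformly, so roughly years/2 on average).
            operational += ref["operational_t_per_yr"] * (
                existing * years + new_per_yr * years * years / 2
            )
        return embodied, operational

    emb, op = county_emissions(COUNTY_FORECAST)
    print(f"embodied: {emb:,.0f} t, operational: {op:,.0f} t "
          f"({100 * emb / (emb + op):.0f}% embodied)")
    ```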

    To understand how various interventions could cut GHG emissions, researchers ran two different scenarios — a “projected” and an “ambitious” scenario — through their framework.

    The projected scenario corresponded to current trends. It assumed grid decarbonization would follow Energy Information Administration predictions; the widespread adoption of new energy codes; efficiency improvement of lighting and appliances; and, for concrete, the implementation of 50 percent low-carbon cements and binders in all new concrete construction and the adoption of full carbon capture, storage, and utilization (CCUS) of all cement and concrete emissions.

    “Our ambitious scenario was intended to reflect a future where more aggressive actions are taken to reduce GHG emissions and achieve the targets,” says Vahidi. “Therefore, the ambitious scenario took these same strategies [of the projected scenario] but featured more aggressive targets for their implementation.”

    For instance, it assumed a 33 percent reduction in grid emissions by 2050 and moved the projected deadlines for lighting and appliances and thermal insulation forward by five and 10 years, respectively. Concrete decarbonization occurred far more quickly as well.

    Reductions and variations

    The extensive growth forecast for the U.S. building sector will inevitably generate a sizable amount of emissions. But how much can this figure be minimized?

    Without the implementation of any GHG reduction strategies, the team found that the building sector would emit 62 gigatons CO2 equivalent between 2016 and 2050. That’s comparable to the emissions generated from 156 trillion passenger vehicle miles traveled.

    But both GHG reduction scenarios could cut the emissions from this unmitigated, business-as-usual scenario significantly.

    Under the projected scenario, emissions would fall to 45 gigatons CO2 equivalent — a 27 percent decrease over the analysis period. The ambitious scenario would offer a further 6 percent reduction over the projected scenario, reaching 40 gigatons CO2 equivalent — like removing around 55 trillion passenger vehicle miles from the road over the period.

    “In both scenarios, the largest contributor to reductions was the greening of the energy grid,” notes Vahidi. “Other notable opportunities for reductions were from increasing the efficiency of lighting, HVAC, and appliances. Combined, these four attributes contributed to 85 percent of the emissions over the analysis period. Improvements to them offered the greatest potential emissions reductions.”

    The remaining attributes, thermal insulation and low-carbon concrete, had a smaller impact on emissions and, consequently, offered smaller reduction opportunities. That’s because these two attributes were applied only to new construction in the analysis, which was outnumbered by existing structures throughout the period.

    The disparities in impact between strategies aimed at new and existing structures underscore a broader finding: Despite extensive construction over the period, embodied emissions would comprise just 23 percent of cumulative emissions between 2016 and 2050, with the remainder coming primarily from operation.  

    “This is a consequence of existing structures far outnumbering new structures,” explains Jasmina Burek, a CSHub postdoc and an incoming assistant professor at the University of Massachusetts Lowell. “The operational emissions generated by all new and existing structures between 2016 and 2050 will always greatly exceed the embodied emissions of new structures at any given time, even as buildings become more efficient and the grid gets greener.”

    Yet the emissions reductions from both scenarios were not distributed evenly across the entire country. The team identified several regional variations that could have implications for how policymakers must act to reduce building sector emissions.

    “We found that western regions in the United States would see the greatest reduction opportunities from interventions to residential emissions, which would constitute 90 percent of the region’s total emissions over the analysis period,” says Vahidi.

    The predominance of residential emissions stems from the region’s ongoing population surge and its subsequent growth in housing stock. Proposed solutions would include CCUS and low-carbon binders for concrete production, and improvements to energy codes aimed at residential buildings.

    As with the West, ideal solutions for the Southeast would include CCUS, low-carbon binders, and improved energy codes.

    “In the case of Southeastern regions, interventions should equally target commercial and residential buildings, which we found were split more evenly among the building stock,” explains Burek. “Due to the stringent energy codes in both regions, interventions to operational emissions were less impactful than those to embodied emissions.”

    Much of the Midwest saw the inverse outcome. Its energy mix remains one of the most carbon-intensive in the nation, and improvements to energy efficiency and the grid would have a large payoff — particularly in Missouri, Kansas, and Colorado.

    New England and California would see the smallest reductions. As their already-strict energy codes would limit further operational reductions, opportunities to reduce embodied emissions would be the most impactful.

    This tremendous regional variation uncovered by the MIT team is in many ways a reflection of the great demographic and geographic diversity of the nation as a whole. And there are still further variables to consider.

    In addition to GHG emissions, future research could consider other environmental impacts, like water consumption and air quality. Other mitigation strategies to consider include longer building lifespans, retrofitting, rooftop solar, and recycling and reuse.

    In this sense, their findings represent the lower bounds of what is possible in the building sector. And even if further improvements are ultimately possible, they’ve shown that regional variation will invariably inform those environmental impact reductions.

    The MIT Concrete Sustainability Hub is a team of researchers from several departments across MIT working on concrete and infrastructure science, engineering, and economics. Its research is supported by the Portland Cement Association and the Ready Mixed Concrete Research and Education Foundation.

  • Crossing disciplines, adding fresh eyes to nuclear engineering

    Sometimes patterns repeat in nature. Spirals appear in sunflowers and hurricanes. Branches occur in veins and lightning. Limiao Zhang, a doctoral student in MIT’s Department of Nuclear Science and Engineering, has found another similarity: between street traffic and boiling water, with implications for preventing nuclear meltdowns.

    Growing up in China, Zhang enjoyed watching her father repair things around the house. He couldn’t fulfill his dream of becoming an engineer, instead joining the police force, but Zhang did have that opportunity and studied mechanical engineering at Three Gorges University. Being one of four girls among about 50 boys in the major didn’t discourage her. “My father always told me girls can do anything,” she says. She graduated at the top of her class.

    In college, she and a team of classmates won a national engineering competition. They designed and built a model of a carousel powered by solar, hydroelectric, and pedal power. One judge asked how long the system could operate safely. “I didn’t have a perfect answer,” she recalls. She realized that engineering means designing products that not only function, but are resilient. So for her master’s degree, at Beihang University, she turned to industrial engineering and analyzed the reliability of critical infrastructure, in particular traffic networks.

    “Among all the critical infrastructures, nuclear power plants are quite special,” Zhang says. “Although one can provide very enormous carbon-free energy, once it fails, it can cause catastrophic results.” So she decided to switch fields again and study nuclear engineering. At the time she had no nuclear background, and hadn’t studied in the United States, but “I tried to step out of my comfort zone,” she says. “I just applied and MIT welcomed me.” Her supervisor, Matteo Bucci, and her classmates explained the basics of fission reactions as she adjusted to the new material, language, and environment. She doubted herself — “my friend told me, ‘I saw clouds above your head’” — but she passed her first-year courses and published her first paper soon afterward.

    Much of the work in Bucci’s lab deals with what’s called the boiling crisis. In many applications, such as nuclear plants and powerful computers, water cools things. When a hot surface boils water, bubbles cling to the surface before rising, but if too many form, they merge into a layer of vapor that insulates the surface. The heat has nowhere to go — a boiling crisis.

    Bucci invited Zhang into his lab in part because she saw a connection between traffic and heat transfer. The data plots of both phenomena look surprisingly similar. “The mathematical tools she had developed for the study of traffic jams were a completely different way of looking into our problem,” Bucci says, “by using something which is intuitively not connected.”

    One can view bubbles as cars. The more there are, the more they interfere with each other. People studying boiling had focused on the physics of individual bubbles. Zhang instead uses statistical physics to analyze collective patterns of behavior. “She brings a different set of skills, a different set of knowledge, to our research,” says Guanyu Su, a postdoc in the lab. “That’s very refreshing.”

    In her first paper on the boiling crisis, published in Physical Review Letters, Zhang used theory and simulations to identify scale-free behavior in boiling: just as in traffic, the same patterns appear whether zoomed in or out, in terms of space or time. Both small and large bubbles matter. Using this insight, the team found certain physical parameters that could predict a boiling crisis. Zhang’s mathematical tools both explain experimental data and suggest new experiments to try. For a second paper, the team collected more data and found ways to predict the boiling crisis in a wider variety of conditions.
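    Scale-free statistics of this kind typically show up as a power law, a straight line on a log-log plot of the bubble-size distribution. A toy check on synthetic data, purely illustrative since the published analysis is far more careful about binning and goodness of fit:

    ```python
    import numpy as np

    # Illustrative check for scale-free (power-law) statistics in bubble sizes.
    # The data here are synthetic, not taken from the experiments.
    rng = np.random.default_rng(0)

    # Draw sizes from a Pareto (power-law) distribution with exponent alpha.
    alpha = 2.5
    sizes = rng.pareto(alpha - 1, size=100_000) + 1.0  # sizes >= 1

    # Logarithmic binning, then a straight-line fit in log-log space.
    bins = np.logspace(0, np.log10(sizes.max()), 30)
    counts, edges = np.histogram(sizes, bins=bins, density=True)
    centers = np.sqrt(edges[:-1] * edges[1:])
    mask = counts > 0
    slope, intercept = np.polyfit(np.log10(centers[mask]), np.log10(counts[mask]), 1)

    print(f"fitted exponent ~ {-slope:.2f} (expected ~ {alpha:.1f})")
    ```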

    Zhang’s thesis and third paper, both in progress, propose a universal law for explaining the crisis. “She translated the mechanism into a physical law, like F=ma or E=mc2,” Bucci says. “She came up with an equally simple equation.” Zhang says she’s learned a lot from colleagues in the department who are pioneering new nuclear reactors or other technologies, “but for my own work, I try to get down to the very basics of a phenomenon.”

    Bucci describes Zhang as determined, open-minded, and commendably self-critical. Su says she’s careful, optimistic, and courageous. “If I imagine going from heat transfer to city planning, that would be almost impossible for me,” he says. “She has a strong mind.” Last year, Zhang gave birth to a boy, whom she’s raising on her own as she does her research. (Her husband is stuck in China during the pandemic.) “This, to me,” Bucci says, “is almost superhuman.”

    Zhang will graduate at the end of the year, and has started looking for jobs back in China. She wants to continue in the energy field, though maybe not nuclear. “I will use my interdisciplinary knowledge,” she says. “I hope I can design safer and more efficient and more reliable systems to provide energy for our society.”

  • 3 Questions: Daniel Cohn on the benefits of high-efficiency, flexible-fuel engines for heavy-duty trucking

    The California Air Resources Board has adopted a regulation that requires truck and engine manufacturers to reduce the nitrogen oxide (NOx) emissions from new heavy-duty trucks by 90 percent starting in 2027. NOx from heavy-duty trucks is one of the main sources of air pollution, creating smog and threatening respiratory health. This regulation requires the largest air pollution cuts in California in more than a decade. How can manufacturers achieve this aggressive goal efficiently and affordably?

    Daniel Cohn, a research scientist at the MIT Energy Initiative, and Leslie Bromberg, a principal research scientist at the MIT Plasma Science and Fusion Center, have been working on a high-efficiency, gasoline-ethanol engine that is cleaner and more cost-effective than existing diesel engine technologies. Here, Cohn explains the flexible-fuel engine approach and why it may be the most realistic solution — in the near term — to help California meet its stringent vehicle emission reduction goals. The research was sponsored by the Arthur Samberg MIT Energy Innovation fund.

    Q. How does your high-efficiency, flexible-fuel gasoline engine technology work?

    A. Our goal is to provide an affordable solution for heavy-duty vehicle (HDV) engines to produce low levels of nitrogen oxide (NOx) emissions that would meet California’s NOx regulations, while also quick-starting gasoline-consumption reductions in a substantial fraction of the HDV fleet.

    Presently, large trucks and other HDVs generally use diesel engines. The main reason is their high efficiency, which reduces fuel cost — a key factor for commercial trucks (especially long-haul trucks) because of the large number of miles that are driven. However, the NOx emissions from these diesel-powered vehicles are around 10 times greater than those from spark-ignition engines powered by gasoline or ethanol.

    Spark-ignition gasoline engines are primarily used in cars and light trucks (light-duty vehicles), which employ a three-way catalyst exhaust treatment system (generally referred to as a catalytic converter) that reduces vehicle NOx emissions by at least 98 percent and at a modest cost. The use of this highly effective exhaust treatment system is enabled by the capability of spark-ignition engines to be operated at a stoichiometric air/fuel ratio (where the amount of air matches what is needed for complete combustion of the fuel).
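    For reference, the stoichiometric requirement can be written out for pure octane, a stand-in for gasoline (an illustration not drawn from the article; the commonly quoted figure for commercial gasoline is about 14.7 parts air to 1 part fuel by mass):

    ```latex
    % Stoichiometric combustion of octane; air is roughly 23.2 percent O2 by mass.
    2\,\mathrm{C_8H_{18}} + 25\,\mathrm{O_2} \;\rightarrow\; 16\,\mathrm{CO_2} + 18\,\mathrm{H_2O},
    \qquad
    \mathrm{AFR_{mass}} \approx \frac{12.5 \times 32 / 0.232}{114} \approx 15.1
    ```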

    Diesel engines do not operate with stoichiometric air/fuel ratios, making it much more difficult to reduce NOx emissions. Their state-of-the-art exhaust treatment system is much more complex and expensive than catalytic converters, and even with it, vehicles produce NOx emissions around 10 times higher than spark-ignition engine vehicles. Consequently, it is very challenging for diesel engines to further reduce their NOx emissions to meet the new California regulations.

    Our approach uses spark-ignition engines that can be powered by gasoline, ethanol, or mixtures of gasoline and ethanol as a substitute for diesel engines in HDVs. Gasoline has the attractive feature of being widely available and having a comparable or lower cost than diesel fuel. In addition, presently available ethanol in the U.S. produces up to 40 percent less greenhouse gas (GHG) emissions than diesel fuel or gasoline and has a widely available distribution system.

    To make gasoline- and/or ethanol-powered spark-ignition engine HDVs attractive for widespread HDV applications, we developed ways to make spark-ignition engines more efficient, so their fuel costs are more palatable to owners of heavy-duty trucks. Our approach provides diesel-like high efficiency and high power in gasoline-powered engines by using various methods to prevent engine knock (unwanted self-ignition that can damage the engine) in spark-ignition gasoline engines. This enables greater levels of turbocharging and use of higher engine compression ratios. These features provide high efficiency, comparable to that provided by diesel engines. Plus, when the engine is powered by ethanol, the required knock resistance is provided by the intrinsic high knock resistance of the fuel itself. 
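    The payoff from a higher compression ratio can be seen in the ideal air-standard Otto-cycle efficiency, a textbook idealization rather than a figure from this work:

    ```latex
    % Ideal Otto-cycle thermal efficiency for compression ratio r and
    % specific-heat ratio gamma (~1.3-1.4 for air/fuel mixtures):
    \eta \;=\; 1 - \frac{1}{r^{\gamma - 1}}
    % e.g., raising r from 10 to 14 at gamma = 1.35 lifts eta from about 0.55 to 0.60.
    ```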

    Q. What are the major challenges to implementing your technology in California?

    A. California has always been the pioneer in air pollutant control, with states such as Washington, Oregon, and New York often following suit. As the most populous state, California has a lot of sway — it’s a trendsetter. What happens in California has an impact on the rest of the United States.

    The main challenge to implementation of our technology is the argument that a better internal combustion engine technology is not needed because battery-powered HDVs — particularly long-haul trucks — can play the required role in reducing NOx and GHG emissions by 2035. We think that substantial market penetration of battery electric vehicles (BEV) in this vehicle sector will take a considerably longer time. In contrast to light-duty vehicles, there has been very little penetration of battery power into the HDV fleet, especially in long-haul trucks, which are the largest users of diesel fuel. One reason for this is that long-haul trucks using battery power face the challenge of reduced cargo capability due to substantial battery weight. Another challenge is the substantially longer charging time for BEVs compared to that of most present HDVs.

    Hydrogen-powered trucks using fuel cells have also been proposed as an alternative to BEV trucks, which might limit interest in adopting improved internal combustion engines. However, hydrogen-powered trucks face the formidable challenges of producing zero GHG hydrogen at affordable cost, as well as the cost of storage and transportation of hydrogen. At present the high purity hydrogen needed for fuel cells is generally very expensive.

    Q. How does your idea compare overall to battery-powered and hydrogen-powered HDVs? And how will you persuade people that it is an attractive pathway to follow?

    A. Our design uses existing propulsion systems and can operate on existing liquid fuels, and for these reasons, in the near term, it will be economically attractive to the operators of long-haul trucks. In fact, it can even be a lower-cost option than diesel power because of the significantly less-expensive exhaust treatment and smaller-size engines for the same power and torque. This economic attractiveness could enable the large-scale market penetration that is needed to have a substantial impact on reducing air pollution. By contrast, we think it could take at least 20 years longer for BEVs or hydrogen-powered vehicles to reach the same level of market penetration.

    Our approach also uses existing corn-based ethanol, which can provide a greater near-term GHG reduction benefit than battery- or hydrogen-powered long-haul trucks. While the GHG reduction from using existing ethanol would initially be in the 20 percent to 40 percent range, the scale at which the market is penetrated in the near-term could be much greater than for BEV or hydrogen-powered vehicle technology. The overall impact in reducing GHGs could be considerably greater.

    Moreover, we see a migration path beyond 2030 where further reductions in GHG emissions from corn ethanol can be possible through carbon capture and sequestration of the carbon dioxide (CO2) that is produced during ethanol production. In this case, overall CO2 reductions could potentially be 80 percent or more. Technologies for producing ethanol (and methanol, another alcohol fuel) from waste at attractive costs are emerging, and can provide fuel with zero or negative GHG emissions. One pathway for providing a negative GHG impact is through finding alternatives to landfilling for waste disposal, as this method leads to potent methane GHG emissions. A negative GHG impact could also be obtained by converting biomass waste into clean fuel, since the biomass waste can be carbon neutral and CO2 from the production of the clean fuel can be captured and sequestered.

    In addition, our flex-fuel engine technology may be used synergistically as a range extender in plug-in hybrid HDVs, an approach that uses limited battery capacity and obviates the cargo-capability reduction and fueling disadvantages of long-haul trucks powered by battery alone.

    With the growing threats from air pollution and global warming, our HDV solution is an increasingly important option for near-term reduction of air pollution and offers a faster start in reducing heavy-duty fleet GHG emissions. It also provides an attractive migration path for longer-term, larger GHG reductions from the HDV sector.

  • MIT-designed project achieves major advance toward fusion energy

    It was a moment three years in the making, based on intensive research and design work: On Sept. 5, for the first time, a large high-temperature superconducting electromagnet was ramped up to a field strength of 20 tesla, the most powerful magnetic field of its kind ever created on Earth. That successful demonstration helps resolve the greatest uncertainty in the quest to build the world’s first fusion power plant that can produce more power than it consumes, according to the project’s leaders at MIT and startup company Commonwealth Fusion Systems (CFS).

    That advance paves the way, they say, for the long-sought creation of practical, inexpensive, carbon-free power plants that could make a major contribution to limiting the effects of global climate change.

    “Fusion in a lot of ways is the ultimate clean energy source,” says Maria Zuber, MIT’s vice president for research and E. A. Griswold Professor of Geophysics. “The amount of power that is available is really game-changing.” The fuel used to create fusion energy comes from water, and “the Earth is full of water — it’s a nearly unlimited resource. We just have to figure out how to utilize it.”

    Developing the new magnet is seen as the greatest technological hurdle to making that happen; its successful operation now opens the door to demonstrating fusion in a lab on Earth, which has been pursued for decades with limited progress. With the magnet technology now successfully demonstrated, the MIT-CFS collaboration is on track to build the world’s first fusion device that can create and confine a plasma that produces more energy than it consumes. That demonstration device, called SPARC, is targeted for completion in 2025.

    “The challenges of making fusion happen are both technical and scientific,” says Dennis Whyte, director of MIT’s Plasma Science and Fusion Center, which is working with CFS to develop SPARC. But once the technology is proven, he says, “it’s an inexhaustible, carbon-free source of energy that you can deploy anywhere and at any time. It’s really a fundamentally new energy source.”

    Whyte, who is the Hitachi America Professor of Engineering, says this week’s demonstration represents a major milestone, addressing the biggest questions remaining about the feasibility of the SPARC design. “It’s really a watershed moment, I believe, in fusion science and technology,” he says.

    The sun in a bottle

    Fusion is the process that powers the sun: the merger of two small atoms to make a larger one, releasing prodigious amounts of energy. But the process requires temperatures far beyond what any solid material could withstand. To capture the sun’s power source here on Earth, what’s needed is a way of capturing and containing something that hot — 100,000,000 degrees or more — by suspending it in a way that prevents it from coming into contact with anything solid.

    That’s done through intense magnetic fields, which form a kind of invisible bottle to contain the hot swirling soup of protons and electrons, called a plasma. Because the particles have an electric charge, they are strongly controlled by the magnetic fields, and the most widely used configuration for containing them is a donut-shaped device called a tokamak. Most of these devices have produced their magnetic fields using conventional electromagnets made of copper, but the latest and largest version under construction in France, called ITER, uses what are known as low-temperature superconductors.

    The major innovation in the MIT-CFS fusion design is the use of high-temperature superconductors, which enable a much stronger magnetic field in a smaller space. This design was made possible by a new kind of superconducting material that became commercially available a few years ago. The idea initially arose as a class project in a nuclear engineering class taught by Whyte. The idea seemed so promising that it continued to be developed over the next few iterations of that class, leading to the ARC power plant design concept in early 2015. SPARC, designed to be about half the size of ARC, is a testbed to prove the concept before construction of the full-size, power-producing plant.

    Until now, the only way to achieve the colossally powerful magnetic fields needed to create a magnetic “bottle” capable of containing plasma heated up to hundreds of millions of degrees was to make them larger and larger. But the new high-temperature superconductor material, made in the form of a flat, ribbon-like tape, makes it possible to achieve a higher magnetic field in a smaller device, equaling the performance that would be achieved in an apparatus 40 times larger in volume using conventional low-temperature superconducting magnets. That leap in power versus size is the key element in ARC’s revolutionary design.
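    One rough back-of-the-envelope way to see why field strength can substitute for size, offered here as a standard scaling argument rather than anything taken from the MIT-CFS papers: at fixed normalized plasma pressure, plasma pressure grows as the square of the field and fusion power density as the square of the pressure, so

    ```latex
    % Rough scaling at fixed normalized pressure (beta): p ~ beta B^2, power density ~ p^2,
    P_{\mathrm{fusion}} \;\propto\; B^{4}\,V,
    % so matching a device of ~40x the volume needs a field only ~40^{1/4} ~ 2.5x stronger.
    ```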

    The use of the new high-temperature superconducting magnets makes it possible to apply decades of experimental knowledge gained from the operation of tokamak experiments, including MIT’s own Alcator series. The new approach, led by Zach Hartwig, the MIT principal investigator and the Robert N. Noyce Career Development Assistant Professor of Nuclear Science and Engineering, uses a well-known design but scales everything down to about half the linear size and still achieves the same operational conditions because of the higher magnetic field.

    A series of scientific papers published last year outlined the physical basis and, by simulation, confirmed the viability of the new fusion device. The papers showed that, if the magnets worked as expected, the whole fusion system should indeed produce net power output, for the first time in decades of fusion research.

    Martin Greenwald, deputy director and senior research scientist at the PSFC, says unlike some other designs for fusion experiments, “the niche that we were filling was to use conventional plasma physics, and conventional tokamak designs and engineering, but bring to it this new magnet technology. So, we weren’t requiring innovation in a half-dozen different areas. We would just innovate on the magnet, and then apply the knowledge base of what’s been learned over the last decades.”

    That combination of scientifically established design principles and game-changing magnetic field strength is what makes it possible to achieve a plant that could be economically viable and developed on a fast track. “It’s a big moment,” says Bob Mumgaard, CEO of CFS. “We now have a platform that is both scientifically very well-advanced, because of the decades of research on these machines, and also commercially very interesting. What it does is allow us to build devices faster, smaller, and at less cost,” he says of the successful magnet demonstration. 

    Proof of the concept

    Bringing that new magnet concept to reality required three years of intensive work on design, establishing supply chains, and working out manufacturing methods for magnets that may eventually need to be produced by the thousands.

    “We built a first-of-a-kind, superconducting magnet. It required a lot of work to create unique manufacturing processes and equipment. As a result, we are now well-prepared to ramp-up for SPARC production,” says Joy Dunn, head of operations at CFS. “We started with a physics model and a CAD design, and worked through lots of development and prototypes to turn a design on paper into this actual physical magnet.” That entailed building manufacturing capabilities and testing facilities, including an iterative process with multiple suppliers of the superconducting tape, to help them reach the ability to produce material that met the needed specifications — and for which CFS is now overwhelmingly the world’s biggest user.

    They worked with two possible magnet designs in parallel, both of which ended up meeting the design requirements, she says. “It really came down to which one would revolutionize the way that we make superconducting magnets, and which one was easier to build.” The design they adopted clearly stood out in that regard, she says.

    In this test, the new magnet was gradually powered up in a series of steps until reaching the goal of a 20 tesla magnetic field — the highest field strength ever for a high-temperature superconducting fusion magnet. The magnet is composed of 16 plates stacked together, each one of which by itself would be the most powerful high-temperature superconducting magnet in the world.

    “Three years ago we announced a plan,” says Mumgaard, “to build a 20-tesla magnet, which is what we will need for future fusion machines.” That goal has now been achieved, right on schedule, even with the pandemic, he says.

    Citing the series of physics papers published last year, Brandon Sorbom, the chief science officer at CFS, says “basically the papers conclude that if we build the magnet, all of the physics will work in SPARC. So, this demonstration answers the question: Can they build the magnet? It’s a very exciting time! It’s a huge milestone.”

    The next step will be building SPARC, a smaller-scale version of the planned ARC power plant. The successful operation of SPARC will demonstrate that a full-scale commercial fusion power plant is practical, clearing the way for rapid design and construction of that pioneering device to proceed at full speed.

    Zuber says that “I now am genuinely optimistic that SPARC can achieve net positive energy, based on the demonstrated performance of the magnets. The next step is to scale up, to build an actual power plant. There are still many challenges ahead, not the least of which is developing a design that allows for reliable, sustained operation. And realizing that the goal here is commercialization, another major challenge will be economic. How do you design these power plants so it will be cost effective to build and deploy them?”

    Someday in a hoped-for future, when there may be thousands of fusion plants powering clean electric grids around the world, Zuber says, “I think we’re going to look back and think about how we got there, and I think the demonstration of the magnet technology, for me, is the time when I believed that, wow, we can really do this.”

    The successful creation of a power-producing fusion device would be a tremendous scientific achievement, Zuber notes. But that’s not the main point. “None of us are trying to win trophies at this point. We’re trying to keep the planet livable.”

  • Making the case for hydrogen in a zero-carbon economy

    As the United States races to achieve its goal of zero-carbon electricity generation by 2035, energy providers are swiftly ramping up renewable resources such as solar and wind. But because these technologies churn out electrons only when the sun shines and the wind blows, they need backup from other energy sources, especially during seasons of high electric demand. Currently, plants burning fossil fuels, primarily natural gas, fill in the gaps.

    “As we move to more and more renewable penetration, this intermittency will make a greater impact on the electric power system,” says Emre Gençer, a research scientist at the MIT Energy Initiative (MITEI). That’s because grid operators will increasingly resort to fossil-fuel-based “peaker” plants that compensate for the intermittency of the variable renewable energy (VRE) sources of sun and wind. “If we’re to achieve zero-carbon electricity, we must replace all greenhouse gas-emitting sources,” Gençer says.

    Low- and zero-carbon alternatives to greenhouse-gas emitting peaker plants are in development, such as arrays of lithium-ion batteries and hydrogen power generation. But each of these evolving technologies comes with its own set of advantages and constraints, and it has proven difficult to frame the debate about these options in a way that’s useful for policymakers, investors, and utilities engaged in the clean energy transition.

    Now, Gençer and Drake D. Hernandez SM ’21 have come up with a model that makes it possible to pin down the pros and cons of these peaker-plant alternatives with greater precision. Their hybrid technological and economic analysis, based on a detailed inventory of California’s power system, was published online last month in Applied Energy. While their work focuses on the most cost-effective solutions for replacing peaker power plants, it also contains insights intended to contribute to the larger conversation about transforming energy systems.

    “Our study’s essential takeaway is that hydrogen-fired power generation can be the more economical option when compared to lithium-ion batteries — even today, when the costs of hydrogen production, transmission, and storage are very high,” says Hernandez, who worked on the study while a graduate research assistant for MITEI. Adds Gençer, “If there is a place for hydrogen in the cases we analyzed, that suggests there is a promising role for hydrogen to play in the energy transition.”

    Adding up the costs

    California serves as a stellar paradigm for a swiftly shifting power system. The state draws more than 20 percent of its electricity from solar and approximately 7 percent from wind, with more VRE coming online rapidly. This means its peaker plants already play a pivotal role, coming online each evening when the sun goes down or when events such as heat waves drive up electricity use for days at a time.

    “We looked at all the peaker plants in California,” recounts Gençer. “We wanted to know the cost of electricity if we replaced them with hydrogen-fired turbines or with lithium-ion batteries.” The researchers used a core metric called the levelized cost of electricity (LCOE) as a way of comparing the costs of different technologies to each other. LCOE measures the average total cost of building and operating a particular energy-generating asset per unit of total electricity generated over the hypothetical lifetime of that asset.
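    In its usual form, LCOE divides discounted lifetime costs by discounted lifetime generation; a minimal sketch with illustrative numbers (not the study’s inputs):

    ```python
    # Minimal LCOE sketch: discounted lifetime costs over discounted lifetime
    # generation. All numbers are illustrative, not the study's inputs.

    def lcoe(capex, annual_opex, annual_fuel, annual_mwh, years=20, discount=0.07):
        """Levelized cost of electricity in dollars per MWh."""
        costs = capex
        energy = 0.0
        for t in range(1, years + 1):
            d = (1 + discount) ** t
            costs += (annual_opex + annual_fuel) / d
            energy += annual_mwh / d
        return costs / energy

    # A hypothetical 100 MW peaker running 15 percent of the hours in a year:
    peaker_mwh = 100 * 8760 * 0.15
    print(f"{lcoe(80e6, 2e6, 6e6, peaker_mwh):.0f} $/MWh")
    ```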

    Selecting 2019 as their base study year, the team looked at the costs of running natural gas-fired peaker plants, which they defined as plants operating 15 percent of the year in response to gaps in intermittent renewable electricity. In addition, they determined the amount of carbon dioxide released by these plants and the expense of abating these emissions. Much of this information was publicly available.

    Coming up with prices for replacing peaker plants with massive arrays of lithium-ion batteries was also relatively straightforward: “There are no technical limitations to lithium-ion, so you can build as many as you want; but they are super expensive in terms of their footprint for energy storage and the mining required to manufacture them,” says Gençer.

    But then came the hard part: nailing down the costs of hydrogen-fired electricity generation. “The most difficult thing is finding cost assumptions for new technologies,” says Hernandez. “You can’t do this through a literature review, so we had many conversations with equipment manufacturers and plant operators.”

    The team considered two different forms of hydrogen fuel to replace natural gas, one produced through electrolyzer facilities that convert water and electricity into hydrogen, and another that reforms natural gas, yielding hydrogen and carbon waste that can be captured to reduce emissions. They also ran the numbers on retrofitting natural gas plants to burn hydrogen as opposed to building entirely new facilities. Their model includes identification of likely locations throughout the state and expenses involved in constructing these facilities.

    The researchers spent months compiling a giant dataset before setting out on the task of analysis. The results from their modeling were clear: “Hydrogen can be a more cost-effective alternative to lithium-ion batteries for peaking operations on a power grid,” says Hernandez. In addition, notes Gençer, “While certain technologies worked better in particular locations, we found that on average, reforming hydrogen rather than electrolytic hydrogen turned out to be the cheapest option for replacing peaker plants.”

    A tool for energy investors

    When he began this project, Gençer admits he “wasn’t hopeful” about hydrogen replacing natural gas in peaker plants. “It was kind of shocking to see in our different scenarios that there was a place for hydrogen.” That’s because the overall price tag for converting a fossil-fuel based plant to one based on hydrogen is very high, and such conversions likely won’t take place until more sectors of the economy embrace hydrogen, whether as a fuel for transportation or for varied manufacturing and industrial purposes.

    A nascent hydrogen production infrastructure does exist, mainly in the production of ammonia for fertilizer. But enormous investments will be necessary to expand this framework to meet grid-scale needs, driven by purposeful incentives. “With any of the climate solutions proposed today, we will need a carbon tax or carbon pricing; otherwise nobody will switch to new technologies,” says Gençer.

    The researchers believe studies like theirs could help key energy stakeholders make better-informed decisions. To that end, they have integrated their analysis into SESAME, a life cycle and techno-economic assessment tool for a range of energy systems that was developed by MIT researchers. Users can leverage this sophisticated modeling environment to compare costs of energy storage and emissions from different technologies, for instance, or to determine whether it is cost-efficient to replace a natural gas-powered plant with one powered by hydrogen.

    “As utilities, industry, and investors look to decarbonize and achieve zero-emissions targets, they have to weigh the costs of investing in low-carbon technologies today against the potential impacts of climate change moving forward,” says Hernandez, who is currently a senior associate in the energy practice at Charles River Associates. Hydrogen, he believes, will become increasingly cost-competitive as its production costs decline and markets expand.

    A study group member of MITEI’s soon-to-be published Future of Storage study, Gençer knows that hydrogen alone will not usher in a zero-carbon future. But, he says, “Our research shows we need to seriously consider hydrogen in the energy transition, start thinking about key areas where hydrogen should be used, and start making the massive investments necessary.”

    Funding for this research was provided by MITEI’s Low-Carbon Energy Centers and Future of Storage study.

  • The boiling crisis — and how to avoid it

    It’s rare for a pre-teen to become enamored with thermodynamics, but those consumed by such a passion may consider themselves lucky to end up at a place like MIT. Madhumitha Ravichandran certainly does. A PhD student in Nuclear Science and Engineering (NSE), Ravichandran first encountered the laws of thermodynamics as a middle school student in Chennai, India. “They made complete sense to me,” she says. “While looking at the refrigerator at home, I wondered if I might someday build energy systems that utilized these same principles. That’s how it started, and I’ve sustained that interest ever since.”

    She’s now drawing on her knowledge of thermodynamics in research carried out in the laboratory of NSE Assistant Professor Matteo Bucci, her doctoral supervisor. Ravichandran and Bucci are gaining key insights into the “boiling crisis” — a problem that has long plagued the energy industry.

    Ravichandran was well prepared for this work by the time she arrived at MIT in 2017. As an undergraduate at India’s Sastra University, she pursued research on “two-phase flows,” examining the transitions water undergoes between its liquid and gaseous forms. She continued to study droplet evaporation and related phenomena during an internship in early 2017 in the Bucci Lab. That was an eye-opening experience, Ravichandran explains. “Back at my university in India, only 2 to 3 percent of the mechanical engineering students were women, and there were no women on the faculty. It was the first time I had faced social inequities because of my gender, and I went through some struggles, to say the least.”

    MIT offered a welcome contrast. “The amount of freedom I was given made me extremely happy,” she says. “I was always encouraged to explore my ideas, and I always felt included.” She was doubly happy because, midway through the internship, she learned that she’d been accepted to MIT’s graduate program.

    As a PhD student, her research has followed a similar path. She continues to study boiling and heat transfer, but Bucci gave this work some added urgency. They’re now investigating the aforementioned boiling crisis, which affects nuclear reactors and other kinds of power plants that rely on steam generation to drive turbines. In a light water nuclear reactor, water is heated by fuel rods in which nuclear fission has occurred. Heat removal is most efficient when the water circulating past the rods boils. However, if too many bubbles form on the surface, enveloping the fuel rods in a layer of vapor, heat transfer is greatly reduced. That’s not only diminishes power generation, it can also be dangerous because the fuel rods must be continuously cooled to avoid a dreaded meltdown accident.

    Nuclear plants operate at low power ratings to provide an ample safety margin and thereby prevent such a scenario from occurring. Ravichandran believes these standards may be overly cautious, owing to the fact that people aren’t yet sure of the conditions that bring about the boiling crisis. This hurts the economic viability of nuclear power, she says, at a time when we desperately need carbon-free power sources. But Ravichandran and other researchers in the Bucci Lab are starting to fill some major gaps in our understanding.

    They initially ran experiments to determine how quickly bubbles form when water hits a hot surface, how big the bubbles get, how long they grow, and how the surface temperature changes. “A typical experiment lasted two minutes, but it took more than three weeks to pick out every bubble that formed and track its growth and evolution,” Ravichandran explains.

    To streamline this process, she and Bucci are implementing a machine learning approach, based on neural network technology. Neural networks are good at recognizing patterns, including those associated with bubble nucleation. “These networks are data hungry,” Ravichandran says. “The more data they’re fed, the better they perform.” The networks were trained on experimental results pertaining to bubble formation on different surfaces; the networks were then tested on surfaces for which the NSE researchers had no data and didn’t know what to expect.
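    As a toy illustration of the idea only, since the lab’s actual features, architectures, and data are not described here and are certainly richer, a small network trained on synthetic bubble statistics might look like this:

    ```python
    import numpy as np

    # Toy stand-in for the approach described above: a tiny neural network
    # trained on bubble statistics to flag conditions approaching a boiling
    # crisis. Features, labels, and architecture are all synthetic.
    rng = np.random.default_rng(1)

    # Features: [nucleation-site density, mean bubble radius, bubble frequency]
    X = rng.uniform(0.0, 1.0, size=(2000, 3))
    # Synthetic labeling rule: "crisis" when a vapor-coverage proxy is high.
    y = (X[:, 0] * X[:, 1] * X[:, 2] > 0.15).astype(float).reshape(-1, 1)

    # One hidden layer, trained by plain gradient descent on a logistic loss.
    W1 = rng.normal(0, 0.5, (3, 16)); b1 = np.zeros(16)
    W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
    lr = 0.5
    for _ in range(2000):
        h = np.tanh(X @ W1 + b1)                   # hidden activations
        p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # predicted crisis probability
        grad_out = (p - y) / len(X)                # d(loss)/d(logits)
        grad_h = (grad_out @ W2.T) * (1 - h**2)    # backprop through tanh
        W2 -= lr * h.T @ grad_out; b2 -= lr * grad_out.sum(0)
        W1 -= lr * X.T @ grad_h;   b1 -= lr * grad_h.sum(0)

    print(f"training accuracy: {((p > 0.5) == y).mean():.2f}")
    ```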

    After gaining experimental validation of the output from the machine learning models, the team is now trying to get these models to make reliable predictions as to when the boiling crisis itself will occur. The ultimate goal is to have a fully autonomous system that can not only predict the boiling crisis, but also show why it happens and automatically shut down experiments before things go too far and lab equipment starts melting.

    In the meantime, Ravichandran and Bucci have made some important theoretical advances, which they report on in a recently published paper for Applied Physics Letters. There had been a debate in the nuclear engineering community as to whether the boiling crisis is caused by bubbles covering the fuel rod surface or due to bubbles growing on top of each other, extending outward from the surface. Ravichandran and Bucci determined that it is a surface-level phenomenon. In addition, they’ve identified the three main factors that trigger the boiling crisis. First, there’s the number of bubbles that form over a given surface area and, second, the average bubble size. The third factor is the product of the bubble frequency (the number of bubbles forming within a second at a given site) and the time it takes for a bubble to reach its full size.
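    Schematically, those three quantities could be computed from tracked bubble data along these lines (hypothetical field names and numbers; the paper defines the quantities precisely):

    ```python
    # Schematic calculation of the three boiling-crisis factors named above,
    # from hypothetical per-site bubble tracking data (illustrative only).

    # Each record: (site_x_mm, site_y_mm, max_radius_mm, growth_time_s, bubbles_per_s)
    TRACKED_BUBBLES = [
        (0.2, 0.1, 0.35, 0.004, 90.0),
        (0.8, 0.4, 0.28, 0.003, 120.0),
        (1.1, 0.9, 0.40, 0.005, 75.0),
    ]
    HEATER_AREA_MM2 = 2.0 * 2.0

    # Factor 1: bubbles forming per unit surface area.
    site_density = len(TRACKED_BUBBLES) / HEATER_AREA_MM2
    # Factor 2: average bubble size.
    mean_radius = sum(b[2] for b in TRACKED_BUBBLES) / len(TRACKED_BUBBLES)
    # Factor 3: average of (bubble frequency x time to reach full size).
    freq_times_growth = sum(b[4] * b[3] for b in TRACKED_BUBBLES) / len(TRACKED_BUBBLES)

    print(site_density, mean_radius, freq_times_growth)
    ```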

    Ravichandran is happy to have shed some new light on this issue but acknowledges that there’s still much work to be done. Although her research agenda is ambitious and nearly all consuming, she never forgets where she came from and the sense of isolation she felt while studying engineering as an undergraduate. She has, on her own initiative, been mentoring female engineering students in India, providing both research guidance and career advice.

    “I sometimes feel there was a reason I went through those early hardships,” Ravichandran says. “That’s what made me decide that I want to be an educator.” She’s also grateful for the opportunities that have opened up for her since coming to MIT. A recipient of a 2021-22 MathWorks Engineering Fellowship, she says, “now it feels like the only limits on me are those that I’ve placed on myself.”

  • A peculiar state of matter in layers of semiconductors

    Scientists around the world are developing new hardware for quantum computers, a new type of device that could accelerate drug design, financial modeling, and weather prediction. These computers rely on qubits, bits of matter that can represent some combination of 1 and 0 simultaneously. The problem is that qubits are fickle, degrading into regular bits when interactions with surrounding matter interfere. But new research at MIT suggests a way to protect their states, using a phenomenon called many-body localization (MBL).

    MBL is a peculiar phase of matter, proposed decades ago, that is unlike solid or liquid. Typically, matter comes to thermal equilibrium with its environment. That’s why soup cools and ice cubes melt. But in MBL, an object consisting of many strongly interacting bodies, such as atoms, never reaches such equilibrium. Heat, like sound, consists of collective atomic vibrations and can travel in waves; an object always has such heat waves internally. But when there’s enough disorder and enough interaction in the way its atoms are arranged, the waves can become trapped, thus preventing the object from reaching equilibrium.

    MBL had been demonstrated in “optical lattices,” arrangements of atoms at very cold temperatures held in place using lasers. But such setups are impractical. MBL had also arguably been shown in solid systems, but only with very slow temporal dynamics, in which the phase’s existence is hard to prove because equilibrium might be reached if researchers could wait long enough. The MIT research found signatures of MBL in a “solid-state” system — one made of semiconductors — that would otherwise have reached equilibrium in the time it was observed.

    “It could open a new chapter in the study of quantum dynamics,” says Rahul Nandkishore, a physicist at the University of Colorado at Boulder, who was not involved in the work.

    Mingda Li, the Norman C. Rasmussen Assistant Professor of Nuclear Science and Engineering at MIT, led the new study, published in a recent issue of Nano Letters. The researchers built a system containing alternating semiconductor layers, creating a microscopic lasagna — aluminum arsenide, followed by gallium arsenide, and so on, for 600 layers, each 3 nanometers (millionths of a millimeter) thick. Between the layers they dispersed “nanodots,” 2-nanometer particles of erbium arsenide, to create disorder. The lasagna, or “superlattice,” came in three recipes: one with no nanodots, one in which nanodots covered 8 percent of each layer’s area, and one in which they covered 25 percent.

    According to Li, the team used layers of material, instead of a bulk material, to simplify the system so dissipation of heat across the planes was essentially one-dimensional. And they used nanodots, instead of mere chemical impurities, to crank up the disorder.

    To measure whether these disordered systems remained in equilibrium, the researchers probed them with X-rays. Using the Advanced Photon Source at Argonne National Lab, they shot beams of radiation at an energy of more than 20,000 electron volts and resolved the energy difference between the incoming X-rays and those reflected off the sample’s surface to within less than one one-thousandth of an electron volt. To avoid penetrating the superlattice and hitting the underlying substrate, they aimed the beam at an angle of just half a degree from parallel.

    Just as light can be measured as waves or particles, so too can heat. The quantized unit of collective atomic vibration that carries heat is called a phonon. X-rays interact with these phonons, and by measuring how X-rays reflect off the sample, the experimenters can determine whether it is in equilibrium.
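    One textbook way a scattering measurement can reveal whether phonons are thermalized, given here as general background rather than a description of this paper’s specific analysis, is the detailed-balance relation between energy-gain and energy-loss scattering:

    ```latex
    % In thermal equilibrium at temperature T, anti-Stokes (energy-gain) and
    % Stokes (energy-loss) intensities for a phonon of frequency omega satisfy
    \frac{I_{\mathrm{anti\text{-}Stokes}}}{I_{\mathrm{Stokes}}} \;\approx\; e^{-\hbar\omega / k_B T}
    % A measured ratio that departs from this indicates phonons out of equilibrium.
    ```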

    The researchers found that when the superlattice was cold — 30 kelvin, about -400 degrees Fahrenheit — and it contained nanodots, its phonons at certain frequencies remained out of equilibrium.

    More work remains to prove conclusively that MBL has been achieved, but “this new quantum phase can open up a whole new platform to explore quantum phenomena,” Li says, “with many potential applications, from thermal storage to quantum computing.”

    To create qubits, some quantum computers employ specks of matter called quantum dots. Li says quantum dots similar to his team’s nanodots could act as qubits. Magnets could read or write their quantum states, while the many-body localization would keep them insulated from heat and other environmental factors.

    In terms of thermal storage, such a superlattice might switch in and out of an MBL phase by magnetically controlling the nanodots. It could insulate computer parts from heat at one moment, then allow parts to disperse heat when it won’t cause damage. Or it could allow heat to build up and be harnessed later for generating electricity.

    Conveniently, superlattices with nanodots can be constructed using traditional techniques for fabricating semiconductors, alongside other elements of computer chips. According to Li, “It’s a much larger design space than with chemical doping, and there are numerous applications.”

    “I am excited to see that signatures of MBL can now also be found in real material systems,” says Immanuel Bloch, scientific director at the Max-Planck-Institute of Quantum Optics, of the new work. “I believe this will help us to better understand the conditions under which MBL can be observed in different quantum many-body systems and how possible coupling to the environment affects the stability of the system. These are fundamental and important questions and the MIT experiment is an important step helping us to answer them.”

    Funding was provided by the U.S. Department of Energy’s Basic Energy Sciences program’s Neutron Scattering Program.

  • Energy storage from a chemistry perspective

    The transition toward a more sustainable, environmentally sound electrical grid has driven an upsurge in renewables like solar and wind. But something as simple as cloud cover can cause grid instability, and wind power is inherently unpredictable. This intermittent nature of renewables has invigorated the competitive landscape for energy storage companies looking to enhance power system flexibility while enabling the integration of renewables.

    “Impact is what drives PolyJoule more than anything else,” says CEO Eli Paster. “We see impact from a renewable integration standpoint, from a curtailment standpoint, and also from the standpoint of transitioning from a centralized to a decentralized model of energy-power delivery.”

    PolyJoule is a Billerica, Massachusetts-based startup that’s looking to reinvent energy storage from a chemistry perspective. Co-founders Ian Hunter of MIT’s Department of Mechanical Engineering and Tim Swager of the Department of Chemistry are longstanding MIT professors considered luminaries in their respective fields. Meanwhile, the core team is a small but highly skilled collection of chemists, manufacturing specialists, supply chain optimizers, and entrepreneurs, many of whom have called MIT home at one point or another.

    “The ideas that we work on in the lab, you’ll see turned into products three to four years from now, and they will still be innovative and well ahead of the curve when they get to market,” Paster says. “But the concepts come from the foresight of thinking five to 10 years in advance. That’s what we have in our back pocket, thanks to great minds like Ian and Tim.”

    PolyJoule takes a systems-level approach married to high-throughput, analytical electrochemistry that has allowed the company to pinpoint a chemical cell design based on 10,000 trials. The result is a battery that is low-cost, safe, and has a long lifetime. It’s capable of responding to base loads and peak loads in microseconds, allowing the same battery to participate in multiple power markets and deployment use cases.

    In the energy storage sphere, interesting technologies abound, but workable solutions are few and far between. But Paster says PolyJoule has managed to bridge the gap between the lab and the real world by taking industry concerns into account from the beginning. “We’ve taken a slightly contrarian view to all of the other energy storage companies that have come before us that have said, ‘If we build it, they will come.’ Instead, we’ve gone directly to the customer and asked, ‘If you could have a better battery storage platform, what would it look like?’”

    With commercial input feeding into the thought processes behind their technological and commercial deployment, PolyJoule says they’ve designed a battery that is less expensive to make, less expensive to operate, safer, and easier to deploy.

    Traditionally, lithium-ion batteries have been the go-to energy storage solution. But lithium has its drawbacks, including cost, safety issues, and detrimental effects on the environment. But PolyJoule isn’t interested in lithium — or metals of any kind, in fact. “We start with the periodic table of organic elements,” says Paster, “and from there, we derive what works at economies of scale, what is easy to converge and convert chemically.”

    Having an inherently safer chemistry allows PolyJoule to save on system integration costs, among other things. PolyJoule batteries don’t contain flammable solvents, which means no added expenses related to fire mitigation. Safer chemistry also means ease of storage, and PolyJoule batteries are currently undergoing global safety certification (UL approval) to be allowed indoors and on airplanes. Finally, with high power built into the chemistry, PolyJoule’s cells can be charged and discharged to extremes, without the need for heating or cooling systems.

    “From raw material to product delivery, we examine each step in the value chain with an eye towards reducing costs,” says Paster. It all starts with designing the chemistry around earth-abundant elements, which allows the small startup to compete with larger suppliers, even at smaller scales. Consider the fact that PolyJoule’s differentiating material cost is less than $1 per kilogram, whereas lithium carbonate sells for $20 per kilogram.

    On the manufacturing side, Paster explains that PolyJoule cuts costs by making their cells in old paper mills and warehouses, employing off-the-shelf equipment previously used for tissue paper or newspaper printing. “We use equipment that has been around for decades because we don’t want to create a cutting-edge technology that requires cutting-edge manufacturing,” he says. “We want to create a cutting-edge technology that can be deployed in industrialized nations and in other nations that can benefit the most from energy storage.”

    PolyJoule’s first customer is an industrial distributed energy consumer with baseline energy consumption that increases by a factor of 10 when the heavy machinery kicks on twice a day. In the early morning and late afternoon, it consumes about 50 kilowatts for 20 minutes to an hour, compared to a baseline rate of 5 kilowatts. It’s an application model that is translatable to a variety of industries. Think wastewater treatment, food processing, and server farms — anything with a fluctuation in power consumption over a 24-hour period.
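    For rough scale, the storage needed to carry that peak back down to baseline for a single event follows directly from those figures (a back-of-the-envelope sketch):

    ```python
    # Back-of-the-envelope peak-shaving energy from the figures above.
    baseline_kw = 5.0
    peak_kw = 50.0

    for duration_h in (20 / 60, 1.0):  # events last 20 minutes to an hour
        energy_kwh = (peak_kw - baseline_kw) * duration_h
        print(f"{duration_h * 60:.0f}-minute event: about {energy_kwh:.0f} kWh above baseline")
    ```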

    By the end of the year, PolyJoule will have delivered its first 10 kilowatt-hour system, exiting stealth mode and adding commercial viability to demonstrated technological superiority. “What we’re seeing now is massive amounts of energy storage being added to renewables and grid-edge applications,” says Paster. “We anticipated that by 12 to 18 months, and now we’re ramping up to catch up with some of the bigger players.”