More stories

  • Solving a longstanding conundrum in heat transfer

    It is a problem that has bedeviled scientists for a century. But, buoyed by a $625,000 Distinguished Early Career Award from the U.S. Department of Energy (DOE), Matteo Bucci, an associate professor in the Department of Nuclear Science and Engineering (NSE), hopes to be close to an answer.

    Tackling the boiling crisis

    Whether you’re heating a pot of water for pasta or designing a nuclear reactor, one phenomenon — boiling — is vital to the efficiency of both processes.

    “Boiling is a very effective heat transfer mechanism; it’s the way to remove large amounts of heat from the surface, which is why it is used in many high-power density applications,” Bucci says. An example use case: nuclear reactors.

    To the layperson, boiling appears simple: bubbles form and burst, removing heat. But what if so many bubbles form and coalesce that they create a blanket of vapor that blocks further heat transfer? This well-known problem is called the boiling crisis. It can lead to runaway heating and, in a nuclear reactor, failure of the fuel rods. So “understanding and determining under which conditions the boiling crisis is likely to happen is critical to designing more efficient and cost-competitive nuclear reactors,” Bucci says.
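    For a rough quantitative feel (a textbook aside, not drawn from Bucci’s work), the classical Zuber correlation estimates the critical heat flux at which pool boiling on a flat surface tips into the boiling crisis. A minimal sketch for saturated water at atmospheric pressure:

    ```python
    # A minimal sketch, assuming the textbook Zuber (1959) correlation for the
    # critical heat flux (CHF) in pool boiling on a flat horizontal surface.
    def zuber_chf(h_fg: float, rho_v: float, rho_l: float, sigma: float,
                  g: float = 9.81) -> float:
        """Critical heat flux in W/m^2 from SI fluid properties:
        h_fg: latent heat (J/kg); rho_v, rho_l: vapor and liquid density
        (kg/m^3); sigma: surface tension (N/m)."""
        return 0.131 * h_fg * rho_v**0.5 * (sigma * g * (rho_l - rho_v)) ** 0.25

    # Saturated water at 1 atm (standard property-table values)
    q_chf = zuber_chf(h_fg=2.257e6, rho_v=0.60, rho_l=958.0, sigma=0.0589)
    print(f"CHF ~ {q_chf / 1e6:.2f} MW/m^2")  # prints roughly 1.1 MW/m^2
    ```

    Beyond that flux, a vapor film forms and the surface temperature runs away, which is precisely the regime Bucci’s diagnostics aim to capture.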

    Early work on the boiling crisis dates back nearly a century, to 1926. And while much work has been done, “it is clear that we haven’t found an answer,” Bucci says. The boiling crisis remains a challenge because while models abound, measurements of the related phenomena to prove or disprove those models have been difficult to make. “[Boiling] is a process that happens on a very, very small length scale and over very, very short times,” Bucci says. “We are not able to observe it at the level of detail necessary to understand what really happens and validate hypotheses.”

    But over the past few years, Bucci and his team have been developing diagnostics that can measure boiling-related phenomena and thereby provide much-needed answers to a classic problem. The diagnostics are anchored in infrared thermometry and a technique using visible light. “By combining these two techniques I think we’re going to be ready to answer standing questions related to heat transfer; we can make our way out of the rabbit hole,” Bucci says. The DOE grant will aid in this and Bucci’s other research efforts.

    An idyllic Italian childhood

    Tackling difficult problems is not new territory for Bucci, who grew up in the small town of Città di Castello near Florence, Italy. His mother was an elementary school teacher, and his father ran a machine shop, which helped develop Bucci’s scientific bent. “I liked LEGOs a lot when I was a kid. It was a passion,” he adds.

    Despite Italy going through a severe pullback from nuclear engineering during his formative years, the subject fascinated Bucci. Job opportunities in the field were uncertain, but Bucci decided to dig in. “If I have to do something for the rest of my life, it might as well be something I like,” he jokes. Bucci attended the University of Pisa for undergraduate and graduate studies in nuclear engineering.

    His interest in heat transfer mechanisms took root during his doctoral studies, which he pursued at the French Alternative Energies and Atomic Energy Commission (CEA) in Paris. It was there that a colleague suggested work on the boiling crisis. Bucci set his sights on NSE at MIT and reached out to Professor Jacopo Buongiorno to inquire about research at the institution. He had to raise funds at CEA to conduct research at MIT, and he arrived with a round-trip ticket just a couple of days before the Boston Marathon bombing in 2013. But Bucci has stayed ever since, becoming a research scientist and then an associate professor at NSE.

    Bucci admits he struggled to adapt to the environment when he first arrived at MIT, but work and friendships with colleagues — he counts NSE’s Guanyu Su and Reza Azizian as among his best friends — helped conquer early worries.

    The integration of artificial intelligence

    In addition to diagnostics for boiling, Bucci and his team are working on ways of integrating artificial intelligence and experimental research. He is convinced that “the integration of advanced diagnostics, machine learning, and advanced modeling tools will blossom in a decade.”

    Bucci’s team is developing an autonomous laboratory for boiling heat transfer experiments. Running on machine learning, the setup decides which experiments to run based on a learning objective the team assigns. “We formulate a question and the machine will answer by optimizing the kinds of experiments that are necessary to answer those questions,” Bucci says. “I honestly think this is the next frontier for boiling.”

    “It’s when you climb a tree and you reach the top, that you realize that the horizon is much more vast and also more beautiful,” Bucci says of his zeal to pursue more research in the field.

    Even as he seeks new heights, Bucci has not forgotten his origins. Commemorating Italy’s hosting of the World Cup in 1990, a series of posters showcasing a soccer field fitted into the Roman Colosseum occupies pride of place in his home and office. Created by Alberto Burri, the posters are of sentimental value: The (now deceased) Italian artist also hailed from Bucci’s hometown of Città di Castello.

  • Making hydrogen power a reality

    For decades, government and industry have looked to hydrogen as a potentially game-changing tool in the quest for clean energy. As far back as the early days of the Clinton administration, energy sector observers and public policy experts have extolled the virtues of hydrogen — to the point that some people have joked that hydrogen is the energy of the future, “and always will be.”

    Even as wind and solar power have become commonplace in recent years, hydrogen has been held back by high costs and other challenges. But the fuel may finally be poised to have its moment. At the MIT Energy Initiative Spring Symposium — entitled “Hydrogen’s role in a decarbonized energy system” — experts discussed hydrogen production routes, hydrogen consumption markets, the path to a robust hydrogen infrastructure, and policy changes needed to achieve a “hydrogen future.”

    During one panel, “Options for producing low-carbon hydrogen at scale,” four experts laid out existing and planned efforts to leverage hydrogen for decarbonization. 

    “The race is on”

    Huyen N. Dinh, a senior scientist and group manager at the National Renewable Energy Laboratory (NREL), is the director of HydroGEN, a consortium of several U.S. Department of Energy (DOE) national laboratories that accelerates research and development of innovative water-splitting materials and technologies for clean, sustainable, and low-cost hydrogen production.

    For the past 14 years, Dinh has worked on fuel cells and hydrogen production for NREL. “We think that the 2020s is the decade of hydrogen,” she said. Dinh believes that the energy carrier is poised to come into its own over the next few years, pointing to several domestic and international activities surrounding the fuel and citing a Hydrogen Council report that projected the future impacts of hydrogen — including 30 million jobs and $2.5 trillion in global revenue by 2050.

    “Now is the time for hydrogen, and the global race is on,” she said.

    Dinh also explained the parameters of the Hydrogen Shot — the first of the DOE’s “Energy Earthshots” aimed at accelerating breakthroughs for affordable and reliable clean energy solutions. Hydrogen fuel currently costs around $5 per kilogram to produce, and the Hydrogen Shot’s stated goal is to bring that down by 80 percent to $1 per kilogram within a decade.

    The Hydrogen Shot will be facilitated by $9.5 billion in funding from last year’s bipartisan infrastructure law for at least four clean hydrogen hubs located in different parts of the United States, as well as for extensive research and development, manufacturing, and recycling. Still, Dinh noted that it took more than 40 years for solar and wind power to become cost-competitive, and industry, government, national lab, and academic leaders are now hoping to achieve similar reductions in hydrogen fuel costs over a much shorter time frame. In the near term, she said, stakeholders will need to improve the efficiency, durability, and affordability of hydrogen production through electrolysis (using electricity to split water) powered by today’s renewable and nuclear sources. Over the long term, the focus may shift to splitting water more directly using heat or solar energy, she said.

    “The time frame is short, the competition is intense, and a coordinated effort is critical for domestic competitiveness,” Dinh said.

    Hydrogen across continents

    Wambui Mutoru, principal engineer for international commercial development, exploration and production, at the Norwegian global energy company Equinor, said that hydrogen is an important component of the company’s ambition to be carbon-neutral by 2050. The company, in collaboration with partners, has several hydrogen projects in the works, and Mutoru laid out the company’s Hydrogen to Humber project in northern England. Currently, the Humber region emits more carbon dioxide than any other industrial cluster in the United Kingdom — 50 percent more, in fact, than the next-largest emitter.

    “The ambition here is for us to deploy the world’s first at-scale hydrogen value chain to decarbonize the Humber industrial cluster,” Mutoru said.

    The project consists of three components: a clean hydrogen production facility, an onshore hydrogen and carbon dioxide transmission network, and offshore carbon dioxide transportation and storage operations. Mutoru highlighted the importance of carbon capture and storage in hydrogen production. Equinor, she said, has captured and sequestered carbon offshore for more than 25 years, storing more than 25 million tons of carbon dioxide during that time.

    Mutoru also touched on Equinor’s efforts to build a decarbonized energy hub in the Appalachian region of the United States, covering territory in Ohio, West Virginia, and Pennsylvania. By 2040, she said, the company’s ambition is to produce about 1.5 million tons of clean hydrogen per year in the region — roughly equivalent to 6.8 gigawatts of electricity — while also storing 30 million tons of carbon dioxide.

    Mutoru acknowledged that the biggest challenge facing potential hydrogen producers is the current lack of viable business models. “Resolving that challenge requires cross-industry collaboration, and supportive policy frameworks so that the market for hydrogen can be built and sustained over the long term,” she said.

    Confronting barriers

    Gretchen Baier, executive external strategy and communications leader for Dow, noted that the company already produces hydrogen in multiple ways. For one, Dow operates the world’s largest ethane cracker, in Texas. An ethane cracker heats ethane to break apart molecular bonds and form ethylene, with hydrogen as one of the byproducts of the process. Baier also showed a slide of the 1891 patent for the electrolysis of brine, which likewise produces hydrogen. Dow still uses this process, but it does not have an effective way of utilizing the resulting hydrogen as fuel for its own operations.

    “Just take a moment to think about that,” Baier said. “We’ve been talking about hydrogen production and the cost of it, and this is basically free hydrogen. And it’s still too much of a barrier to somewhat recycle that and use it for ourselves. The environment is clearly changing, and we do have plans for that, but I think that kind of sets some of the challenges that face industry here.”

    However, Baier said, hydrogen is expected to play a significant role in Dow’s future as the company attempts to decarbonize by 2050. The company, she said, plans to optimize hydrogen allocation and production, retrofit turbines for hydrogen fueling, and purchase clean hydrogen. By 2040, Dow expects more than 60 percent of its sites to be hydrogen-ready.

    Baier noted that hydrogen fuel is not a “panacea,” but rather one among many potential contributors as industry attempts to reduce or eliminate carbon emissions in the coming decades. “Hydrogen has an important role, but it’s not the only answer,” she said.

    “This is real”

    Colleen Wright is vice president of corporate strategy for Constellation, which recently separated from Exelon Corporation. (Exelon now owns the former company’s regulated utilities, such as Commonwealth Edison and Baltimore Gas and Electric, while Constellation owns the competitive generation and supply portions of the business.) Wright stressed the advantages of nuclear power in hydrogen production, which she said include superior economics, low barriers to implementation, and scalability.

    “A quarter of emissions in the world are currently from hard-to-decarbonize sectors — the industrial sector, steel making, heavy-duty transportation, aviation,” she said. “These are really challenging decarbonization sectors, and as we continue to expand and electrify, we’re going to need more supply. We’re also going to need to produce clean hydrogen using emissions-free power.”

    “The scale of nuclear power plants is uniquely suited to be able to scale hydrogen production,” Wright added. She mentioned Constellation’s Nine Mile Point site in the state of New York, which received a DOE grant for a pilot program that will install a proton exchange membrane electrolyzer there.

    “We’re very excited to see hydrogen go from a [research and development] conversation to a commercial conversation,” she said. “We’ve been calling it a little bit of a ‘middle-school dance.’ Everybody is standing around the circle, waiting to see who’s willing to put something at stake. But this is real. We’re not dancing around the edges. There are a lot of people who are big players, who are willing to put skin in the game today.”

  • Evan Leppink: Seeking a way to better stabilize the fusion environment

    “Fusion energy was always one of those kind-of sci-fi technologies that you read about,” says nuclear science and engineering PhD candidate Evan Leppink. He’s recalling the time before fusion became a part of his daily hands-on experience at MIT’s Plasma Science and Fusion Center, where he is studying a unique way to drive current in a tokamak plasma using radiofrequency (RF) waves. 

    Now, an award from the U.S. Department of Energy’s (DOE) Office of Science Graduate Student Research (SCGSR) Program will support his work with a 12-month residency at the DIII-D National Fusion Facility in San Diego, California.

    Like all tokamaks, DIII-D generates hot plasma inside a doughnut-shaped vacuum chamber wrapped with magnets. Because plasma follows magnetic field lines, tokamaks can contain the turbulent plasma fuel as it gets hotter and denser, keeping it away from the edges of the chamber, where it could damage the wall materials. A key feature of the tokamak concept is that part of the magnetic field is created by electrical currents flowing in the plasma itself, which helps to confine and stabilize the configuration. Researchers often launch high-power RF waves into tokamaks to drive that current.

    Leppink will be contributing to research, led by his MIT advisor Steve Wukitch, that pursues launching RF waves in DIII-D using a unique compact antenna placed on the tokamak center column. Typically, antennas are placed inside the tokamak on the outer edge of the doughnut, farthest from the central hole (or column), primarily because access and installation are easier there. This is known as the “low-field side,” because the magnetic field is lower there than at the central column, the “high-field side.” This MIT-led experiment, for the first time, will mount an antenna on the high-field side. There is some theoretical evidence that placing the wave launcher there could improve power penetration and current drive efficiency. And because the plasma environment is less harsh on this side, the antenna will survive longer, a factor important for any future power-producing tokamak.

    Leppink’s work on DIII-D focuses specifically on measuring the density of plasmas generated in the tokamak, for which he developed a “reflectometer.” This small antenna launches microwaves into the plasma, which reflect back to the antenna to be measured. The time that it takes for these microwaves to traverse the plasma provides information about the plasma density, allowing researchers to build up detailed density profiles, data critical for injecting RF power into the plasma.
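    As a sketch of the physics behind such a diagnostic (a generic illustration, not the instrument’s actual code; the frequencies below are assumed for the example), an ordinary-mode probing wave reflects at the layer where the local plasma frequency matches the wave frequency, so each launch frequency probes a specific cutoff density:

    ```python
    # A minimal sketch, assuming O-mode reflectometry: a wave of frequency f
    # reflects at the cutoff density n_c = (2*pi*f)^2 * eps0 * m_e / e^2.
    import numpy as np

    EPS0 = 8.854e-12  # vacuum permittivity, F/m
    M_E = 9.109e-31   # electron mass, kg
    Q_E = 1.602e-19   # elementary charge, C

    def cutoff_density(freq_hz: float) -> float:
        """Electron density (m^-3) at which the plasma frequency equals freq_hz."""
        return (2.0 * np.pi * freq_hz) ** 2 * EPS0 * M_E / Q_E**2

    # Sweeping the probe frequency and timing each echo maps out a density profile.
    for f in (30e9, 60e9, 90e9):  # illustrative microwave frequencies
        print(f"{f / 1e9:.0f} GHz reflects at n_e ~ {cutoff_density(f):.2e} m^-3")
    ```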

    “Research shows that when we try to inject these waves into the plasma to drive the current, they can lose power as they travel through the edge region of the tokamak, and can even have problems entering the core of the plasma, where we would most like to direct them,” says Leppink. “My diagnostic will measure that edge region on the high-field side near the launcher in great detail, which provides us a way to directly verify calculations or compare actual results with simulation results.”

    Although focused on his own research, Leppink has also excelled at preparing other students for success in their studies and research. In 2021 he received the NSE Outstanding Teaching Assistant and Mentorship Award.

    “The highlights of TA’ing for me were the times when I could watch students go from struggling with a difficult topic to fully understanding it, often with just a nudge in the right direction and then allowing them to follow their own intuition the rest of the way,” he says.

    The right direction for Leppink points toward San Diego and RF current drive experiments on DIII-D. He is grateful for the support from the SCGSR, a program created to prepare graduate students like him for science, technology, engineering, or mathematics careers important to the DOE Office of Science mission. It provides graduate thesis research opportunities through extended residency at DOE national laboratories. He has already made several trips to DIII-D, in part to install his reflectometer, and has been impressed with the size of the operation.

    “It takes a little while to kind of compartmentalize everything and say, ‘OK, well, here’s my part of the machine. This is what I’m doing.’ It can definitely be overwhelming at times. But I’m blessed to be able to work on what has been the workhorse tokamak of the United States for the past few decades.”

  • Team creates map for production of eco-friendly metals

    In work that could usher in more efficient, eco-friendly processes for producing important metals like lithium, iron, and cobalt, researchers from MIT and the SLAC National Accelerator Laboratory have mapped what is happening at the atomic level behind a particularly promising approach called metal electrolysis.

    By creating maps for a wide range of metals, they not only determined which metals should be easiest to produce using this approach, but also identified fundamental barriers behind the efficient production of others. As a result, the researchers’ map could become an important design tool for optimizing the production of all these metals.

    The work could also aid the development of metal-air batteries, cousins of the lithium-ion batteries used in today’s electric vehicles.

    Most of the metals key to society today are produced using fossil fuels. These fuels generate the high temperatures necessary to convert the original ore into its purified metal. But that process is a significant source of greenhouse gases — steel alone accounts for some 7 percent of global carbon dioxide emissions. As a result, researchers around the world are working to identify more eco-friendly ways to produce metals.

    One promising approach is metal electrolysis, in which a metal oxide, the ore, is zapped with electricity to create pure metal with oxygen as the byproduct. That is the reaction explored at the atomic level in new research reported in the April 8 issue of the journal Chemistry of Materials.
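    Schematically (a generic textbook relation, not a result from the paper), the overall reaction and the minimum voltage needed to drive it can be written as:

    ```latex
    % Generic metal-oxide electrolysis (illustrative): the ore is reduced to
    % metal, oxygen is evolved, and the minimum cell voltage follows from the
    % Gibbs free energy of the reaction.
    \begin{align*}
      \mathrm{MO}_x \;&\longrightarrow\; \mathrm{M} \;+\; \tfrac{x}{2}\,\mathrm{O}_2,\\
      E_{\min} \;&=\; \frac{\Delta G_{\mathrm{rxn}}}{nF},
      \qquad n = \text{electrons transferred},\quad F = 96{,}485~\mathrm{C/mol}.
    \end{align*}
    ```

    The researchers’ map addresses what this simple picture hides: the elementary metal-oxygen steps that determine how efficiently each metal approaches this ideal.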

    Donald Siegel is department chair and professor of mechanical engineering at the University of Texas at Austin. Says Siegel, who was not involved in the Chemistry of Materials study: “This work is an important contribution to improving the efficiency of metal production from metal oxides. It clarifies our understanding of low-carbon electrolysis processes by tracing the underlying thermodynamics back to elementary metal-oxygen interactions. I expect that this work will aid in the creation of design rules that will make these industrially important processes less reliant on fossil fuels.”

    Yang Shao-Horn, the JR East Professor of Engineering in MIT’s Department of Materials Science and Engineering (DMSE) and Department of Mechanical Engineering, is a leader of the current work, with Michal Bajdich of SLAC.

    “Here we aim to establish some basic understanding to predict the efficiency of electrochemical metal production and metal-air batteries from examining computed thermodynamic barriers for the conversion between metal and metal oxides,” says Shao-Horn, who is on the research team for MIT’s new Center for Electrification and Decarbonization of Industry, a winner of the Institute’s first-ever Climate Grand Challenges competition. Shao-Horn is also affiliated with MIT’s Materials Research Laboratory and Research Laboratory of Electronics.

    In addition to Shao-Horn and Bajdich, other authors of the Chemistry of Materials paper are Jaclyn R. Lunger, first author and a DMSE graduate student; mechanical engineering senior Naomi Lutz; and DMSE graduate student Jiayu Peng.

    Other applications

    The work could also aid in developing metal-air batteries such as lithium-air, aluminum-air, and zinc-air batteries. These cousins of the lithium-ion batteries used in today’s electric vehicles have the potential to electrify aviation because their energy densities are much higher. However, they are not yet on the market due to a variety of problems including inefficiency.

    Charging metal-air batteries also involves electrolysis. As a result, the new atomic-level understanding of these reactions could not only help engineers develop efficient electrochemical routes for metal production, but also design more efficient metal-air batteries.

    Learning from water splitting

    Electrolysis is also used to split water into oxygen and hydrogen, with the hydrogen storing the resulting energy. That hydrogen, in turn, could become an eco-friendly alternative to fossil fuels. Since much more is known about water electrolysis (the focus of Bajdich’s work at SLAC) than about the electrolysis of metal oxides, the team compared the two processes for the first time.

    The result: “Slowly, we uncovered the elementary steps involved in metal electrolysis,” says Bajdich. The work was challenging, says Lunger, because “it was unclear to us what those steps are. We had to figure out how to get from A to B,” or from a metal oxide to metal and oxygen.

    All of the work was conducted with supercomputer simulations. “It’s like a sandbox of atoms, and then we play with them. It’s a little like Legos,” says Bajdich. More specifically, the team explored different scenarios for the electrolysis of several metals. Each involved different catalysts, molecules that boost the speed of a reaction.

    Says Lunger, “To optimize the reaction, you want to find the catalyst that makes it most efficient.” The team’s map is essentially a guide for designing the best catalysts for each different metal.

    What’s next? Lunger noted that the current work focused on the electrolysis of pure metals. “I’m interested in seeing what happens in more complex systems involving multiple metals. Can you make the reaction more efficient if there’s sodium and lithium present, or cadmium and cesium?”

    This work was supported by a U.S. Department of Energy Office of Science Graduate Student Research award. It was also supported by an MIT Energy Initiative fellowship, the Toyota Research Institute through the Accelerated Materials Design and Discovery Program, the Catalysis Science Program of the Department of Energy’s Office of Basic Energy Sciences, and the Differentiate program of the U.S. Advanced Research Projects Agency–Energy.

  • Using excess heat to improve electrolyzers and fuel cells

    Reducing the use of fossil fuels will have unintended consequences for the power-generation industry and beyond. For example, many industrial chemical processes use fossil-fuel byproducts as precursors to asphalt, glycerine, and other important chemicals. One way to reduce the impact of the loss of fossil fuels on industrial chemical processes is to store and use the heat that nuclear fission produces. New MIT research has dramatically improved a way to put that heat toward generating chemicals through a process called electrolysis.

    Electrolyzers are devices that use electricity to split water (H2O) into molecules of hydrogen (H2) and oxygen (O2). Hydrogen is used in fuel cells to generate electricity that can drive electric cars or drones, and in industrial operations like the production of steel, ammonia, and polymers. Electrolyzers can also take in water and carbon dioxide (CO2) and produce oxygen and ethylene (C2H4), a chemical used in polymers and elsewhere.

    There are three main types of electrolyzers. One type works at room temperature but has downsides: it is inefficient and requires rare metals, such as platinum. A second type is more efficient but runs at high temperatures, above 700 degrees Celsius. But metals corrode at that temperature, and the devices need expensive sealing and insulation. The third type would be a Goldilocks solution for nuclear heat if it were perfected, running at 300-600 C and requiring mostly cheap materials like stainless steel. Yet these cells have never operated as efficiently as theory says they should. The new work, published this month in Nature, both illuminates the problem and offers a solution.

    A sandwich mystery

    The intermediate-temperature devices use what are called protonic ceramic electrochemical cells. Each cell is a sandwich, with a dense electrolyte layered between two porous electrodes. Water vapor is pumped into the top electrode. A wire on the side connects the two electrodes, and externally generated electricity runs from the top to the bottom. The voltage pulls electrons out of the water, which splits the molecule, releasing oxygen. A hydrogen atom without an electron is just a proton. The protons get pulled through the electrolyte to rejoin with the electrons at the bottom electrode and form H2 molecules, which are then collected.
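    The bookkeeping for how much hydrogen a given current yields follows Faraday’s law of electrolysis (a standard relation; the cell values below are illustrative, not from the study):

    ```python
    # A minimal sketch, assuming ideal faradaic behavior: every two electrons
    # pushed through the cell liberate one H2 molecule at the bottom electrode.
    F = 96485.0  # Faraday constant, C/mol

    def h2_rate(current_a: float, faradaic_efficiency: float = 1.0) -> float:
        """Hydrogen production rate (mol/s) for a given cell current (A)."""
        return faradaic_efficiency * current_a / (2.0 * F)

    # Illustrative: a 10 cm^2 cell driven at 1 A/cm^2
    rate = h2_rate(current_a=10.0)
    print(f"{rate:.2e} mol H2/s, about {rate * 2.016:.2e} g/s")
    ```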

    On its own, the electrolyte in the middle, made mainly of barium, cerium, and zirconium, conducts protons very well. “But when we put the same material into this three-layer device, the proton conductivity of the full cell is pretty bad,” says Yanhao Dong, a postdoc in MIT’s Department of Nuclear Science and Engineering and a paper co-author. “Its conductivity is only about 50 percent of the bulk form’s. We wondered why there’s an inconsistency here.”

    A couple of clues pointed them in the right direction. First, if they don’t prepare the cell very carefully, the top layer, only about 20 microns (0.02 millimeters) thick, doesn’t stay attached. “Sometimes if you use just Scotch tape, it will peel off,” Dong says. Second, when they looked at a cross section of a device using a scanning electron microscope, they saw that the top surface of the electrolyte layer was flat, whereas the bottom surface of the porous electrode sitting on it was bumpy, and the two came into contact in only a few places, so they didn’t bond well. That precarious interface leads to both structural delamination and poor proton passage from the electrode to the electrolyte.

    Acidic solution

    The solution turned out to be simple: the researchers roughed up the top of the electrolyte. Specifically, they applied acid for 10 minutes, which etched grooves into the surface. Ju Li, the Battelle Energy Alliance Professor in Nuclear Engineering and professor of materials science and engineering at MIT, and a paper co-author, likens it to sandblasting a surface before applying paint to increase adhesion. Their acid-treated cells produced about 200 percent more hydrogen per area at 1.5 volts at 600 C than any previous cell of its type, and worked well down to 350 C with very little performance decay over extended operation.

    “The authors reported a surprisingly simple yet highly effective surface treatment to dramatically improve the interface,” says Liangbing Hu, the director of the Center for Materials Innovation at the Maryland Energy Innovation Institute, who was not involved in the work. He calls the cell performance “exceptional.”

    “We are excited and surprised” by the results, Dong says. “The engineering solution seems quite simple. And that’s actually good, because it makes it very applicable to real applications.” In a practical product, many such cells would be stacked together to form a module. MIT’s partner in the project, Idaho National Laboratory, is very strong in engineering and prototyping, so Li expects to see electrolyzers based on this technology at scale before too long. “At the materials level, this is a breakthrough that shows that at a real-device scale you can work at this sweet spot of temperature of 350 to 600 degrees Celsius for nuclear fission and fusion reactors,” he says.

    “Reduced operating temperature enables cheaper materials for the large-scale assembly, including the stack,” says Idaho National Laboratory researcher and paper co-author Dong Ding. “The technology operates within the same temperature range as several important, current industrial processes, including ammonia production and CO2 reduction. Matching these temperatures will expedite the technology’s adoption within the existing industry.”

    “This is very significant for both Idaho National Lab and us,” Li adds, “because it bridges nuclear energy and renewable electricity.” He notes that the technology could also help fuel cells, which are basically electrolyzers run in reverse, using green hydrogen or hydrocarbons to generate electricity. According to Wei Wu, a materials scientist at Idaho National Laboratory and a paper co-author, “this technique is quite universal and compatible with other solid electrochemical devices.”

    Dong says it’s rare for a paper to advance both science and engineering to such a degree. “We are happy to combine those together and get both very good scientific understanding and also very good real-world performance.”

    This work, done in collaboration with Idaho National Laboratory, New Mexico State University, and the University of Nebraska–Lincoln, was funded, in part, by the U.S. Department of Energy.

  • A new heat engine with no moving parts is as efficient as a steam turbine

    Engineers at MIT and the National Renewable Energy Laboratory (NREL) have designed a heat engine with no moving parts. Their new demonstrations show that it converts heat to electricity with over 40 percent efficiency — a performance better than that of traditional steam turbines.

    The heat engine is a thermophotovoltaic (TPV) cell, similar to a solar panel’s photovoltaic cells, that passively captures high-energy photons from a white-hot heat source and converts them into electricity. The team’s design can generate electricity from a heat source of between 1,900 and 2,400 degrees Celsius, or up to about 4,300 degrees Fahrenheit.

    The researchers plan to incorporate the TPV cell into a grid-scale thermal battery. The system would absorb excess energy from renewable sources such as the sun and store that energy in heavily insulated banks of hot graphite. When the energy is needed, such as on overcast days, TPV cells would convert the heat into electricity, and dispatch the energy to a power grid.

    With the new TPV cell, the team has now successfully demonstrated the main parts of the system in separate, small-scale experiments. They are working to integrate the parts to demonstrate a fully operational system. From there, they hope to scale up the system to replace fossil-fuel-driven power plants and enable a fully decarbonized power grid, supplied entirely by renewable energy.

    “Thermophotovoltaic cells were the last key step toward demonstrating that thermal batteries are a viable concept,” says Asegun Henry, the Robert N. Noyce Career Development Professor in MIT’s Department of Mechanical Engineering. “This is an absolutely critical step on the path to proliferate renewable energy and get to a fully decarbonized grid.”

    Henry and his collaborators have published their results today in the journal Nature. Co-authors at MIT include Alina LaPotin, Kevin Schulte, Kyle Buznitsky, Colin Kelsall, Andrew Rohskopf, and Evelyn Wang, the Ford Professor of Engineering and head of the Department of Mechanical Engineering, along with collaborators at NREL in Golden, Colorado.

    Jumping the gap

    More than 90 percent of the world’s electricity comes from sources of heat such as coal, natural gas, nuclear energy, and concentrated solar energy. For a century, steam turbines have been the industrial standard for converting such heat sources into electricity.

    On average, steam turbines reliably convert about 35 percent of a heat source into electricity, with about 60 percent representing the highest efficiency of any heat engine to date. But the machinery depends on moving parts that are temperature-limited. Heat sources higher than 2,000 degrees Celsius, such as Henry’s proposed thermal battery system, would be too hot for turbines.

    In recent years, scientists have looked into solid-state alternatives — heat engines with no moving parts that could potentially work efficiently at higher temperatures.

    “One of the advantages of solid-state energy converters is that they can operate at higher temperatures with lower maintenance costs because they have no moving parts,” Henry says. “They just sit there and reliably generate electricity.”

    Thermophotovoltaic cells offered one exploratory route toward solid-state heat engines. Much like solar cells, TPV cells could be made from semiconducting materials with a particular bandgap — the gap between a material’s valence band and its conduction band. If a photon with a high enough energy is absorbed by the material, it can kick an electron across the bandgap, where the electron can then conduct, and thereby generate electricity — doing so without moving rotors or blades.

    To date, most TPV cells have only reached efficiencies of around 20 percent, with the record at 32 percent, as they have been made of relatively low-bandgap materials that convert lower-temperature, low-energy photons, and therefore convert energy less efficiently.

    Catching light

    In their new TPV design, Henry and his colleagues looked to capture higher-energy photons from a higher-temperature heat source, thereby converting energy more efficiently. The team’s new cell does so with higher-bandgap materials and multiple junctions, or material layers, compared with existing TPV designs.
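    To see why emitter temperature and bandgap must be matched (a back-of-envelope illustration of this reasoning, not the team’s model; the 1.4 eV bandgap is an assumed value), one can estimate the fraction of emitted photons energetic enough to clear a given bandgap:

    ```python
    # A minimal sketch: fraction of blackbody photons above a bandgap, using
    # the Planck photon-flux spectrum ~ E^2 / (exp(E/kT) - 1). The 1.4 eV gap
    # is an illustrative value, not the cell's actual bandgap.
    import numpy as np

    KB_EV = 8.617e-5  # Boltzmann constant, eV/K

    def fraction_above_gap(temp_k: float, gap_ev: float, n: int = 200_000) -> float:
        e = np.linspace(1e-3, 10.0, n)                # photon energies, eV
        flux = e**2 / np.expm1(e / (KB_EV * temp_k))  # relative photon flux
        return flux[e > gap_ev].sum() / flux.sum()

    for t_c in (1900, 2400):  # the demonstrated operating range, in Celsius
        frac = fraction_above_gap(t_c + 273.0, gap_ev=1.4)
        print(f"{t_c} C emitter: {frac:.1%} of photons exceed a 1.4 eV gap")
    ```

    Hotter sources push more of the spectrum above the gap, which is why pairing high-bandgap junctions with a white-hot emitter, and recycling the sub-bandgap photons, pays off.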

    The cell is fabricated from three main regions: a high-bandgap alloy, which sits over a slightly lower-bandgap alloy, underneath which is a mirror-like layer of gold. The first layer captures a heat source’s highest-energy photons and converts them into electricity, while lower-energy photons that pass through the first layer are captured by the second and converted to add to the generated voltage. Any photons that pass through this second layer are then reflected by the mirror, back to the heat source, rather than being absorbed as wasted heat.

    The team tested the cell’s efficiency by placing it over a heat flux sensor — a device that directly measures the heat absorbed from the cell. They exposed the cell to a high-temperature lamp and concentrated the light onto the cell. They then varied the bulb’s intensity, or temperature, and observed how the cell’s power efficiency — the amount of power it produced, compared with the heat it absorbed — changed with temperature. Over a range of 1,900 to 2,400 degrees Celsius, the new TPV cell maintained an efficiency of around 40 percent.

    “We can get a high efficiency over a broad range of temperatures relevant for thermal batteries,” Henry says.

    The cell in the experiments is about a square centimeter. For a grid-scale thermal battery system, Henry envisions the TPV cells would have to scale up to about 10,000 square feet (about a quarter of a football field), and would operate in climate-controlled warehouses to draw power from huge banks of stored solar energy. He points out that an infrastructure exists for making large-scale photovoltaic cells, which could also be adapted to manufacture TPVs.

    “There’s definitely a huge net positive here in terms of sustainability,” Henry says. “The technology is safe, environmentally benign in its life cycle, and can have a tremendous impact on abating carbon dioxide emissions from electricity production.”

    This research was supported, in part, by the U.S. Department of Energy.

  • A better way to separate gases

    Industrial processes for chemical separations, including natural gas purification and the production of oxygen and nitrogen for medical or industrial uses, are collectively responsible for about 15 percent of the world’s energy use. They also contribute a corresponding share of the world’s greenhouse gas emissions. Now, researchers at MIT and Stanford University have developed a new kind of membrane for carrying out these separation processes with roughly one-tenth the energy use and emissions.

    Using membranes for separation of chemicals is known to be much more efficient than processes such as distillation or absorption, but there has always been a tradeoff between permeability — how fast gases can penetrate through the material — and selectivity — the ability to let the desired molecules pass through while blocking all others. The new family of membrane materials, based on “hydrocarbon ladder” polymers, overcomes that tradeoff, providing both high permeability and extremely good selectivity, the researchers say.

    The findings are reported today in the journal Science, in a paper by Yan Xia, an associate professor of chemistry at Stanford; Zachary Smith, an assistant professor of chemical engineering at MIT; Ingo Pinnau, a professor at King Abdullah University of Science and Technology, and five others.

    Gas separation is an important and widespread industrial process whose uses include removing impurities and undesired compounds from natural gas or biogas, separating oxygen and nitrogen from air for medical and industrial purposes, separating carbon dioxide from other gases for carbon capture, and producing hydrogen for use as a carbon-free transportation fuel. The new ladder polymer membranes show promise for drastically improving the performance of such separation processes. For example, in separating carbon dioxide from methane, these new membranes have five times the selectivity and 100 times the permeability of existing cellulosic membranes. Similarly, they are 100 times more permeable and three times as selective for separating hydrogen gas from methane.
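    Those two figures of merit follow the standard solution-diffusion picture of membrane permeation (a generic sketch with made-up values, not data from the paper):

    ```python
    # A minimal sketch of standard membrane figures of merit (illustrative values).
    BARRER = 1e-10  # cm^3(STP)*cm / (cm^2 * s * cmHg)

    def flux(perm_barrer: float, dp_cmhg: float, thickness_cm: float) -> float:
        """Gas flux, cm^3(STP)/(cm^2*s): permeability * pressure drop / thickness."""
        return perm_barrer * BARRER * dp_cmhg / thickness_cm

    def ideal_selectivity(perm_fast: float, perm_slow: float) -> float:
        """Ratio of the two gases' permeabilities."""
        return perm_fast / perm_slow

    # Hypothetical permeabilities (in Barrer) for a CO2/CH4 separation
    p_co2, p_ch4 = 1000.0, 25.0
    print(f"CO2/CH4 ideal selectivity: {ideal_selectivity(p_co2, p_ch4):.0f}")
    print(f"CO2 flux, 1-micron film, 1 atm drop: "
          f"{flux(p_co2, 76.0, 1e-4):.2e} cm^3(STP)/(cm^2*s)")
    ```

    The point of the new ladder polymers is that they push both numbers up at once, where conventional materials trade one against the other.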

    The new type of polymers, developed over the last several years by the Xia lab, are referred to as ladder polymers because they are formed from double strands connected by rung-like bonds, and these linkages provide a high degree of rigidity and stability to the polymer material. These ladder polymers are synthesized via an efficient and selective chemistry the Xia lab developed called CANAL, an acronym for catalytic arene-norbornene annulation, which stitches readily available chemicals into ladder structures with hundreds or even thousands of rungs. The polymers are synthesized in a solution, where they form rigid and kinked ribbon-like strands that can easily be made into a thin sheet with sub-nanometer-scale pores by using industrially available polymer casting processes. The sizes of the resulting pores can be tuned through the choice of the specific hydrocarbon starting compounds. “This chemistry and choice of chemical building blocks allowed us to make very rigid ladder polymers with different configurations,” Xia says.

    To apply the CANAL polymers as selective membranes, the collaboration made use of Xia’s expertise in polymers and Smith’s specialization in membrane research. Holden Lai, a former Stanford doctoral student, carried out much of the development and exploration of how their structures impact gas permeation properties. “It took us eight years from developing the new chemistry to finding the right polymer structures that bestow the high separation performance,” Xia says.

    The Xia lab spent the past several years varying the structures of CANAL polymers to understand how structure affects separation performance. Surprisingly, they found that adding additional kinks to their original CANAL polymers significantly improved the mechanical robustness of their membranes and boosted their selectivity for molecules of similar sizes, such as oxygen and nitrogen gases, without losing permeability of the more permeable gas. The selectivity actually improves as the material ages. The combination of high selectivity and high permeability makes these materials outperform all other polymer materials in many gas separations, the researchers say.

    Today, 15 percent of global energy use goes into chemical separations, and these separation processes are “often based on century-old technologies,” Smith says. “They work well, but they have an enormous carbon footprint and consume massive amounts of energy. The key challenge today is trying to replace these nonsustainable processes.” Most of these processes require high temperatures for boiling and reboiling solutions, and these often are the hardest processes to electrify, he adds.

    For the separation of oxygen and nitrogen from air, the two molecules differ in size by only about 0.18 angstroms (an angstrom is one ten-billionth of a meter), he says. To make a filter capable of separating them efficiently “is incredibly difficult to do without decreasing throughput.” But the new ladder polymers, when manufactured into membranes, produce tiny pores that achieve high selectivity, he says. In some cases, 10 oxygen molecules permeate for every nitrogen molecule, despite the razor-thin sieve needed to achieve this type of size selectivity. These new membrane materials have “the highest combination of permeability and selectivity of all known polymeric materials for many applications,” Smith says.

    “Because CANAL polymers are strong and ductile, and because they are soluble in certain solvents, they could be scaled for industrial deployment within a few years,” he adds. An MIT spinoff company called Osmoses, led by authors of this study, recently won the MIT $100K entrepreneurship competition and has been partly funded by The Engine to commercialize the technology.

    There are a variety of potential applications for these materials in the chemical processing industry, Smith says, including the separation of carbon dioxide from other gas mixtures as a form of emissions reduction. Another possibility is the purification of biogas fuel made from agricultural waste products in order to provide carbon-free transportation fuel. Hydrogen separation for producing a fuel or a chemical feedstock could also be carried out efficiently, helping with the transition to a hydrogen-based economy.

    The close-knit team of researchers is continuing to refine the process to facilitate the development from laboratory to industrial scale, and to better understand the details on how the macromolecular structures and packing result in the ultrahigh selectivity. Smith says he expects this platform technology to play a role in multiple decarbonization pathways, starting with hydrogen separation and carbon capture, because there is such a pressing need for these technologies in order to transition to a carbon-free economy.

    “These are impressive new structures that have outstanding gas separation performance,” says Ryan Lively, an associate professor of chemical and biomolecular engineering at Georgia Tech, who was not involved in this work. “Importantly, this performance is improved during membrane aging and when the membranes are challenged with concentrated gas mixtures. … If they can scale these materials and fabricate membrane modules, there is significant potential practical impact.”

    The research team also included Jun Myun Ahn and Ashley Robinson at Stanford, Francesco Benedetti at MIT, now the chief executive officer at Osmoses, and Yingge Wang at King Abdullah University of Science and Technology in Saudi Arabia. The work was supported by the Stanford Natural Gas Initiative, the Sloan Research Fellowship, the U.S. Department of Energy Office of Basic Energy Sciences, and the National Science Foundation.

  • Using artificial intelligence to find anomalies hiding in massive datasets

    Identifying a malfunction in the nation’s power grid can be like trying to find a needle in an enormous haystack. Hundreds of thousands of interrelated sensors spread across the U.S. capture data on electric current, voltage, and other critical information in real time, often taking multiple recordings per second.

    Researchers at the MIT-IBM Watson AI Lab have devised a computationally efficient method that can automatically pinpoint anomalies in those data streams in real time. They demonstrated that their artificial intelligence method, which learns to model the interconnectedness of the power grid, is much better at detecting these glitches than some other popular techniques.

    Because the machine-learning model they developed does not require annotated data on power grid anomalies for training, it would be easier to apply in real-world situations where high-quality, labeled datasets are often hard to come by. The model is also flexible and can be applied to other situations where a vast number of interconnected sensors collect and report data, like traffic monitoring systems. It could, for example, identify traffic bottlenecks or reveal how traffic jams cascade.

    “In the case of a power grid, people have tried to capture the data using statistics and then define detection rules with domain knowledge to say that, for example, if the voltage surges by a certain percentage, then the grid operator should be alerted. Such rule-based systems, even empowered by statistical data analysis, require a lot of labor and expertise. We show that we can automate this process and also learn patterns from the data using advanced machine-learning techniques,” says senior author Jie Chen, a research staff member and manager of the MIT-IBM Watson AI Lab.

    The co-author is Enyan Dai, an MIT-IBM Watson AI Lab intern and graduate student at the Pennsylvania State University. This research will be presented at the International Conference on Learning Representations.

    Probing probabilities

    The researchers began by defining an anomaly as an event that has a low probability of occurring, like a sudden spike in voltage. They treat the power grid data as a probability distribution, so if they can estimate the probability densities, they can identify the low-density values in the dataset. The data points least likely to occur correspond to anomalies.

    Estimating those probabilities is no easy task, especially since each sample captures multiple time series, and each time series is a set of multidimensional data points recorded over time. Plus, the sensors that capture all that data depend on one another, meaning they are connected in a certain configuration and one sensor can sometimes influence others.

    To learn the complex conditional probability distribution of the data, the researchers used a special type of deep-learning model called a normalizing flow, which is particularly effective at estimating the probability density of a sample.

    They augmented that normalizing flow model using a type of graph, known as a Bayesian network, which can learn the complex, causal relationship structure between different sensors. This graph structure enables the researchers to see patterns in the data and estimate anomalies more accurately, Chen explains.

    “The sensors are interacting with each other, and they have causal relationships and depend on each other. So, we have to be able to inject this dependency information into the way that we compute the probabilities,” he says.

    This Bayesian network factorizes, or breaks down, the joint probability of the multiple time series data into less complex, conditional probabilities that are much easier to parameterize, learn, and evaluate. This allows the researchers to estimate the likelihood of observing certain sensor readings, and to identify those readings that have a low probability of occurring, meaning they are anomalies.

    Their method is especially powerful because this complex graph structure does not need to be defined in advance — the model can learn the graph on its own, in an unsupervised manner.
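    As a toy illustration of that factorization (a deliberate simplification: here the graph is fixed rather than learned, and simple Gaussian conditionals stand in for the learned normalizing flows), the anomaly score is just the summed conditional log-density:

    ```python
    # A toy sketch: score each sample by sum_i log p(x_i | parents(x_i)) and
    # flag low-probability samples as anomalies.
    import numpy as np
    from scipy.stats import norm

    PARENTS = {0: [], 1: [0], 2: [1]}  # hypothetical 3-sensor dependency graph

    def joint_log_prob(x: np.ndarray) -> float:
        """Factorized joint log-density of one sample (one value per sensor)."""
        logp = 0.0
        for i, pa in PARENTS.items():
            mean = x[pa].sum() if pa else 0.0  # crude stand-in conditional
            logp += norm.logpdf(x[i], loc=mean, scale=1.0)
        return logp

    readings = np.array([[0.1, 0.2, 0.1],    # ordinary sample
                         [0.0, 5.0, -4.0]])  # sample with a sudden spike
    scores = np.array([joint_log_prob(r) for r in readings])
    THRESHOLD = -10.0  # assumed; in practice tuned on held-out data
    print(scores, "anomalous:", scores < THRESHOLD)
    ```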

    A powerful technique

    They tested this framework by seeing how well it could identify anomalies in power grid data, traffic data, and water system data. The datasets they used for testing contained anomalies that had been identified by humans, so the researchers were able to compare the anomalies their model identified with real glitches in each system.

    Their model outperformed all the baselines by detecting a higher percentage of true anomalies in each dataset.

    “For the baselines, a lot of them don’t incorporate graph structure. That perfectly corroborates our hypothesis. Figuring out the dependency relationships between the different nodes in the graph is definitely helping us,” Chen says.

    Their methodology is also flexible. Armed with a large, unlabeled dataset, they can tune the model to make effective anomaly predictions in other situations, like traffic patterns.

    Once the model is deployed, it would continue to learn from a steady stream of new sensor data, adapting to possible drift of the data distribution and maintaining accuracy over time, says Chen.

    Though this particular project is close to its end, he looks forward to applying the lessons he learned to other areas of deep-learning research, particularly on graphs.

    Chen and his colleagues could use this approach to develop models that map other complex, conditional relationships. They also want to explore how they can efficiently learn these models when the graphs become enormous, perhaps with millions or billions of interconnected nodes. And rather than finding anomalies, they could also use this approach to improve the accuracy of forecasts based on datasets or streamline other classification techniques.

    This work was funded by the MIT-IBM Watson AI Lab and the U.S. Department of Energy.