More stories

  • MIT design would harness 40 percent of the sun’s heat to produce clean hydrogen fuel

    MIT engineers aim to produce totally green, carbon-free hydrogen fuel with a new, train-like system of reactors that is driven solely by the sun.

    In a study appearing today in Solar Energy Journal, the engineers lay out the conceptual design for a system that can efficiently produce “solar thermochemical hydrogen.” The system harnesses the sun’s heat to directly split water and generate hydrogen — a clean fuel that can power long-distance trucks, ships, and planes, while emitting no greenhouse gases in the process.

    Today, hydrogen is largely produced through processes that involve natural gas and other fossil fuels, making the otherwise green fuel more of a “grey” energy source when considered from the start of its production to its end use. In contrast, solar thermochemical hydrogen, or STCH, offers a totally emissions-free alternative, as it relies entirely on renewable solar energy to drive hydrogen production. But so far, existing STCH designs have limited efficiency: Only about 7 percent of incoming sunlight is used to make hydrogen. The results so far have been low-yield and high-cost.

    In a big step toward realizing solar-made fuels, the MIT team estimates its new design could harness up to 40 percent of the sun’s heat to generate that much more hydrogen. The increase in efficiency could drive down the system’s overall cost, making STCH a potentially scalable, affordable option to help decarbonize the transportation industry.

    “We’re thinking of hydrogen as the fuel of the future, and there’s a need to generate it cheaply and at scale,” says the study’s lead author, Ahmed Ghoniem, the Ronald C. Crane Professor of Mechanical Engineering at MIT. “We’re trying to achieve the Department of Energy’s goal, which is to make green hydrogen by 2030, at $1 per kilogram. To improve the economics, we have to improve the efficiency and make sure most of the solar energy we collect is used in the production of hydrogen.”

    Ghoniem’s study co-authors are Aniket Patankar, first author and MIT postdoc; Harry Tuller, MIT professor of materials science and engineering; Xiao-Yu Wu of the University of Waterloo; and Wonjae Choi at Ewha Womans University in South Korea.

    Solar stations

    Similar to other proposed designs, the MIT system would be paired with an existing source of solar heat, such as a concentrated solar plant (CSP) — a circular array of hundreds of mirrors that collect and reflect sunlight to a central receiving tower. An STCH system then absorbs the receiver’s heat and directs it to split water and produce hydrogen. This process is very different from electrolysis, which uses electricity instead of heat to split water.

    At the heart of a conceptual STCH system is a two-step thermochemical reaction. In the first step, water in the form of steam is exposed to a metal. This causes the metal to grab oxygen from steam, leaving hydrogen behind. This metal “oxidation” is similar to the rusting of iron in the presence of water, but it occurs much faster. Once hydrogen is separated, the oxidized (or rusted) metal is reheated in a vacuum, which acts to reverse the rusting process and regenerate the metal. With the oxygen removed, the metal can be cooled and exposed to steam again to produce more hydrogen. This process can be repeated hundreds of times.
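
    As a rough schematic (the article does not name the redox material; MO_x below stands in for a generic metal oxide, and δ is the amount of oxygen exchanged per cycle), the two steps can be written as:

    $$\mathrm{MO_{x-\delta}} + \delta\,\mathrm{H_2O} \;\longrightarrow\; \mathrm{MO_{x}} + \delta\,\mathrm{H_2} \qquad \text{(steam oxidation, releases hydrogen)}$$
    $$\mathrm{MO_{x}} \;\xrightarrow{\;\text{solar heat, vacuum}\;}\; \mathrm{MO_{x-\delta}} + \tfrac{\delta}{2}\,\mathrm{O_2} \qquad \text{(thermal reduction, regenerates the metal)}$$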

    The MIT system is designed to optimize this process. The system as a whole resembles a train of box-shaped reactors running on a circular track. In practice, this track would be set around a solar thermal source, such as a CSP tower. Each reactor in the train would house the metal that undergoes the redox, or reversible rusting, process.

    Each reactor would first pass through a hot station, where it would be exposed to the sun’s heat at temperatures of up to 1,500 degrees Celsius. This extreme heat would effectively pull oxygen out of a reactor’s metal. That metal would then be in a “reduced” state — ready to grab oxygen from steam. For this to happen, the reactor would move to a cooler station at temperatures around 1,000 C, where it would be exposed to steam to produce hydrogen.

    Rust and rails

    Other similar STCH concepts have run up against a common obstacle: what to do with the heat released by the reduced reactor as it is cooled. Without recovering and reusing this heat, the system’s efficiency is too low to be practical.

    A second challenge has to do with creating an energy-efficient vacuum where metal can de-rust. Some prototypes generate a vacuum using mechanical pumps, though the pumps are too energy-intensive and costly for large-scale hydrogen production.

    To address these challenges, the MIT design incorporates several energy-saving workarounds. To recover most of the heat that would otherwise escape from the system, reactors on opposite sides of the circular track are allowed to exchange heat through thermal radiation; hot reactors get cooled while cool reactors get heated. This keeps the heat within the system. The researchers also added a second set of reactors that would circle around the first train, moving in the opposite direction. This outer train of reactors would operate at generally cooler temperatures and would be used to evacuate oxygen from the hotter inner train, without the need for energy-consuming mechanical pumps.

    These outer reactors would carry a second type of metal that can also easily oxidize. As they circle around, the outer reactors would absorb oxygen from the inner reactors, effectively de-rusting the original metal, without having to use energy-intensive vacuum pumps. Both reactor trains would run continuously and would generate separate streams of pure hydrogen and oxygen.
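
    To make the radiative heat-recovery idea concrete, here is a deliberately simple toy sketch (not the MIT model): one hot, freshly reduced reactor facing one cooler, freshly oxidized reactor across the track, exchanging heat by thermal radiation. The area, mass, and heat capacity below are illustrative assumptions, and emissivity and view factor are taken as 1.

    ```python
    # Toy two-body radiative exchange between a hot (reduced) and a cold (oxidized)
    # reactor facing each other across the circular track. All parameters are
    # illustrative assumptions, not values from the study.
    SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
    AREA = 1.0         # facing surface area of each reactor, m^2 (assumed)
    MASS = 50.0        # thermal mass of each reactor, kg (assumed)
    CP = 800.0         # specific heat of the redox material, J/(kg K) (assumed)
    DT = 1.0           # time step, s

    def exchange(t_hot, t_cold, seconds):
        """March radiative heat transfer from the hot reactor to the cold one."""
        recovered = 0.0
        for _ in range(int(seconds / DT)):
            q = SIGMA * AREA * (t_hot**4 - t_cold**4)   # net flow, W, hot -> cold
            t_hot -= q * DT / (MASS * CP)
            t_cold += q * DT / (MASS * CP)
            recovered += q * DT
        return t_hot, t_cold, recovered

    if __name__ == "__main__":
        hot, cold = 1773.0, 1273.0   # roughly 1,500 C and 1,000 C, in kelvin
        hot, cold, joules = exchange(hot, cold, seconds=600)
        print(f"After 10 minutes: hot reactor at {hot:.0f} K, cold reactor at {cold:.0f} K")
        print(f"Heat shifted to the cold side: {joules / 1e6:.1f} MJ")
    ```

    Every joule moved this way is a joule of solar input the hot station does not have to supply when the cooled reactor comes back around.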

    The researchers carried out detailed simulations of the conceptual design, and found that it would significantly boost the efficiency of solar thermochemical hydrogen production, from 7 percent, as previous designs have demonstrated, to 40 percent.

    “We have to think of every bit of energy in the system, and how to use it, to minimize the cost,” Ghoniem says. “And with this design, we found that everything can be powered by heat coming from the sun. It is able to use 40 percent of the sun’s heat to produce hydrogen.”

    “If this can be realized, it could drastically change our energy future — namely, enabling hydrogen production, 24/7,” says Christopher Muhich, an assistant professor of chemical engineering at Arizona State University, who was not involved in the research. “The ability to make hydrogen is the linchpin to producing liquid fuels from sunlight.”

    In the next year, the team will be building a prototype of the system that they plan to test in concentrated solar power facilities at laboratories of the Department of Energy, which is currently funding the project.

    “When fully implemented, this system would be housed in a little building in the middle of a solar field,” Patankar explains. “Inside the building, there could be one or more trains each having about 50 reactors. And we think this could be a modular system, where you can add reactors to a conveyor belt, to scale up hydrogen production.”

    This work was supported by the Centers for Mechanical Engineering Research and Education at MIT and SUSTech.

  • Printing a new approach to fusion power plant materials

    When Alexander O’Brien sent in his application for graduate school at MIT’s Department of Nuclear Science and Engineering, he had a germ of a research idea already brewing. So when he received a phone call from Professor Mingda Li, he shared it: The student from Arkansas wanted to explore the design of materials that could hold nuclear reactors together.

    Li listened to him patiently and then said, “I think you’d be a really good fit for Professor Ju Li,” O’Brien remembers. Ju Li, the Battelle Energy Alliance Professor in Nuclear Engineering, had wanted to explore 3D printing for nuclear reactors and O’Brien seemed like the right candidate. “At that moment I decided to go to MIT if they accepted me,” O’Brien remembers.

    And they did.

    Under the advisement of Ju Li, the fourth-year doctoral student now explores 3D printing of ceramic-metal composites, materials that can be used to construct fusion power plants.

    An early interest in the sciences

    Growing up in Springdale, Arkansas as a self-described “band nerd,” O’Brien was particularly interested in chemistry and physics. It was one thing to mix baking soda and vinegar to make a “volcano” and quite another to understand why that was happening. “I just enjoyed understanding things on a deeper level and being able to figure out how the world works,” he says.

    At the same time, it was difficult to ignore the economics of energy playing out in his own backyard. When Arkansas, a place that had hardly ever seen earthquakes, started registering them in the wake of fracking in neighboring Oklahoma, it was “like a lightbulb moment” for O’Brien. “I knew this was going to create problems down the line, I knew there’s got to be a better way to do [energy],” he says.

    With the idea of energy alternatives simmering on the back burner, O’Brien enrolled for undergraduate studies at the University of Arkansas. He participated in the school’s marching band — “you show up a week before everyone else and there’s 400 people who automatically become your friends” — and enjoyed the social environment that a large state school could offer.

    O’Brien double-majored in chemical engineering and physics and appreciated “the ability to get your hands dirty on machinery to make things work.” Deciding to begin exploring his interest in energy alternatives, O’Brien researched transition metal dichalcogenides, coatings of which could catalyze the hydrogen evolution reaction and more easily create hydrogen gas, a green energy alternative.

    It was shortly after his sophomore year, however, that O’Brien really found his way in the field of energy alternatives — in nuclear engineering. The American Chemical Society was soliciting student applications for summer study of nuclear chemistry in San Jose, California. O’Brien applied and got accepted. “After years of knowing I wanted to work in green energy but not knowing what that looked like, I very quickly fell in love with [nuclear engineering],” he says. That summer also cemented O’Brien’s decision to attend graduate school. “I came away with this idea of ‘I need to go to grad school because I need to know more about this,’” he says.

    O’Brien especially appreciated an independent project, assigned as part of the summer program: He chose to research nuclear-powered spacecraft. In digging deeper, O’Brien discovered the challenges of powering spacecraft — nuclear was the most viable alternative, but it had to work around extraneous radiation sources in space. Getting to explore national laboratories near San Jose sealed the deal. “I got to visit the National Ignition Facility, which is the big fusion center up there, and just seeing that massive facility entirely designed around this one idea of fusion was kind of mind-blowing to me,” O’Brien says.

    A fresh blueprint for fusion power plants

    O’Brien’s current research at MIT’s Department of Nuclear Science and Engineering (NSE) is equally mind-blowing.

    As the design of new fusion devices kicks into gear, it’s becoming increasingly apparent that the materials we have been using just don’t hold up to the higher temperatures and radiation levels in operating environments, O’Brien says. Additive manufacturing, another term for 3D printing, “opens up a whole new realm of possibilities for what you can do with metals, which is exactly what you’re going to need [to build the next generation of fusion power plants],” he says.

    Metals and ceramics by themselves might not do the job of withstanding high temperatures (750 degrees Celsius is the target) and stresses and radiation, but together they might get there. Although such metal matrix composites have been around for decades, they have been impractical for use in reactors because they’re “difficult to make with any kind of uniformity and really limited in size scale,” O’Brien says. That’s because when you try to place ceramic nanoparticles into a pool of molten metal, they’re going to fall out in whichever direction they want. “3D printing quickly changes that story entirely, to the point where if you want to add these nanoparticles in very specific regions, you have the capability to do that,” O’Brien says.

    O’Brien’s work, which forms the basis of his doctoral thesis and a research paper in the journal Additive Manufacturing, involves implanting metals with ceramic nanoparticles. The net result is a metal matrix composite that is an ideal candidate for fusion devices, especially for the vacuum vessel component, which must be able to withstand high temperatures, extremely corrosive molten salts, and internal helium gas from nuclear transmutation.

    O’Brien’s work focuses on nickel superalloys like Inconel 718, which are especially robust candidates because they can withstand higher operating temperatures while retaining strength. Helium embrittlement, where bubbles of helium caused by fusion neutrons lead to weakness and failure, is a problem with Inconel 718, but composites exhibit potential to overcome this challenge.

    To create the composites, a mechanical milling process first coats the metal particles with the ceramic. The ceramic nanoparticles act as reinforcing strength agents, especially at high temperatures, and make the material last longer. When uniformly dispersed, the nanoparticles also absorb helium and radiation defects, preventing these damage agents from reaching the grain boundaries.

    The composite then goes through a 3D printing process called powder bed fusion (non-nuclear fusion), where a laser passes over a bed of this powder, melting it into desired shapes. “By coating these particles with the ceramic and then only melting very specific regions, we keep the ceramics in the areas that we want, and then you can build up and have a uniform structure,” O’Brien says.

    Printing an exciting future

    The 3D printing of nuclear materials exhibits such promise that O’Brien is looking at pursuing the prospect after his doctoral studies. “The concept of these metal matrix composites and how they can enhance material property is really interesting,” he says. Scaling it up commercially through a startup company is on his radar.

    For now, O’Brien is enjoying research and catching an occasional Broadway show with his wife. While the band nerd doesn’t pick up his saxophone much anymore, he does enjoy driving up to New Hampshire and going backpacking. “That’s my newfound hobby,” O’Brien says, “since I started grad school.”

  • New tools are available to help reduce the energy that AI models devour

    When searching for flights on Google, you may have noticed that each flight’s carbon-emission estimate is now presented next to its cost. It’s a way to inform customers about their environmental impact, and to let them factor this information into their decision-making.

    A similar kind of transparency doesn’t yet exist for the computing industry, despite its carbon emissions exceeding those of the entire airline industry. Escalating this energy demand are artificial intelligence models. Huge, popular models like ChatGPT signal a trend of large-scale artificial intelligence, boosting forecasts that predict data centers will draw up to 21 percent of the world’s electricity supply by 2030.

    The MIT Lincoln Laboratory Supercomputing Center (LLSC) is developing techniques to help data centers reel in energy use. Their techniques range from simple but effective changes, like power-capping hardware, to novel tools that can stop AI training early. Crucially, they have found that these techniques have a minimal impact on model performance.

    In the wider picture, their work is mobilizing green-computing research and promoting a culture of transparency. “Energy-aware computing is not really a research area, because everyone’s been holding on to their data,” says Vijay Gadepally, senior staff in the LLSC who leads energy-aware research efforts. “Somebody has to start, and we’re hoping others will follow.”

    Curbing power and cooling down

    Like many data centers, the LLSC has seen a significant uptick in the number of AI jobs running on its hardware. Noticing an increase in energy usage, computer scientists at the LLSC were curious about ways to run jobs more efficiently. Green computing is a principle of the center, which is powered entirely by carbon-free energy.

    Training an AI model — the process by which it learns patterns from huge datasets — requires using graphics processing units (GPUs), which are power-hungry hardware. As one example, the GPUs that trained GPT-3 (the precursor to ChatGPT) are estimated to have consumed 1,300 megawatt-hours of electricity, roughly equal to that used by 1,450 average U.S. households per month.

    While most people seek out GPUs because of their computational power, manufacturers offer ways to limit the amount of power a GPU is allowed to draw. “We studied the effects of capping power and found that we could reduce energy consumption by about 12 percent to 15 percent, depending on the model,” Siddharth Samsi, a researcher within the LLSC, says.

    The trade-off for capping power is increasing task time — GPUs will take about 3 percent longer to complete a task, an increase Gadepally says is “barely noticeable” considering that models are often trained over days or even months. In one of their experiments in which they trained the popular BERT language model, limiting GPU power to 150 watts saw a two-hour increase in training time (from 80 to 82 hours) but saved the equivalent of a U.S. household’s week of energy.
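
    The power cap itself relies on a stock vendor control. The sketch below shows the general idea using NVIDIA’s nvidia-smi tool called from Python; it is not the LLSC’s Slurm integration (described next), setting a limit typically requires administrator privileges, and the 150-watt figure is simply the value quoted in the BERT experiment above.

    ```python
    # Illustrative sketch: cap a GPU's board power with nvidia-smi and read back
    # the current draw and limit. Not the LLSC tooling; requires admin rights.
    import subprocess

    def set_power_cap(gpu_index: int, watts: int) -> None:
        """Set the power limit (in watts) for one GPU."""
        subprocess.run(["nvidia-smi", "-i", str(gpu_index), "-pl", str(watts)], check=True)

    def report_power() -> str:
        """Return each GPU's current draw and configured limit as CSV text."""
        result = subprocess.run(
            ["nvidia-smi", "--query-gpu=index,power.draw,power.limit", "--format=csv"],
            check=True, capture_output=True, text=True,
        )
        return result.stdout

    if __name__ == "__main__":
        set_power_cap(0, 150)   # e.g., the 150 W cap used in the BERT experiment
        print(report_power())
    ```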

    The team then built software that plugs this power-capping capability into the widely used scheduler system, Slurm. The software lets data center owners set limits across their system or on a job-by-job basis.

    “We can deploy this intervention today, and we’ve done so across all our systems,” Gadepally says.

    Side benefits have arisen, too. Since putting power constraints in place, the GPUs on LLSC supercomputers have been running about 30 degrees Fahrenheit cooler and at a more consistent temperature, reducing stress on the cooling system. Running the hardware cooler can potentially also increase reliability and service lifetime. They can now consider delaying the purchase of new hardware — reducing the center’s “embodied carbon,” or the emissions created through the manufacturing of equipment — until the efficiencies gained by using new hardware offset this aspect of the carbon footprint. They’re also finding ways to cut down on cooling needs by strategically scheduling jobs to run at night and during the winter months.

    “Data centers can use these easy-to-implement approaches today to increase efficiencies, without requiring modifications to code or infrastructure,” Gadepally says.

    Taking this holistic look at a data center’s operations to find opportunities to cut down can be time-intensive. To make this process easier for others, the team — in collaboration with Professor Devesh Tiwari and Baolin Li at Northeastern University — recently developed and published a comprehensive framework for analyzing the carbon footprint of high-performance computing systems. System practitioners can use this analysis framework to gain a better understanding of how sustainable their current system is and consider changes for next-generation systems.  

    Adjusting how models are trained and used

    On top of making adjustments to data center operations, the team is devising ways to make AI-model development more efficient.

    When training models, AI developers often focus on improving accuracy, and they build upon previous models as a starting point. To achieve the desired output, they have to figure out what parameters to use, and getting it right can take testing thousands of configurations. This process, called hyperparameter optimization, is one area LLSC researchers have found ripe for cutting down energy waste. 

    “We’ve developed a model that basically looks at the rate at which a given configuration is learning,” Gadepally says. Given that rate, their model predicts the likely performance. Underperforming models are stopped early. “We can give you a very accurate estimate early on that the best model will be in this top 10 of 100 models running,” he says.
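
    The article does not publish the LLSC model itself; as a hedged illustration of the idea, the sketch below fits a simple “accuracy versus 1/epoch” trend to each configuration’s partial learning curve, extrapolates a plateau, and keeps only the configurations predicted to land in the top group.

    ```python
    # Illustrative early-stopping sketch (not the LLSC model): extrapolate each
    # configuration's validation-accuracy plateau from its first few epochs and
    # stop the runs that are unlikely to finish among the best.
    import numpy as np

    def predicted_plateau(val_acc):
        """Fit acc ~ a + b/epoch to the epochs seen so far; 'a' is the predicted plateau."""
        epochs = np.arange(1, len(val_acc) + 1)
        b, a = np.polyfit(1.0 / epochs, np.asarray(val_acc, dtype=float), deg=1)
        return a

    def configs_to_keep(partial_curves, keep=10):
        """partial_curves maps config name -> list of validation accuracies so far."""
        scores = {name: predicted_plateau(curve) for name, curve in partial_curves.items()}
        ranked = sorted(scores, key=scores.get, reverse=True)
        return set(ranked[:keep])

    if __name__ == "__main__":
        # Two toy configurations observed for five epochs each.
        curves = {
            "config-A": [0.60, 0.70, 0.74, 0.76, 0.77],
            "config-B": [0.50, 0.55, 0.57, 0.58, 0.59],
        }
        print(configs_to_keep(curves, keep=1))   # -> {'config-A'}
    ```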

    In their studies, this early stopping led to dramatic savings: an 80 percent reduction in the energy used for model training. They’ve applied this technique to models developed for computer vision, natural language processing, and material design applications.

    “In my opinion, this technique has the biggest potential for advancing the way AI models are trained,” Gadepally says.

    Training is just one part of an AI model’s emissions. The largest contributor to emissions over time is model inference, or the process of running the model live, like when a user chats with ChatGPT. To respond quickly, these models use redundant hardware, running all the time, waiting for a user to ask a question.

    One way to improve inference efficiency is to use the most appropriate hardware. Also with Northeastern University, the team created an optimizer that matches a model with the most carbon-efficient mix of hardware, such as high-power GPUs for the computationally intense parts of inference and low-power central processing units (CPUs) for the less-demanding aspects. This work recently won the best paper award at the International ACM Symposium on High-Performance Parallel and Distributed Computing.

    Using this optimizer can decrease energy use by 10-20 percent while still meeting the same “quality-of-service target” (how quickly the model can respond).
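
    A stripped-down version of that matching problem can be written as a small search: enumerate placements of the heavy and light phases of inference onto candidate hardware, discard any that miss the latency target, and keep the lowest-energy survivor. The per-request latency and energy numbers below are invented placeholders, not measurements from the paper.

    ```python
    # Toy hardware-matching sketch (not the published optimizer): pick the
    # lowest-energy placement of inference phases that still meets the QoS target.
    from itertools import product

    PHASES = ["heavy", "light"]          # compute-intense vs. less-demanding work
    HARDWARE = {                         # name: {phase: (latency_ms, energy_joules)}
        "high-power GPU": {"heavy": (20, 8.0),  "light": (5, 2.0)},
        "low-power GPU":  {"heavy": (45, 4.0),  "light": (9, 1.0)},
        "CPU":            {"heavy": (300, 6.0), "light": (15, 0.5)},
    }

    def best_assignment(latency_target_ms):
        """Return the (phase -> hardware) map with the least energy under the target."""
        best, best_energy = None, float("inf")
        for combo in product(HARDWARE, repeat=len(PHASES)):
            latency = sum(HARDWARE[hw][ph][0] for ph, hw in zip(PHASES, combo))
            energy = sum(HARDWARE[hw][ph][1] for ph, hw in zip(PHASES, combo))
            if latency <= latency_target_ms and energy < best_energy:
                best, best_energy = dict(zip(PHASES, combo)), energy
        return best, best_energy

    if __name__ == "__main__":
        print(best_assignment(latency_target_ms=60))
        # e.g. ({'heavy': 'low-power GPU', 'light': 'CPU'}, 4.5)
    ```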

    This tool is especially helpful for cloud customers, who lease systems from data centers and must select hardware from among thousands of options. “Most customers overestimate what they need; they choose over-capable hardware just because they don’t know any better,” Gadepally says.

    Growing green-computing awareness

    The energy saved by implementing these interventions also reduces the associated costs of developing AI, often by a one-to-one ratio. In fact, cost is usually used as a proxy for energy consumption. Given these savings, why aren’t more data centers investing in green techniques?

    “I think it’s a bit of an incentive-misalignment problem,” Samsi says. “There’s been such a race to build bigger and better models that almost every secondary consideration has been put aside.”

    They point out that while some data centers buy renewable-energy credits, these renewables aren’t enough to cover the growing energy demands. The majority of electricity powering data centers comes from fossil fuels, and water used for cooling is contributing to stressed watersheds. 

    Hesitancy may also exist because systematic studies on energy-saving techniques haven’t been conducted. That’s why the team has been pushing their research in peer-reviewed venues in addition to open-source repositories. Some big industry players, like Google DeepMind, have applied machine learning to increase data center efficiency but have not made their work available for others to deploy or replicate. 

    Top AI conferences are now pushing for ethics statements that consider how AI could be misused. The team sees the climate aspect as an AI ethics topic that has not yet been given much attention, but this also appears to be slowly changing. Some researchers are now disclosing the carbon footprint of training the latest models, and industry is showing a shift in energy transparency too, as in this recent report from Meta AI.

    They also acknowledge that transparency is difficult without tools that can show AI developers their consumption. Reporting is on the LLSC roadmap for this year. They want to be able to show every LLSC user, for every job, how much energy they consume and how this amount compares to others, similar to home energy reports.

    Part of this effort requires working more closely with hardware manufacturers to make getting these data off hardware easier and more accurate. If manufacturers can standardize the way the data are read out, then energy-saving and reporting tools can be applied across different hardware platforms. A collaboration is underway between the LLSC researchers and Intel to work on this very problem.

    Even AI developers who are aware of AI’s intense energy needs can’t do much on their own to curb this energy use. The LLSC team wants to help other data centers apply these interventions and provide users with energy-aware options. Their first partnership is with the U.S. Air Force, a sponsor of this research, which operates thousands of data centers. Applying these techniques can make a significant dent in their energy consumption and cost.

    “We’re putting control into the hands of AI developers who want to lessen their footprint,” Gadepally says. “Do I really need to gratuitously train unpromising models? Am I willing to run my GPUs slower to save energy? To our knowledge, no other supercomputing center is letting you consider these options. Using our tools, today, you get to decide.”

    Visit this webpage to see the group’s publications related to energy-aware computing and findings described in this article.

  • Desalination system could produce freshwater that is cheaper than tap water

    Engineers at MIT and in China are aiming to turn seawater into drinking water with a completely passive device that is inspired by the ocean, and powered by the sun.

    In a paper appearing today in the journal Joule, the team outlines the design for a new solar desalination system that takes in saltwater and heats it with natural sunlight.

    The configuration of the device allows water to circulate in swirling eddies, in a manner similar to the much larger “thermohaline” circulation of the ocean. This circulation, combined with the sun’s heat, drives water to evaporate, leaving salt behind. The resulting water vapor can then be condensed and collected as pure, drinkable water. In the meantime, the leftover salt continues to circulate through and out of the device, rather than accumulating and clogging the system.

    The new system has a higher water-production rate and a higher salt-rejection rate than all other passive solar desalination concepts currently being tested.

    The researchers estimate that if the system is scaled up to the size of a small suitcase, it could produce about 4 to 6 liters of drinking water per hour and last several years before requiring replacement parts. At this scale and performance, the system could produce drinking water at a rate and price that is cheaper than tap water.

    “For the first time, it is possible for water, produced by sunlight, to be even cheaper than tap water,” says Lenan Zhang, a research scientist in MIT’s Device Research Laboratory.

    The team envisions that a scaled-up device could passively produce enough drinking water to meet the daily requirements of a small family. The system could also supply off-grid, coastal communities where seawater is easily accessible.

    Zhang’s study co-authors include MIT graduate student Yang Zhong and Evelyn Wang, the Ford Professor of Engineering, along with Jintong Gao, Jinfang You, Zhanyu Ye, Ruzhu Wang, and Zhenyuan Xu of Shanghai Jiao Tong University in China.

    A powerful convection

    The team’s new system improves on their previous design — a similar concept of multiple layers, called stages. Each stage contained an evaporator and a condenser that used heat from the sun to passively separate salt from incoming water. That design, which the team tested on the roof of an MIT building, efficiently converted the sun’s energy to evaporate water, which was then condensed into drinkable water. But the salt that was left over quickly accumulated as crystals that clogged the system after a few days. In a real-world setting, a user would have to replace the stages frequently, which would significantly increase the system’s overall cost.

    In a follow-up effort, they devised a solution with a similar layered configuration, this time with an added feature that helped to circulate the incoming water as well as any leftover salt. While this design prevented salt from settling and accumulating on the device, it desalinated water at a relatively low rate.

    In the latest iteration, the team believes it has landed on a design that achieves both a high water-production rate and high salt rejection, meaning that the system can quickly and reliably produce drinking water for an extended period. The key to their new design is a combination of their two previous concepts: a multistage system of evaporators and condensers that is also configured to boost the circulation of water — and salt — within each stage.

    “We introduce now an even more powerful convection, that is similar to what we typically see in the ocean, at kilometer-long scales,” Xu says.

    The small circulations generated in the team’s new system are similar to the “thermohaline” convection in the ocean — a phenomenon that drives the movement of water around the world, based on differences in sea temperature (“thermo”) and salinity (“haline”).

    “When seawater is exposed to air, sunlight drives water to evaporate. Once water leaves the surface, salt remains. And the higher the salt concentration, the denser the liquid, and this heavier water wants to flow downward,” Zhang explains. “By mimicking this kilometer-wide phenomena in small box, we can take advantage of this feature to reject salt.”

    Tapping out

    The heart of the team’s new design is a single stage that resembles a thin box, topped with a dark material that efficiently absorbs the heat of the sun. Inside, the box is separated into a top and bottom section. Water can flow through the top half, where the ceiling is lined with an evaporator layer that uses the sun’s heat to warm up and evaporate any water in direct contact. The water vapor is then funneled to the bottom half of the box, where a condensing layer air-cools the vapor into salt-free, drinkable liquid. The researchers set the entire box at a tilt within a larger, empty vessel, then attached a tube from the top half of the box down through the bottom of the vessel, and floated the vessel in saltwater.

    In this configuration, water can naturally push up through the tube and into the box, where the tilt of the box, combined with the thermal energy from the sun, induces the water to swirl as it flows through. The small eddies help to bring water in contact with the upper evaporating layer while keeping salt circulating, rather than settling and clogging.

    The team built several prototypes, with one, three, and 10 stages, and tested their performance in water of varying salinity, including natural seawater and water that was seven times saltier.

    From these tests, the researchers calculated that if each stage were scaled up to a square meter, it would produce up to 5 liters of drinking water per hour, and that the system could desalinate water without accumulating salt for several years. Given this extended lifetime, and the fact that the system is entirely passive, requiring no electricity to run, the team estimates that the overall cost of running the system would be cheaper than what it costs to produce tap water in the United States.

    “We show that this device is capable of achieving a long lifetime,” Zhong says. “That means that, for the first time, it is possible for drinking water produced by sunlight to be cheaper than tap water. This opens up the possibility for solar desalination to address real-world problems.”

    “This is a very innovative approach that effectively mitigates key challenges in the field of desalination,” says Guihua Yu, who develops sustainable water and energy storage systems at the University of Texas at Austin, and was not involved in the research. “The design is particularly beneficial for regions struggling with high-salinity water. Its modular design makes it highly suitable for household water production, allowing for scalability and adaptability to meet individual needs.”

    The research at Shanghai Jiao Tong University was supported by the Natural Science Foundation of China.

  • Tracking US progress on the path to a decarbonized economy

    Investments in new technologies and infrastructure that help reduce greenhouse gas emissions — everything from electric vehicles to heat pumps — are growing rapidly in the United States. Now, a new database enables these investments to be comprehensively monitored in real time, thereby helping to assess the efficacy of policies designed to spur clean investments and address climate change.

    The Clean Investment Monitor (CIM), developed by a team at MIT’s Center for Energy and Environmental Policy Research (CEEPR) led by Institute Innovation Fellow Brian Deese and in collaboration with the Rhodium Group, an independent research firm, provides a timely and methodologically consistent tracking of all announced public and private investments in the manufacture and deployment of clean technologies and infrastructure in the U.S. The CIM offers a means of assessing the country’s progress in transitioning to a cleaner economy and reducing greenhouse gas emissions.

    In the year from July 1, 2022, to June 30, 2023, data from the CIM show, clean investments nationwide totaled $213 billion. To put that figure in perspective, 18 states in the U.S. have GDPs each lower than $213 billion.

    “As clean technology becomes a larger and larger sector in the United States, its growth will have far-reaching implications — for our economy, for our leadership in innovation, and for reducing our greenhouse gas emissions,” says Deese, who served as the director of the White House National Economic Council from January 2021 to February 2023. “The Clean Investment Monitor is a tool designed to help us understand and assess this growth in a real-time, comprehensive way. Our hope is that the CIM will enhance research and improve public policies designed to accelerate the clean energy transition.”

    Launched on Sept. 13, the CIM shows that the $213 billion invested over the last year reflects a 37 percent increase from the $155 billion invested in the previous 12-month period. According to CIM data, the fastest growth has been in the manufacturing sector, where investment grew 125 percent year-on-year, particularly in electric vehicle and solar manufacturing.

    Beyond manufacturing, the CIM also provides data on investment in clean energy production, such as solar, wind, and nuclear; industrial decarbonization, such as sustainable aviation fuels; and retail investments by households and businesses in technologies like heat pumps and zero-emission vehicles. The CIM’s data goes back to 2018, providing a baseline before the passage of the legislation in 2021 and 2022.

    “We’re really excited to bring MIT’s analytical rigor to bear to help develop the Clean Investment Monitor,” says Christopher Knittel, the George P. Shultz Professor of Energy Economics at the MIT Sloan School of Management and CEEPR’s faculty director. “Bolstered by Brian’s keen understanding of the policy world, this tool is poised to become the go-to reference for anyone looking to understand clean investment flows and what drives them.”

    In 2021 and 2022, the U.S. federal government enacted a series of new laws that together aimed to catalyze the largest-ever national investment in clean energy technologies and related infrastructure. The Clean Investment Monitor can also be used to track how well the legislation is living up to expectations.

    The three pieces of federal legislation — the Infrastructure Investment and Jobs Act, enacted in 2021, and the Inflation Reduction Act (IRA) and the CHIPS and Science Act, both enacted in 2022 — provide grants, loans, loan guarantees, and tax incentives to spur investments in technologies that reduce greenhouse gas emissions.

    The effectiveness of the legislation in hastening the U.S. transition to a clean economy will be crucial in determining whether the country reaches its goal of reducing greenhouse gas emissions by 50 percent to 52 percent below 2005 levels in 2030. An analysis earlier this year estimated that the IRA will lead to a 43 percent to 48 percent decline in economywide emissions below 2005 levels by 2035, compared with 27 percent to 35 percent in a reference scenario without the law’s provisions, helping bring the U.S. goal closer within reach.

    The Clean Investment Monitor is available at cleaninvestmentmonitor.org.

  • Pixel-by-pixel analysis yields insights into lithium-ion batteries

    By mining data from X-ray images, researchers at MIT, Stanford University, the SLAC National Accelerator Laboratory, and the Toyota Research Institute have made significant new discoveries about the reactivity of lithium iron phosphate, a material used in batteries for electric cars and in other rechargeable batteries.

    The new technique has revealed several phenomena that were previously impossible to see, including variations in the rate of lithium intercalation reactions in different regions of a lithium iron phosphate nanoparticle.

    The paper’s most significant practical finding — that these variations in reaction rate are correlated with differences in the thickness of the carbon coating on the surface of the particles — could lead to improvements in the efficiency of charging and discharging such batteries.

    “What we learned from this study is that it’s the interfaces that really control the dynamics of the battery, especially in today’s modern batteries made from nanoparticles of the active material. That means that our focus should really be on engineering that interface,” says Martin Bazant, the E.G. Roos Professor of Chemical Engineering and a professor of mathematics at MIT, who is the senior author of the study.

    This approach to discovering the physics behind complex patterns in images could also be used to gain insights into many other materials, not only other types of batteries but also biological systems, such as dividing cells in a developing embryo.

    “What I find most exciting about this work is the ability to take images of a system that’s undergoing the formation of some pattern, and learning the principles that govern that,” Bazant says.

    Hongbo Zhao PhD ’21, a former MIT graduate student who is now a postdoc at Princeton University, is the lead author of the new study, which appears today in Nature. Other authors include Richard Braatz, the Edwin R. Gilliland Professor of Chemical Engineering at MIT; William Chueh, an associate professor of materials science and engineering at Stanford and director of the SLAC-Stanford Battery Center; and Brian Storey, senior director of Energy and Materials at the Toyota Research Institute.

    “Until now, we could make these beautiful X-ray movies of battery nanoparticles at work, but it was challenging to measure and understand subtle details of how they function because the movies were so information-rich,” Chueh says. “By applying image learning to these nanoscale movies, we can extract insights that were not previously possible.”

    Modeling reaction rates

    Lithium iron phosphate battery electrodes are made of many tiny particles of lithium iron phosphate, surrounded by an electrolyte solution. A typical particle is about 1 micron in diameter and about 100 nanometers thick. When the battery discharges, lithium ions flow from the electrolyte solution into the material by an electrochemical reaction known as ion intercalation. When the battery charges, the intercalation reaction is reversed, and ions flow in the opposite direction.
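
    For lithium iron phosphate in particular, the textbook intercalation reaction at the particle surface (standard electrode chemistry, not a result of this study) is:

    $$\mathrm{FePO_4} + \mathrm{Li^+} + e^- \;\underset{\text{charge}}{\overset{\text{discharge}}{\rightleftharpoons}}\; \mathrm{LiFePO_4}$$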

    “Lithium iron phosphate (LFP) is an important battery material due to low cost, a good safety record, and its use of abundant elements,” Storey says. “We are seeing an increased use of LFP in the EV market, so the timing of this study could not be better.”

    Before the current study, Bazant had done a great deal of theoretical modeling of patterns formed by lithium-ion intercalation. Lithium iron phosphate prefers to exist in one of two stable phases: either full of lithium ions or empty. Since 2005, Bazant has been working on mathematical models of this phenomenon, known as phase separation, which generates distinctive patterns of lithium-ion flow driven by intercalation reactions. In 2015, while on sabbatical at Stanford, he began working with Chueh to try to interpret images of lithium iron phosphate particles from scanning transmission X-ray microscopy.

    Using this type of microscopy, the researchers can obtain images that reveal the concentration of lithium ions, pixel-by-pixel, at every point in the particle. They can scan the particles several times as the particles charge or discharge, allowing them to create movies of how lithium ions flow in and out of the particles.

    In 2017, Bazant and his colleagues at SLAC received funding from the Toyota Research Institute to pursue further studies using this approach, along with other battery-related research projects.

    By analyzing X-ray images of 63 lithium iron phosphate particles as they charged and discharged, the researchers found that the movement of lithium ions within the material was nearly identical to the computer simulations that Bazant had created earlier. Using all 180,000 pixels as measurements, the researchers trained the computational model to produce equations that accurately describe the nonequilibrium thermodynamics and reaction kinetics of the battery material.

    “Every little pixel in there is jumping from full to empty, full to empty. And we’re mapping that whole process, using our equations to understand how that’s happening,” Bazant says.

    The researchers also found that the patterns of lithium-ion flow that they observed could reveal spatial variations in the rate at which lithium ions are absorbed at each location on the particle surface.

    “It was a real surprise to us that we could learn the heterogeneities in the system — in this case, the variations in surface reaction rate — simply by looking at the images,” Bazant says. “There are regions that seem to be fast and others that seem to be slow.”
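
    As a hedged illustration of what learning heterogeneity from the images can look like in its simplest form, the sketch below fits a single rate constant per pixel to an assumed rate law dc/dt = k·c(1 - c), where c is the local lithium fraction. The rate law and the data shapes are placeholders; the study’s actual model learns far richer nonequilibrium kinetics.

    ```python
    # Illustrative per-pixel rate fitting (a placeholder rate law, not the study's
    # learned model): estimate a local rate constant k from each pixel's lithium
    # fraction over time, producing a map of fast and slow surface regions.
    import numpy as np

    def fit_rate_constant(c_series, dt, f=lambda c: c * (1.0 - c)):
        """Least-squares fit of k in dc/dt = k * f(c) for one pixel's time series."""
        c = np.asarray(c_series, dtype=float)
        dcdt = np.gradient(c, dt)
        basis = f(c)
        denom = float(np.dot(basis, basis))
        return float(np.dot(basis, dcdt) / denom) if denom > 0 else 0.0

    def rate_map(movie, dt):
        """movie: array of shape (T, H, W), lithium fraction per pixel over time."""
        _, height, width = movie.shape
        k = np.zeros((height, width))
        for i in range(height):
            for j in range(width):
                k[i, j] = fit_rate_constant(movie[:, i, j], dt)
        return k
    ```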

    Furthermore, the researchers showed that these differences in reaction rate were correlated with the thickness of the carbon coating on the surface of the lithium iron phosphate particles. That carbon coating is applied to lithium iron phosphate to help it conduct electricity — otherwise the material would conduct too slowly to be useful as a battery.

    “We discovered at the nano scale that variation of the carbon coating thickness directly controls the rate, which is something you could never figure out if you didn’t have all of this modeling and image analysis,” Bazant says.

    The findings also offer quantitative support for a hypothesis Bazant formulated several years ago: that the performance of lithium iron phosphate electrodes is limited primarily by the rate of coupled ion-electron transfer at the interface between the solid particle and the carbon coating, rather than the rate of lithium-ion diffusion in the solid.

    Optimized materials

    The results from this study suggest that optimizing the thickness of the carbon layer on the electrode surface could help researchers to design batteries that would work more efficiently, the researchers say.

    “This is the first study that’s been able to directly attribute a property of the battery material with a physical property of the coating,” Bazant says. “The focus for optimizing and designing batteries should be on controlling reaction kinetics at the interface of the electrolyte and electrode.”

    “This publication is the culmination of six years of dedication and collaboration,” Storey says. “This technique allows us to unlock the inner workings of the battery in a way not previously possible. Our next goal is to improve battery design by applying this new understanding.”  

    In addition to using this type of analysis on other battery materials, Bazant anticipates that it could be useful for studying pattern formation in other chemical and biological systems.

    This work was supported by the Toyota Research Institute through the Accelerated Materials Design and Discovery program.

  • Jackson Jewett wants to design buildings that use less concrete

    After three years leading biking tours through U.S. National Parks, Jackson Jewett decided it was time for a change.

    “It was a lot of fun, but I realized I missed buildings,” says Jewett. “I really wanted to be a part of that industry, learn more about it, and reconnect with my roots in the built environment.”

    Jewett grew up in California in what he describes as a “very creative household.”

    “I remember making very elaborate Halloween costumes with my parents, making fun dioramas for school projects, and building forts in the backyard, that kind of thing,” Jewett explains.

    Both of his parents have backgrounds in design; his mother studied art in college and his father is a practicing architect. From a young age, Jewett was interested in following in his father’s footsteps. But when he arrived at the University of California at Berkeley in the midst of the 2009 housing crash, it didn’t seem like the right time. Jewett graduated with a degree in cognitive science and a minor in history of architecture. And even as he led tours through Yellowstone, the Grand Canyon, and other parks, buildings were in the back of his mind.

    It wasn’t just the built environment that Jewett was missing. He also longed for the rigor and structure of an academic environment.

    Jewett arrived at MIT in 2017, initially only planning on completing the master’s program in civil and environmental engineering. It was then that he first met Josephine Carstensen, a newly hired lecturer in the department. Jewett was interested in Carstensen’s work on “topology optimization,” which uses algorithms to design structures that can achieve their performance requirements while using only a limited amount of material. He was particularly interested in applying this approach to concrete design, and he collaborated with Carstensen to help demonstrate its viability.

    After earning his master’s, Jewett spent a year and a half as a structural engineer in New York City. But when Carstensen was hired as a professor, she reached out to Jewett about joining her lab as a PhD student. He was ready for another change.

    Now in the third year of his PhD program, Jewett’s dissertation work builds upon his master’s thesis to further refine algorithms that can design building-scale concrete structures that use less material, which would help lower carbon emissions from the construction industry. It is estimated that the concrete industry alone is responsible for 8 percent of global carbon emissions, so any efforts to reduce that number could help in the fight against climate change.

    Implementing new ideas

    Topology optimization is a small field, with the bulk of the prior work being computational without any experimental verification. The work Jewett completed for his master’s thesis was just the start of a long learning process.

    “I do feel like I’m just getting to the part where I can start implementing my own ideas without as much support as I’ve needed in the past,” says Jewett. “In the last couple of months, I’ve been working on a reinforced concrete optimization algorithm that I hope will be the cornerstone of my thesis.”

    The process of fine-tuning a generative algorithm is slow going, particularly when tackling a multifaceted problem.

    “It can take days or usually weeks to take a step toward making it work as an entire integrated system,” says Jewett. “The days when that breakthrough happens and I can see the algorithm converging on a solution that makes sense — those are really exciting moments.”

    By harnessing computational power, Jewett is searching for materially efficient components that can be used to make up structures such as bridges or buildings. There are other constraints to consider as well, particularly ensuring that the cost of manufacturing isn’t too high. Having worked in the industry before starting the PhD program, Jewett has an eye toward doing work that can be feasibly implemented.

    Inspiring others

    When Jewett first visited MIT campus, he was drawn in by the collaborative environment of the institute and the students’ drive to learn. Now, he’s a part of that process as a teaching assistant and a supervisor in the Undergraduate Research Opportunities Program.  

    Working as a teaching assistant isn’t a requirement for Jewett’s program, but it’s been one of his favorite parts of his time at MIT.

    “The MIT undergrads are so gifted and just constantly impress me,” says Jewett. “Being able to teach, especially in the context of what MIT values, is a lot of fun. And I learn, too. My coding practices have gotten so much better since working with undergrads here.”

    Jewett’s experiences have inspired him to pursue a career in academia after the completion of his program, which he expects to complete in the spring of 2025. But he’s making sure to take care of himself along the way. He still finds time to plan cycling trips with his friends and has gotten into running ever since moving to Boston. So far, he’s completed two marathons.

    “It’s so inspiring to be in a place where so many good ideas are just bouncing back and forth all over campus,” says Jewett. “And on most days, I remember that and it inspires me. But it’s also the case that academics is hard, PhD programs are hard, and MIT — there’s pressure being here, and sometimes that pressure can feel like it’s working against you.”

    Jewett is grateful for the mental health resources that MIT provides students. While he says they can be imperfect, they’ve been a crucial part of his journey.

    “My PhD thesis will be done in 2025, but the work won’t be done. The time horizon of when these things need to be implemented is relatively short if we want to make an impact before global temperatures have already risen too high. My PhD research will be developing a framework for how that could be done with concrete construction, but I’d like to keep thinking about other materials and construction methods even after this project is finished.”

  • Study suggests energy-efficient route to capturing and converting CO2

    In the race to draw down greenhouse gas emissions around the world, scientists at MIT are looking to carbon-capture technologies to decarbonize the most stubborn industrial emitters.

    Steel, cement, and chemical manufacturing are especially difficult industries to decarbonize, as carbon and fossil fuels are inherent ingredients in their production. Technologies that can capture carbon emissions and convert them into forms that feed back into the production process could help to reduce the overall emissions from these “hard-to-abate” sectors.

    But thus far, experimental technologies that capture and convert carbon dioxide do so as two separate processes, which themselves require a huge amount of energy to run. The MIT team is looking to combine the two processes into one integrated and far more energy-efficient system that could potentially run on renewable energy to both capture and convert carbon dioxide from concentrated, industrial sources.

    In a study appearing today in ACS Catalysis, the researchers reveal the mechanism by which carbon dioxide can be both captured and converted through a single electrochemical process. The process involves using an electrode to attract carbon dioxide released from a sorbent, and to convert it into a reduced, reusable form.

    Others have reported similar demonstrations, but the mechanisms driving the electrochemical reaction have remained unclear. The MIT team carried out extensive experiments to determine that driver, and found that, in the end, it came down to the partial pressure of carbon dioxide. In other words, the more pure carbon dioxide that makes contact with the electrode, the more efficiently the electrode can capture and convert the molecule.

    Knowledge of this main driver, or “active species,” can help scientists tune and optimize similar electrochemical systems to efficiently capture and convert carbon dioxide in an integrated process.

    The study’s results imply that, while these electrochemical systems would probably not work for very dilute environments (for instance, to capture and convert carbon emissions directly from the air), they would be well-suited to the highly concentrated emissions generated by industrial processes, particularly those that have no obvious renewable alternative.

    “We can and should switch to renewables for electricity production. But deeply decarbonizing industries like cement or steel production is challenging and will take a longer time,” says study author Betar Gallant, the Class of 1922 Career Development Associate Professor at MIT. “Even if we get rid of all our power plants, we need some solutions to deal with the emissions from other industries in the shorter term, before we can fully decarbonize them. That’s where we see a sweet spot, where something like this system could fit.”

    The study’s MIT co-authors are lead author and postdoc Graham Leverick and graduate student Elizabeth Bernhardt, along with Aisyah Illyani Ismail, Jun Hui Law, Arif Arifutzzaman, and Mohamed Kheireddine Aroua of Sunway University in Malaysia.

    Breaking bonds

    Carbon-capture technologies are designed to capture emissions, or “flue gas,” from the smokestacks of power plants and manufacturing facilities. This is done primarily using large retrofits to funnel emissions into chambers filled with a “capture” solution — a mix of amines, or ammonia-based compounds, that chemically bind with carbon dioxide, producing a stable form that can be separated out from the rest of the flue gas.

    High temperatures are then applied, typically in the form of fossil-fuel-generated steam, to release the captured carbon dioxide from its amine bond. In its pure form, the gas can then be pumped into storage tanks or underground, mineralized, or further converted into chemicals or fuels.
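
    For a generic primary amine (written RNH2 below; the article does not specify the capture solvent), the capture step is commonly written as carbamate formation, and heating simply pushes the equilibrium back to release the CO2:

    $$\mathrm{CO_2} + 2\,\mathrm{RNH_2} \;\rightleftharpoons\; \mathrm{RNHCOO^-} + \mathrm{RNH_3^+}$$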

    “Carbon capture is a mature technology, in that the chemistry has been known for about 100 years, but it requires really large installations, and is quite expensive and energy-intensive to run,” Gallant notes. “What we want are technologies that are more modular and flexible and can be adapted to more diverse sources of carbon dioxide. Electrochemical systems can help to address that.”

    Her group at MIT is developing an electrochemical system that both recovers the captured carbon dioxide and converts it into a reduced, usable product. Such an integrated system, rather than a decoupled one, she says, could be entirely powered with renewable electricity rather than fossil-fuel-derived steam.

    Their concept centers on an electrode that would fit into existing chambers of carbon-capture solutions. When a voltage is applied to the electrode, electrons flow onto the reactive form of carbon dioxide and convert it to a product using protons supplied from water. This makes the sorbent available to bind more carbon dioxide, rather than using steam to do the same.

    Gallant previously demonstrated this electrochemical process could work to capture and convert carbon dioxide into a solid carbonate form.

    “We showed that this electrochemical process was feasible in very early concepts,” she says. “Since then, there have been other studies focused on using this process to attempt to produce useful chemicals and fuels. But there’s been inconsistent explanations of how these reactions work, under the hood.”

    Solo CO2

    In the new study, the MIT team took a magnifying glass under the hood to tease out the specific reactions driving the electrochemical process. In the lab, they generated amine solutions that resemble the industrial capture solutions used to extract carbon dioxide from flue gas. They methodically altered various properties of each solution, such as the pH, concentration, and type of amine, then ran each solution past an electrode made from silver — a metal that is widely used in electrolysis studies and known to efficiently convert carbon dioxide to carbon monoxide. They then measured the concentration of carbon monoxide that was converted at the end of the reaction, and compared this number against that of every other solution they tested, to see which parameter had the most influence on how much carbon monoxide was produced.
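
    Consistent with the article’s description of silver converting carbon dioxide to carbon monoxide, the relevant half-reaction is the commonly reported two-electron reduction, with the protons supplied by water as noted above:

    $$\mathrm{CO_2} + 2\,\mathrm{H^+} + 2\,e^- \;\longrightarrow\; \mathrm{CO} + \mathrm{H_2O}$$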

    In the end, they found that what mattered most was not the type of amine used to initially capture carbon dioxide, as many have suspected. Instead, it was the concentration of solo, free-floating carbon dioxide molecules, which avoided bonding with amines but were nevertheless present in the solution. This “solo-CO2” determined the concentration of carbon monoxide that was ultimately produced.

    “We found that it’s easier to react this ‘solo’ CO2, as compared to CO2 that has been captured by the amine,” Leverick offers. “This tells future researchers that this process could be feasible for industrial streams, where high concentrations of carbon dioxide could efficiently be captured and converted into useful chemicals and fuels.”

    “This is not a removal technology, and it’s important to state that,” Gallant stresses. “The value that it does bring is that it allows us to recycle carbon dioxide some number of times while sustaining existing industrial processes, for fewer associated emissions. Ultimately, my dream is that electrochemical systems can be used to facilitate mineralization, and permanent storage of CO2 — a true removal technology. That’s a longer-term vision. And a lot of the science we’re starting to understand is a first step toward designing those processes.”

    This research is supported by Sunway University in Malaysia.