More stories

  • 3 Questions: What should scientists and the public know about nuclear waste?

    Many researchers see an expansion of nuclear power, which produces no greenhouse gas emissions from its power generation, as an essential component of strategies to combat global climate change. Yet there is still strong resistance to such expansion, and much of that is based on the issue of how to safely dispose of the resulting radioactive waste material. MIT recently convened a workshop to help nuclear engineers, policymakers, and academics learn about approaches to communicating accurate information about the management of nuclear waste to students and the public, in hopes of allaying fears and encouraging support for the development of new, safer nuclear power plants around the world.

    Organized by Haruko Wainwright, an MIT assistant professor of nuclear science and engineering and of civil and environmental engineering, the workshop included professors, researchers, industry representatives, and government officials, and was designed to emphasize the multidisciplinary nature of the issue. MIT News asked Wainwright to describe the workshop and its conclusions, which she reported on in a paper just published in the Journal of Environmental Radioactivity.

    Q: What was the main objective of this workshop?

    A: There is a growing concern that, in spite of much excitement about new nuclear reactor deployment and nuclear energy for tackling climate change, relatively little attention is being paid to the thorny question of long-term management of the spent fuel (waste) from these reactors. The government and industry have embraced consent-based siting approaches — that is, finding sites to store and dispose of nuclear waste through broad community participation with equity and environmental justice considered. However, many of us in academia feel that those in the industry are missing key facts to communicate to the public.

    Understanding and managing nuclear waste requires multidisciplinary expertise in nuclear, civil, and chemical engineering as well as environmental and earth sciences. For example, the amount of waste per se, which is always very small for nuclear systems, is not the only factor determining the environmental impacts, because some radionuclides in the waste are vastly more mobile than others, and thus can spread farther and more quickly. Nuclear engineers, environmental scientists, and others need to work together to predict the environmental impacts of radionuclides in the waste generated by the new reactors, and to develop waste isolation strategies for an extended time.
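
    To make the mobility point concrete, here is a minimal back-of-the-envelope sketch in Python. All of the numbers (barrier length, groundwater velocity, retardation factors, half-lives) are illustrative placeholders, not values from the workshop; the point is only that sorption and half-life together, not inventory alone, determine how much of a radionuclide can ever cross a geologic barrier.

    ```python
    # Toy transport-and-decay estimate; all values are illustrative assumptions.
    import math

    GW_VELOCITY = 1.0     # groundwater velocity, meters per year (assumed)
    PATH_LENGTH = 100.0   # thickness of the geologic barrier, meters (assumed)

    def surviving_fraction(half_life_yr: float, retardation: float) -> float:
        """Fraction of a radionuclide that reaches the far side of the barrier
        before decaying; retardation > 1 means the nuclide sorbs to rock and
        travels that many times slower than the groundwater itself."""
        travel_time = PATH_LENGTH * retardation / GW_VELOCITY
        return math.exp(-math.log(2.0) * travel_time / half_life_yr)

    # A strongly sorbed, short-lived nuclide decays away before it gets anywhere;
    # a weakly sorbed, very long-lived one can cross the barrier almost untouched.
    print(f"strongly sorbed, 30-year half-life: {surviving_fraction(30.0, 1000.0):.1e}")
    print(f"weakly sorbed, 10-million-year half-life: {surviving_fraction(1e7, 1.0):.4f}")
    ```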

    We organized this workshop to ensure this collaborative approach is mastered from the start. A second objective was to develop a blueprint for educating next-generation engineers and scientists about nuclear waste and shaping a more broadly educated group of nuclear and general engineers.

    Q: What kinds of innovative teaching practices were discussed and recommended, and are there examples of these practices in action?

    A: Some participants teach project-based or simulation-based courses built around real-world situations. For example, students are divided into several groups representing various stakeholders — such as the public, policymakers, scientists, and governments — and discuss the potential siting of a nuclear waste repository in a community. Such a course helps the students consider the perspectives of different groups, understand a plurality of points of view, and learn how to communicate their ideas and concerns effectively. Other courses may ask students to synthesize key technical facts and numbers, and to develop a Congressional testimony statement or an opinion article for newspapers.

    Q: What are some of the biggest misconceptions people have about nuclear waste, and how do you think these misconceptions can be addressed?

    A: The workshop participants agreed that the broader and life-cycle perspectives are important. Within the nuclear energy life cycle, for example, people focus disproportionately on high-level radioactive waste or spent fuel, which has been highly regulated and well managed. Nuclear systems also produce secondary waste, including low-level waste and uranium mining waste, which gets less attention.

    The participants also believe that the nuclear industry has been exemplary in leading environmental and waste isolation science and technologies. Nuclear waste disposal strategies were developed in the 1950s, much earlier than for other hazardous wastes, which began to receive serious regulation only in the 1970s. In addition, current nuclear waste disposal practices consider compliance periods of isolation spanning thousands of years, while other hazardous waste disposal is not required to consider anything beyond 30 years, even though some of that waste, such as mercury or lead, remains hazardous essentially forever. Finally, there is relatively unregulated waste — such as CO2 from fossil energy, agricultural effluents, and other sources — that is released freely into the biosphere and is already affecting our environment. Yet many people remain more concerned about the relatively well-regulated nuclear waste than about all these unregulated sources.

    Interestingly, many engineers — even nuclear engineers — do not know these facts. We believe that we need to teach students not just cutting-edge technologies, but also broader perspectives, including the history of industries and regulations, as well as environmental science.

    At the same time, we need to move the nuclear community to think more holistically about waste and its environmental impacts from the early stages of design of nuclear systems. We should design new reactors from the “waste up.” We believe that the nuclear industry should continue to lead waste-management technologies and strategies, and also encourage other industries to adopt lifecycle approaches to their own waste to improve overall sustainability.

  • MIT design would harness 40 percent of the sun’s heat to produce clean hydrogen fuel

    MIT engineers aim to produce totally green, carbon-free hydrogen fuel with a new, train-like system of reactors that is driven solely by the sun.

    In a study appearing today in Solar Energy Journal, the engineers lay out the conceptual design for a system that can efficiently produce “solar thermochemical hydrogen.” The system harnesses the sun’s heat to directly split water and generate hydrogen — a clean fuel that can power long-distance trucks, ships, and planes while emitting no greenhouse gases.

    Today, hydrogen is largely produced through processes that involve natural gas and other fossil fuels, making the otherwise green fuel more of a “grey” energy source when considered from the start of its production to its end use. In contrast, solar thermochemical hydrogen, or STCH, offers a totally emissions-free alternative, as it relies entirely on renewable solar energy to drive hydrogen production. But existing STCH designs have limited efficiency: only about 7 percent of incoming sunlight is used to make hydrogen, so the results to date have been low-yield and high-cost.

    In a big step toward realizing solar-made fuels, the MIT team estimates its new design could harness up to 40 percent of the sun’s heat to generate that much more hydrogen. The increase in efficiency could drive down the system’s overall cost, making STCH a potentially scalable, affordable option to help decarbonize the transportation industry.

    “We’re thinking of hydrogen as the fuel of the future, and there’s a need to generate it cheaply and at scale,” says the study’s lead author, Ahmed Ghoniem, the Ronald C. Crane Professor of Mechanical Engineering at MIT. “We’re trying to achieve the Department of Energy’s goal, which is to make green hydrogen by 2030, at $1 per kilogram. To improve the economics, we have to improve the efficiency and make sure most of the solar energy we collect is used in the production of hydrogen.”

    Ghoniem’s study co-authors are Aniket Patankar, first author and MIT postdoc; Harry Tuller, MIT professor of materials science and engineering; Xiao-Yu Wu of the University of Waterloo; and Wonjae Choi at Ewha Womans University in South Korea.

    Solar stations

    Similar to other proposed designs, the MIT system would be paired with an existing source of solar heat, such as a concentrated solar power (CSP) plant — a circular array of hundreds of mirrors that collect and reflect sunlight to a central receiving tower. An STCH system then absorbs the receiver’s heat and directs it to split water and produce hydrogen. This process is very different from electrolysis, which uses electricity instead of heat to split water.

    At the heart of a conceptual STCH system is a two-step thermochemical reaction. In the first step, water in the form of steam is exposed to a metal. This causes the metal to grab oxygen from steam, leaving hydrogen behind. This metal “oxidation” is similar to the rusting of iron in the presence of water, but it occurs much faster. Once hydrogen is separated, the oxidized (or rusted) metal is reheated in a vacuum, which acts to reverse the rusting process and regenerate the metal. With the oxygen removed, the metal can be cooled and exposed to steam again to produce more hydrogen. This process can be repeated hundreds of times.

    The MIT system is designed to optimize this process. The system as a whole resembles a train of box-shaped reactors running on a circular track. In practice, this track would be set around a solar thermal source, such as a CSP tower. Each reactor in the train would house the metal that undergoes the redox, or reversible rusting, process.

    Each reactor would first pass through a hot station, where it would be exposed to the sun’s heat at temperatures of up to 1,500 degrees Celsius. This extreme heat would effectively pull oxygen out of a reactor’s metal. That metal would then be in a “reduced” state — ready to grab oxygen from steam. For this to happen, the reactor would move to a cooler station at temperatures around 1,000 C, where it would be exposed to steam to produce hydrogen.

    Rust and rails

    Other similar STCH concepts have run up against a common obstacle: what to do with the heat released by the reduced reactor as it is cooled. Without recovering and reusing this heat, the system’s efficiency is too low to be practical.

    A second challenge has to do with creating an energy-efficient vacuum where metal can de-rust. Some prototypes generate a vacuum using mechanical pumps, though the pumps are too energy-intensive and costly for large-scale hydrogen production.

    To address these challenges, the MIT design incorporates several energy-saving workarounds. To recover most of the heat that would otherwise escape from the system, reactors on opposite sides of the circular track are allowed to exchange heat through thermal radiation; hot reactors get cooled while cool reactors get heated. This keeps the heat within the system. The researchers also added a second set of reactors that would circle around the first train, moving in the opposite direction. This outer train of reactors would operate at generally cooler temperatures and would be used to evacuate oxygen from the hotter inner train, without the need for energy-consuming mechanical pumps.
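
    A rough sense of why this heat recovery matters can be had from a toy energy balance. The sketch below is not the team’s model; the heat capacity, reduction enthalpy, temperature swing, and oxygen yield per cycle are assumed round numbers, and steam heating and optical losses are ignored. It only shows how the solar-to-hydrogen efficiency climbs as more of the reactors’ sensible heat is recuperated.

    ```python
    # Toy efficiency estimate for a two-step thermochemical redox cycle.
    # All material properties below are assumptions for illustration only.
    HHV_H2 = 286e3     # J per mol H2 (higher heating value)
    CP_OXIDE = 150.0   # J/(mol*K), sensible heat of the redox material (assumed)
    DH_RED = 480e3     # J per mol O released during reduction (assumed)
    DELTA = 0.05       # mol O released per mol oxide per cycle (assumed)
    T_HOT, T_COOL = 1773.0, 1273.0   # K: reduction and steam-oxidation stations

    def solar_to_hydrogen_efficiency(heat_recovery: float) -> float:
        """Fraction of collected solar heat ending up as hydrogen fuel energy.
        heat_recovery is the fraction of the oxide's temperature-swing heat
        returned to incoming reactors, e.g. by radiative exchange across the track."""
        q_sensible = CP_OXIDE * (T_HOT - T_COOL) * (1.0 - heat_recovery)  # per mol oxide
        q_reduction = DH_RED * DELTA                                      # per mol oxide
        fuel_energy = HHV_H2 * DELTA       # one mol H2 per mol O pulled from steam
        return fuel_energy / (q_sensible + q_reduction)

    for r in (0.0, 0.5, 0.9):
        print(f"{r:.0%} heat recovery -> efficiency ~ {solar_to_hydrogen_efficiency(r):.0%}")
    ```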

    These outer reactors would carry a second type of metal that can also easily oxidize. As they circle around, the outer reactors would absorb oxygen from the inner reactors, effectively de-rusting the original metal without the use of energy-intensive vacuum pumps. Both reactor trains would run continuously and would generate separate streams of pure hydrogen and oxygen.

    The researchers carried out detailed simulations of the conceptual design, and found that it would significantly boost the efficiency of solar thermochemical hydrogen production, from 7 percent, as previous designs have demonstrated, to 40 percent.

    “We have to think of every bit of energy in the system, and how to use it, to minimize the cost,” Ghoniem says. “And with this design, we found that everything can be powered by heat coming from the sun. It is able to use 40 percent of the sun’s heat to produce hydrogen.”

    “If this can be realized, it could drastically change our energy future — namely, enabling hydrogen production, 24/7,” says Christopher Muhich, an assistant professor of chemical engineering at Arizona State University, who was not involved in the research. “The ability to make hydrogen is the linchpin to producing liquid fuels from sunlight.”

    In the next year, the team will be building a prototype of the system that they plan to test in concentrated solar power facilities at laboratories of the Department of Energy, which is currently funding the project.

    “When fully implemented, this system would be housed in a little building in the middle of a solar field,” Patankar explains. “Inside the building, there could be one or more trains each having about 50 reactors. And we think this could be a modular system, where you can add reactors to a conveyor belt, to scale up hydrogen production.”

    This work was supported by the Centers for Mechanical Engineering Research and Education at MIT and SUSTech.

  • Printing a new approach to fusion power plant materials

    When Alexander O’Brien sent in his application for graduate school at MIT’s Department of Nuclear Science and Engineering, he had a germ of a research idea already brewing. So when he received a phone call from Professor Mingda Li, he shared it: The student from Arkansas wanted to explore the design of materials that could hold nuclear reactors together.

    Li listened to him patiently and then said, “I think you’d be a really good fit for Professor Ju Li,” O’Brien remembers. Ju Li, the Battelle Energy Alliance Professor in Nuclear Engineering, had wanted to explore 3D printing for nuclear reactors and O’Brien seemed like the right candidate. “At that moment I decided to go to MIT if they accepted me,” O’Brien remembers.

    And they did.

    Under the advisement of Ju Li, the fourth-year doctoral student now explores 3D printing of ceramic-metal composites, materials that can be used to construct fusion power plants.

    An early interest in the sciences

    Growing up in Springdale, Arkansas as a self-described “band nerd,” O’Brien was particularly interested in chemistry and physics. It was one thing to mix baking soda and vinegar to make a “volcano” and quite another to understand why that was happening. “I just enjoyed understanding things on a deeper level and being able to figure out how the world works,” he says.

    At the same time, it was difficult to ignore the economics of energy playing out in his own backyard. When Arkansas, a place that had hardly ever seen earthquakes, started registering them in the wake of fracking in neighboring Oklahoma, it was “like a lightbulb moment” for O’Brien. “I knew this was going to create problems down the line, I knew there’s got to be a better way to do [energy],” he says.

    With the idea of energy alternatives simmering on the back burner, O’Brien enrolled for undergraduate studies at the University of Arkansas. He participated in the school’s marching band — “you show up a week before everyone else and there’s 400 people who automatically become your friends” — and enjoyed the social environment that a large state school could offer.

    O’Brien double-majored in chemical engineering and physics and appreciated “the ability to get your hands dirty on machinery to make things work.” Deciding to begin exploring his interest in energy alternatives, O’Brien researched transition metal dichalcogenides, coatings of which could catalyze the hydrogen evolution reaction and more easily create hydrogen gas, a green energy alternative.

    It was shortly after his sophomore year, however, that O’Brien really found his way in the field of energy alternatives — in nuclear engineering. The American Chemical Society was soliciting student applications for summer study of nuclear chemistry in San Jose, California. O’Brien applied and got accepted. “After years of knowing I wanted to work in green energy but not knowing what that looked like, I very quickly fell in love with [nuclear engineering],” he says. That summer also cemented O’Brien’s decision to attend graduate school. “I came away with this idea of ‘I need to go to grad school because I need to know more about this,’” he says.

    O’Brien especially appreciated an independent project, assigned as part of the summer program: He chose to research nuclear-powered spacecraft. In digging deeper, O’Brien discovered the challenges of powering spacecraft — nuclear was the most viable alternative, but it had to work around extraneous radiation sources in space. Getting to explore national laboratories near San Jose sealed the deal. “I got to visit the National Ignition Facility, which is the big fusion center up there, and just seeing that massive facility entirely designed around this one idea of fusion was kind of mind-blowing to me,” O’Brien says.

    A fresh blueprint for fusion power plants

    O’Brien’s current research at MIT’s Department of Nuclear Science and Engineering (NSE) is equally mind-blowing.

    As the design of new fusion devices kicks into gear, it’s becoming increasingly apparent that the materials we have been using just don’t hold up to the higher temperatures and radiation levels in operating environments, O’Brien says. Additive manufacturing, another term for 3D printing, “opens up a whole new realm of possibilities for what you can do with metals, which is exactly what you’re going to need [to build the next generation of fusion power plants],” he says.

    Metals and ceramics by themselves might not do the job of withstanding high temperatures (750 degrees Celsius is the target) and stresses and radiation, but together they might get there. Although such metal matrix composites have been around for decades, they have been impractical for use in reactors because they’re “difficult to make with any kind of uniformity and really limited in size scale,” O’Brien says. That’s because when you try to place ceramic nanoparticles into a pool of molten metal, they’re going to fall out in whichever direction they want. “3D printing quickly changes that story entirely, to the point where if you want to add these nanoparticles in very specific regions, you have the capability to do that,” O’Brien says.

    O’Brien’s work, which forms the basis of his doctoral thesis and a research paper in the journal Additive Manufacturing, involves implanting metals with ceramic nanoparticles. The net result is a metal matrix composite that is an ideal candidate for fusion devices, especially for the vacuum vessel component, which must be able to withstand high temperatures, extremely corrosive molten salts, and internal helium gas from nuclear transmutation.

    O’Brien’s work focuses on nickel superalloys like Inconel 718, which are especially robust candidates because they can withstand higher operating temperatures while retaining strength. Helium embrittlement, where bubbles of helium caused by fusion neutrons lead to weakness and failure, is a problem with Inconel 718, but composites exhibit potential to overcome this challenge.

    To create the composites, first a mechanical milling process coats the ceramic onto the metal particles. The ceramic nanoparticles act as reinforcing strength agents, especially at high temperatures, and make the materials last longer. When uniformly dispersed, the nanoparticles also absorb helium and radiation defects, which prevents these damage agents from reaching the grain boundaries.
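
    As a rough, textbook-style illustration of how even a few volume percent of ceramic shifts a composite property, a rule-of-mixtures (Voigt upper-bound) estimate can be sketched as below. The stiffness values are assumed for illustration and are not taken from O’Brien’s paper; real metal matrix composite behavior also depends on particle size, dispersion, and the particle-matrix interface.

    ```python
    # Voigt (upper-bound) rule-of-mixtures estimate; values are assumed examples.
    def rule_of_mixtures(metal_value: float, ceramic_value: float, ceramic_fraction: float) -> float:
        """Volume-weighted average of a property over the two phases."""
        return (1.0 - ceramic_fraction) * metal_value + ceramic_fraction * ceramic_value

    E_METAL, E_CERAMIC = 200.0, 400.0   # GPa, assumed stiffness of matrix and reinforcement
    for f in (0.0, 0.02, 0.05):
        print(f"{f:.0%} ceramic: E ~ {rule_of_mixtures(E_METAL, E_CERAMIC, f):.0f} GPa")
    ```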

    The composite then goes through a 3D printing process called powder bed fusion (non-nuclear fusion), in which a laser passes over a bed of this powder, melting it into the desired shapes. “By coating these particles with the ceramic and then only melting very specific regions, we keep the ceramics in the areas that we want, and then you can build up and have a uniform structure,” O’Brien says.

    Printing an exciting future

    The 3D printing of nuclear materials exhibits such promise that O’Brien is looking at pursuing the prospect after his doctoral studies. “The concept of these metal matrix composites and how they can enhance material property is really interesting,” he says. Scaling it up commercially through a startup company is on his radar.

    For now, O’Brien is enjoying research and catching an occasional Broadway show with his wife. While the band nerd doesn’t pick up his saxophone much anymore, he does enjoy driving up to New Hampshire and going backpacking. “That’s my newfound hobby,” O’Brien says, “since I started grad school.”

  • New tools are available to help reduce the energy that AI models devour

    When searching for flights on Google, you may have noticed that each flight’s carbon-emission estimate is now presented next to its cost. It’s a way to inform customers about their environmental impact, and to let them factor this information into their decision-making.

    A similar kind of transparency doesn’t yet exist for the computing industry, despite its carbon emissions exceeding those of the entire airline industry. Escalating this energy demand are artificial intelligence models. Huge, popular models like ChatGPT signal a trend of large-scale artificial intelligence, boosting forecasts that predict data centers will draw up to 21 percent of the world’s electricity supply by 2030.

    The MIT Lincoln Laboratory Supercomputing Center (LLSC) is developing techniques to help data centers reel in energy use. Their techniques range from simple but effective changes, like power-capping hardware, to adopting novel tools that can stop AI training early on. Crucially, they have found that these techniques have a minimal impact on model performance.

    In the wider picture, their work is mobilizing green-computing research and promoting a culture of transparency. “Energy-aware computing is not really a research area, because everyone’s been holding on to their data,” says Vijay Gadepally, senior staff in the LLSC who leads energy-aware research efforts. “Somebody has to start, and we’re hoping others will follow.”

    Curbing power and cooling down

    Like many data centers, the LLSC has seen a significant uptick in the number of AI jobs running on its hardware. Noticing an increase in energy usage, computer scientists at the LLSC were curious about ways to run jobs more efficiently. Green computing is a principle of the center, which is powered entirely by carbon-free energy.

    Training an AI model — the process by which it learns patterns from huge datasets — requires using graphics processing units (GPUs), which are power-hungry hardware. As one example, the GPUs that trained GPT-3 (the precursor to ChatGPT) are estimated to have consumed 1,300 megawatt-hours of electricity, roughly equal to that used by 1,450 average U.S. households per month.

    While most people seek out GPUs because of their computational power, manufacturers offer ways to limit the amount of power a GPU is allowed to draw. “We studied the effects of capping power and found that we could reduce energy consumption by about 12 percent to 15 percent, depending on the model,” Siddharth Samsi, a researcher within the LLSC, says.

    The trade-off for capping power is increasing task time — GPUs will take about 3 percent longer to complete a task, an increase Gadepally says is “barely noticeable” considering that models are often trained over days or even months. In one of their experiments, in which they trained the popular BERT language model, limiting GPU power to 150 watts led to a two-hour increase in training time (from 80 to 82 hours) but saved the equivalent of a U.S. household’s week of energy.
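
    Caps of this kind can be set programmatically through NVIDIA’s management library (NVML). The sketch below uses the pynvml bindings and is a minimal illustration of the intervention described above, not the LLSC’s production tooling; changing a power limit normally requires administrator privileges, and the same effect is available from the nvidia-smi command line tool.

    ```python
    # Minimal GPU power-capping sketch using NVML via the pynvml bindings.
    import pynvml

    def cap_all_gpus(limit_watts: float) -> None:
        pynvml.nvmlInit()
        try:
            for i in range(pynvml.nvmlDeviceGetCount()):
                handle = pynvml.nvmlDeviceGetHandleByIndex(i)
                # NVML works in milliwatts; clamp the request to the device's allowed range.
                min_mw, max_mw = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(handle)
                target_mw = int(min(max(limit_watts * 1000, min_mw), max_mw))
                pynvml.nvmlDeviceSetPowerManagementLimit(handle, target_mw)
                print(f"GPU {i}: power limit set to {target_mw / 1000:.0f} W")
        finally:
            pynvml.nvmlShutdown()

    # For example, the 150-watt cap from the BERT experiment described above:
    # cap_all_gpus(150)
    ```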

    The team then built software that plugs this power-capping capability into the widely used scheduler system, Slurm. The software lets data center owners set limits across their system or on a job-by-job basis.

    “We can deploy this intervention today, and we’ve done so across all our systems,” Gadepally says.

    Side benefits have arisen, too. Since putting power constraints in place, the GPUs on LLSC supercomputers have been running about 30 degrees Fahrenheit cooler and at a more consistent temperature, reducing stress on the cooling system. Running the hardware cooler can potentially also increase reliability and service lifetime. They can now consider delaying the purchase of new hardware — reducing the center’s “embodied carbon,” or the emissions created through the manufacturing of equipment — until the efficiencies gained by using new hardware offset this aspect of the carbon footprint. They’re also finding ways to cut down on cooling needs by strategically scheduling jobs to run at night and during the winter months.

    “Data centers can use these easy-to-implement approaches today to increase efficiencies, without requiring modifications to code or infrastructure,” Gadepally says.

    Taking this holistic look at a data center’s operations to find opportunities to cut down can be time-intensive. To make this process easier for others, the team — in collaboration with Professor Devesh Tiwari and Baolin Li at Northeastern University — recently developed and published a comprehensive framework for analyzing the carbon footprint of high-performance computing systems. System practitioners can use this analysis framework to gain a better understanding of how sustainable their current system is and consider changes for next-generation systems.  

    Adjusting how models are trained and used

    On top of making adjustments to data center operations, the team is devising ways to make AI-model development more efficient.

    When training models, AI developers often focus on improving accuracy, and they build upon previous models as a starting point. To achieve the desired output, they have to figure out what parameters to use, and getting it right can take testing thousands of configurations. This process, called hyperparameter optimization, is one area LLSC researchers have found ripe for cutting down energy waste. 

    “We’ve developed a model that basically looks at the rate at which a given configuration is learning,” Gadepally says. Given that rate, their model predicts the likely performance. Underperforming models are stopped early. “We can give you a very accurate estimate early on that the best model will be in this top 10 of 100 models running,” he says.

    In their studies, this early stopping led to dramatic savings: an 80 percent reduction in the energy used for model training. They’ve applied this technique to models developed for computer vision, natural language processing, and material design applications.
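
    As a schematic illustration of the idea (not the LLSC’s actual learning-curve predictor), the sketch below trains every configuration briefly, keeps only the runs whose early trajectory looks best, and finishes just those. The toy learning curve and all of the numbers are assumptions made for the example.

    ```python
    # Toy early-stopping sweep: briefly train many configurations, keep the most
    # promising few, and finish only those. Learning curves here are simulated.
    import random

    def simulated_accuracy(config_quality: float, epoch: int) -> float:
        """Toy learning curve that rises toward an asymptote set by config_quality."""
        return config_quality * (1 - 0.95 ** epoch) + random.gauss(0, 0.005)

    def early_stopping_search(n_configs=100, check_epoch=10, keep_top=10, max_epoch=100):
        configs = [random.uniform(0.6, 0.95) for _ in range(n_configs)]
        # Phase 1: short runs for everyone, scored by early progress.
        early = sorted(((simulated_accuracy(q, check_epoch), i) for i, q in enumerate(configs)),
                       reverse=True)
        # Phase 2: only the most promising runs are trained to completion.
        survivors = [i for _, i in early[:keep_top]]
        best = max(simulated_accuracy(configs[i], max_epoch) for i in survivors)
        used = keep_top * max_epoch + (n_configs - keep_top) * check_epoch
        naive = n_configs * max_epoch
        print(f"best accuracy ~{best:.3f}; epochs used {used}/{naive} "
              f"({1 - used / naive:.0%} saved vs. training everything fully)")

    random.seed(0)
    early_stopping_search()
    ```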

    “In my opinion, this technique has the biggest potential for advancing the way AI models are trained,” Gadepally says.

    Training is just one part of an AI model’s emissions. The largest contributor to emissions over time is model inference, or the process of running the model live, like when a user chats with ChatGPT. To respond quickly, these models use redundant hardware, running all the time, waiting for a user to ask a question.

    One way to improve inference efficiency is to use the most appropriate hardware. Also with Northeastern University, the team created an optimizer that matches a model with the most carbon-efficient mix of hardware, such as high-power GPUs for the computationally intense parts of inference and low-power central processing units (CPUs) for the less-demanding aspects. This work recently won the best paper award at the International ACM Symposium on High-Performance Parallel and Distributed Computing.

    Using this optimizer can decrease energy use by 10-20 percent while still meeting the same “quality-of-service target” (how quickly the model can respond).
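
    In spirit, the hardware-matching step can be thought of as picking the lowest-energy option that still meets the quality-of-service target. The sketch below is a deliberately simplified stand-in for the published optimizer, with made-up per-request latency and energy figures.

    ```python
    # Simplified hardware selection: lowest energy per request that meets the latency target.
    HARDWARE = {  # name: (latency per request in seconds, energy per request in joules), assumed
        "high-power GPU": (0.05, 40.0),
        "low-power GPU":  (0.12, 18.0),
        "CPU":            (0.40, 10.0),
    }

    def pick_hardware(latency_target_s: float) -> str:
        """Return the lowest-energy option that still meets the latency target."""
        feasible = {name: energy for name, (latency, energy) in HARDWARE.items()
                    if latency <= latency_target_s}
        if not feasible:
            raise ValueError("no hardware meets the latency target")
        return min(feasible, key=feasible.get)

    print(pick_hardware(0.10))  # tight target: only the high-power GPU qualifies
    print(pick_hardware(0.50))  # relaxed target: the CPU wins on energy per request
    ```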

    This tool is especially helpful for cloud customers, who lease systems from data centers and must select hardware from among thousands of options. “Most customers overestimate what they need; they choose over-capable hardware just because they don’t know any better,” Gadepally says.

    Growing green-computing awareness

    The energy saved by implementing these interventions also reduces the associated costs of developing AI, often by a one-to-one ratio. In fact, cost is usually used as a proxy for energy consumption. Given these savings, why aren’t more data centers investing in green techniques?

    “I think it’s a bit of an incentive-misalignment problem,” Samsi says. “There’s been such a race to build bigger and better models that almost every secondary consideration has been put aside.”

    They point out that while some data centers buy renewable-energy credits, these renewables aren’t enough to cover the growing energy demands. The majority of electricity powering data centers comes from fossil fuels, and water used for cooling is contributing to stressed watersheds. 

    Hesitancy may also exist because systematic studies on energy-saving techniques haven’t been conducted. That’s why the team has been pushing their research in peer-reviewed venues in addition to open-source repositories. Some big industry players, like Google DeepMind, have applied machine learning to increase data center efficiency but have not made their work available for others to deploy or replicate. 

    Top AI conferences are now pushing for ethics statements that consider how AI could be misused. The team sees the climate aspect as an AI ethics topic that has not yet been given much attention, but this also appears to be slowly changing. Some researchers are now disclosing the carbon footprint of training the latest models, and industry is showing a shift in energy transparency too, as in this recent report from Meta AI.

    They also acknowledge that transparency is difficult without tools that can show AI developers their consumption. Reporting is on the LLSC roadmap for this year. They want to be able to show every LLSC user, for every job, how much energy they consume and how this amount compares to others, similar to home energy reports.

    Part of this effort requires working more closely with hardware manufacturers to make getting these data off hardware easier and more accurate. If manufacturers can standardize the way the data are read out, then energy-saving and reporting tools can be applied across different hardware platforms. A collaboration is underway between the LLSC researchers and Intel to work on this very problem.

    Even AI developers who are aware of AI’s intense energy needs can’t do much on their own to curb this energy use. The LLSC team wants to help other data centers apply these interventions and provide users with energy-aware options. Their first partnership is with the U.S. Air Force, a sponsor of this research, which operates thousands of data centers. Applying these techniques can make a significant dent in their energy consumption and cost.

    “We’re putting control into the hands of AI developers who want to lessen their footprint,” Gadepally says. “Do I really need to gratuitously train unpromising models? Am I willing to run my GPUs slower to save energy? To our knowledge, no other supercomputing center is letting you consider these options. Using our tools, today, you get to decide.”

    Visit this webpage to see the group’s publications related to energy-aware computing and findings described in this article.

  • Desalination system could produce freshwater that is cheaper than tap water

    Engineers at MIT and in China are aiming to turn seawater into drinking water with a completely passive device that is inspired by the ocean, and powered by the sun.

    In a paper appearing today in the journal Joule, the team outlines the design for a new solar desalination system that takes in saltwater and heats it with natural sunlight.

    The configuration of the device allows water to circulate in swirling eddies, in a manner similar to the much larger “thermohaline” circulation of the ocean. This circulation, combined with the sun’s heat, drives water to evaporate, leaving salt behind. The resulting water vapor can then be condensed and collected as pure, drinkable water. In the meantime, the leftover salt continues to circulate through and out of the device, rather than accumulating and clogging the system.

    The new system has a higher water-production rate and a higher salt-rejection rate than all other passive solar desalination concepts currently being tested.

    The researchers estimate that if the system is scaled up to the size of a small suitcase, it could produce about 4 to 6 liters of drinking water per hour and last several years before requiring replacement parts. At this scale and performance, the system could produce drinking water more cheaply than tap water.

    “For the first time, it is possible for water, produced by sunlight, to be even cheaper than tap water,” says Lenan Zhang, a research scientist in MIT’s Device Research Laboratory.

    The team envisions that a scaled-up device could passively produce enough drinking water to meet the daily requirements of a small family. The system could also supply off-grid, coastal communities where seawater is easily accessible.

    Zhang’s study co-authors include MIT graduate student Yang Zhong and Evelyn Wang, the Ford Professor of Engineering, along with Jintong Gao, Jinfang You, Zhanyu Ye, Ruzhu Wang, and Zhenyuan Xu of Shanghai Jiao Tong University in China.

    A powerful convection

    The team’s new system improves on their previous design — a similar concept of multiple layers, called stages. Each stage contained an evaporator and a condenser that used heat from the sun to passively separate salt from incoming water. That design, which the team tested on the roof of an MIT building, efficiently converted the sun’s energy to evaporate water, which was then condensed into drinkable water. But the salt that was left over quickly accumulated as crystals that clogged the system after a few days. In a real-world setting, a user would have to replace stages frequently, which would significantly increase the system’s overall cost.

    In a follow-up effort, they devised a solution with a similar layered configuration, this time with an added feature that helped to circulate the incoming water as well as any leftover salt. While this design prevented salt from settling and accumulating on the device, it desalinated water at a relatively low rate.

    In the latest iteration, the team believes it has landed on a design that achieves both a high water-production rate and high salt rejection, meaning that the system can quickly and reliably produce drinking water for an extended period. The key to their new design is a combination of their two previous concepts: a multistage system of evaporators and condensers that is also configured to boost the circulation of water — and salt — within each stage.

    “We introduce now an even more powerful convection, that is similar to what we typically see in the ocean, at kilometer-long scales,” Xu says.

    The small circulations generated in the team’s new system are similar to the “thermohaline” convection in the ocean — a phenomenon that drives the movement of water around the world, based on differences in sea temperature (“thermo”) and salinity (“haline”).

    “When seawater is exposed to air, sunlight drives water to evaporate. Once water leaves the surface, salt remains. And the higher the salt concentration, the denser the liquid, and this heavier water wants to flow downward,” Zhang explains. “By mimicking this kilometer-wide phenomena in small box, we can take advantage of this feature to reject salt.”

    Tapping out

    The heart of the team’s new design is a single stage that resembles a thin box, topped with a dark material that efficiently absorbs the heat of the sun. Inside, the box is separated into a top and bottom section. Water can flow through the top half, where the ceiling is lined with an evaporator layer that uses the sun’s heat to warm up and evaporate any water in direct contact. The water vapor is then funneled to the bottom half of the box, where a condensing layer air-cools the vapor into salt-free, drinkable liquid. The researchers set the entire box at a tilt within a larger, empty vessel, then attached a tube from the top half of the box down through the bottom of the vessel, and floated the vessel in saltwater.

    In this configuration, water can naturally push up through the tube and into the box, where the tilt of the box, combined with the thermal energy from the sun, induces the water to swirl as it flows through. The small eddies help to bring water in contact with the upper evaporating layer while keeping salt circulating, rather than settling and clogging.

    The team built several prototypes, with one, three, and 10 stages, and tested their performance in water of varying salinity, including natural seawater and water that was seven times saltier.

    From these tests, the researchers calculated that if each stage were scaled up to a square meter, it would produce up to 5 liters of drinking water per hour, and that the system could desalinate water without accumulating salt for several years. Given this extended lifetime, and the fact that the system is entirely passive, requiring no electricity to run, the team estimates that the overall cost of running the system would be cheaper than what it costs to produce tap water in the United States.
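
    A back-of-the-envelope check, separate from the team’s own modeling, shows why reusing the latent heat across stages is what makes a rate like 5 liters per hour per square meter plausible: peak sunlight delivers roughly one kilowatt per square meter, and evaporating water takes about 2.26 megajoules per kilogram.

    ```python
    # Rough bound on passive solar evaporation; values are standard approximations.
    SOLAR_FLUX = 1000.0     # W per square meter at peak sun (assumed)
    LATENT_HEAT = 2.26e6    # J per kg to evaporate water

    # If the sun's heat were used only once, one square meter could evaporate at most:
    single_use_kg_per_h = SOLAR_FLUX / LATENT_HEAT * 3600
    print(f"single use of heat: ~{single_use_kg_per_h:.1f} L/h per square meter")

    # Hitting ~5 L/h therefore requires the heat released by condensing vapor to be
    # captured and reused by the stages below, roughly this many times over:
    print(f"latent-heat reuse factor needed: ~{5.0 / single_use_kg_per_h:.1f}x")
    ```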

    “We show that this device is capable of achieving a long lifetime,” Zhong says. “That means that, for the first time, it is possible for drinking water produced by sunlight to be cheaper than tap water. This opens up the possibility for solar desalination to address real-world problems.”

    “This is a very innovative approach that effectively mitigates key challenges in the field of desalination,” says Guihua Yu, who develops sustainable water and energy storage systems at the University of Texas at Austin, and was not involved in the research. “The design is particularly beneficial for regions struggling with high-salinity water. Its modular design makes it highly suitable for household water production, allowing for scalability and adaptability to meet individual needs.”

    The research at Shanghai Jiao Tong University was supported by the Natural Science Foundation of China.

  • Tracking US progress on the path to a decarbonized economy

    Investments in new technologies and infrastructure that help reduce greenhouse gas emissions — everything from electric vehicles to heat pumps — are growing rapidly in the United States. Now, a new database enables these investments to be comprehensively monitored in real time, thereby helping to assess the efficacy of policies designed to spur clean investments and address climate change.

    The Clean Investment Monitor (CIM), developed by a team at MIT’s Center for Energy and Environmental Policy Research (CEEPR) led by Institute Innovation Fellow Brian Deese and in collaboration with the Rhodium Group, an independent research firm, provides a timely and methodologically consistent tracking of all announced public and private investments in the manufacture and deployment of clean technologies and infrastructure in the U.S. The CIM offers a means of assessing the country’s progress in transitioning to a cleaner economy and reducing greenhouse gas emissions.

    In the year from July 1, 2022, to June 30, 2023, data from the CIM show, clean investments nationwide totaled $213 billion. To put that figure in perspective, 18 states in the U.S. have GDPs each lower than $213 billion.

    “As clean technology becomes a larger and larger sector in the United States, its growth will have far-reaching implications — for our economy, for our leadership in innovation, and for reducing our greenhouse gas emissions,” says Deese, who served as the director of the White House National Economic Council from January 2021 to February 2023. “The Clean Investment Monitor is a tool designed to help us understand and assess this growth in a real-time, comprehensive way. Our hope is that the CIM will enhance research and improve public policies designed to accelerate the clean energy transition.”

    Launched on Sept. 13, the CIM shows that the $213 billion invested over the last year reflects a 37 percent increase from the $155 billion invested in the previous 12-month period. According to CIM data, the fastest growth has been in the manufacturing sector, where investment grew 125 percent year-on-year, particularly in electric vehicle and solar manufacturing.

    Beyond manufacturing, the CIM also provides data on investment in clean energy production, such as solar, wind, and nuclear; industrial decarbonization, such as sustainable aviation fuels; and retail investments by households and businesses in technologies like heat pumps and zero-emission vehicles. The CIM’s data goes back to 2018, providing a baseline before the passage of the legislation in 2021 and 2022.

    “We’re really excited to bring MIT’s analytical rigor to bear to help develop the Clean Investment Monitor,” says Christopher Knittel, the George P. Shultz Professor of Energy Economics at the MIT Sloan School of Management and CEEPR’s faculty director. “Bolstered by Brian’s keen understanding of the policy world, this tool is poised to become the go-to reference for anyone looking to understand clean investment flows and what drives them.”

    In 2021 and 2022, the U.S. federal government enacted a series of new laws that together aimed to catalyze the largest-ever national investment in clean energy technologies and related infrastructure. The Clean Investment Monitor can also be used to track how well the legislation is living up to expectations.

    The three pieces of federal legislation — the Infrastructure Investment and Jobs Act, enacted in 2021, and the Inflation Reduction Act (IRA) and the CHIPS and Science Act, both enacted in 2022 — provide grants, loans, loan guarantees, and tax incentives to spur investments in technologies that reduce greenhouse gas emissions.

    The effectiveness of the legislation in hastening the U.S. transition to a clean economy will be crucial in determining whether the country reaches its goal of reducing greenhouse gas emissions by 50 percent to 52 percent below 2005 levels in 2030. An analysis earlier this year estimated that the IRA will lead to a 43 percent to 48 percent decline in economywide emissions below 2005 levels by 2035, compared with 27 percent to 35 percent in a reference scenario without the law’s provisions, helping bring the U.S. goal within closer reach.

    The Clean Investment Monitor is available at cleaninvestmentmonitor.org.

  • Pixel-by-pixel analysis yields insights into lithium-ion batteries

    By mining data from X-ray images, researchers at MIT, Stanford University, SLAC National Accelerator Laboratory, and the Toyota Research Institute have made significant new discoveries about the reactivity of lithium iron phosphate, a material used in batteries for electric cars and in other rechargeable batteries.

    The new technique has revealed several phenomena that were previously impossible to see, including variations in the rate of lithium intercalation reactions in different regions of a lithium iron phosphate nanoparticle.

    The paper’s most significant practical finding — that these variations in reaction rate are correlated with differences in the thickness of the carbon coating on the surface of the particles — could lead to improvements in the efficiency of charging and discharging such batteries.

    “What we learned from this study is that it’s the interfaces that really control the dynamics of the battery, especially in today’s modern batteries made from nanoparticles of the active material. That means that our focus should really be on engineering that interface,” says Martin Bazant, the E.G. Roos Professor of Chemical Engineering and a professor of mathematics at MIT, who is the senior author of the study.

    This approach to discovering the physics behind complex patterns in images could also be used to gain insights into many other materials, not only other types of batteries but also biological systems, such as dividing cells in a developing embryo.

    “What I find most exciting about this work is the ability to take images of a system that’s undergoing the formation of some pattern, and learning the principles that govern that,” Bazant says.

    Hongbo Zhao PhD ’21, a former MIT graduate student who is now a postdoc at Princeton University, is the lead author of the new study, which appears today in Nature. Other authors include Richard Braatz, the Edwin R. Gilliland Professor of Chemical Engineering at MIT; William Chueh, an associate professor of materials science and engineering at Stanford and director of the SLAC-Stanford Battery Center; and Brian Storey, senior director of Energy and Materials at the Toyota Research Institute.

    “Until now, we could make these beautiful X-ray movies of battery nanoparticles at work, but it was challenging to measure and understand subtle details of how they function because the movies were so information-rich,” Chueh says. “By applying image learning to these nanoscale movies, we can extract insights that were not previously possible.”

    Modeling reaction rates

    Lithium iron phosphate battery electrodes are made of many tiny particles of lithium iron phosphate, surrounded by an electrolyte solution. A typical particle is about 1 micron in diameter and about 100 nanometers thick. When the battery discharges, lithium ions flow from the electrolyte solution into the material by an electrochemical reaction known as ion intercalation. When the battery charges, the intercalation reaction is reversed, and ions flow in the opposite direction.

    “Lithium iron phosphate (LFP) is an important battery material due to low cost, a good safety record, and its use of abundant elements,” Storey says. “We are seeing an increased use of LFP in the EV market, so the timing of this study could not be better.”

    Before the current study, Bazant had done a great deal of theoretical modeling of patterns formed by lithium-ion intercalation. Lithium iron phosphate prefers to exist in one of two stable phases: either full of lithium ions or empty. Since 2005, Bazant has been working on mathematical models of this phenomenon, known as phase separation, which generates distinctive patterns of lithium-ion flow driven by intercalation reactions. In 2015, while on sabbatical at Stanford, he began working with Chueh to try to interpret images of lithium iron phosphate particles from scanning transmission X-ray microscopy.

    Using this type of microscopy, the researchers can obtain images that reveal the concentration of lithium ions, pixel-by-pixel, at every point in the particle. They can scan the particles several times as the particles charge or discharge, allowing them to create movies of how lithium ions flow in and out of the particles.

    In 2017, Bazant and his colleagues at SLAC received funding from the Toyota Research Institute to pursue further studies using this approach, along with other battery-related research projects.

    By analyzing X-ray images of 63 lithium iron phosphate particles as they charged and discharged, the researchers found that the movement of lithium ions within the material was nearly identical to the computer simulations that Bazant had created earlier. Using all 180,000 pixels as measurements, the researchers trained the computational model to produce equations that accurately describe the nonequilibrium thermodynamics and reaction kinetics of the battery material.
    (Image: in each pair, the actual particles are shown on the left and the simulations on the right. Courtesy of the researchers.)

    “Every little pixel in there is jumping from full to empty, full to empty. And we’re mapping that whole process, using our equations to understand how that’s happening,” Bazant says.

    The researchers also found that the patterns of lithium-ion flow that they observed could reveal spatial variations in the rate at which lithium ions are absorbed at each location on the particle surface.

    “It was a real surprise to us that we could learn the heterogeneities in the system — in this case, the variations in surface reaction rate — simply by looking at the images,” Bazant says. “There are regions that seem to be fast and others that seem to be slow.”
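
    The authors’ image-learning approach recovers full reaction kinetics and nonequilibrium thermodynamics from the movies; as a much simpler illustration of the general idea of reading spatially varying surface rates out of an image time series, one could fit a first-order insertion model pixel by pixel. The sketch below does exactly that on synthetic data; the model, function names, and numbers are assumptions for illustration, not the method used in the paper.

    ```python
    # Fit an apparent first-order insertion rate k at every pixel of a concentration movie.
    import numpy as np

    def fit_rate_map(frames: np.ndarray, times: np.ndarray) -> np.ndarray:
        """frames: (T, H, W) lithium fraction in [0, 1]; times: (T,) seconds.
        Fits c(t) = 1 - exp(-k t) per pixel and returns an (H, W) map of k."""
        y = -np.log(np.clip(1.0 - frames, 1e-9, None))       # linearized: y = k * t
        t = times[:, None, None]
        return (y * t).sum(axis=0) / (t ** 2).sum(axis=0)    # least-squares slope per pixel

    # Synthetic demo: left half of the particle reacts slowly, right half quickly.
    H = W = 32
    k_true = np.full((H, W), 0.02)
    k_true[:, W // 2:] = 0.05
    times = np.linspace(0.0, 100.0, 21)
    frames = 1.0 - np.exp(-k_true[None, :, :] * times[:, None, None])
    k_fit = fit_rate_map(frames, times)
    print(k_fit[0, 0], k_fit[0, -1])   # recovers ~0.02 and ~0.05
    ```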

    Furthermore, the researchers showed that these differences in reaction rate were correlated with the thickness of the carbon coating on the surface of the lithium iron phosphate particles. That carbon coating is applied to lithium iron phosphate to help it conduct electricity — otherwise the material would conduct too slowly to be useful as a battery.

    “We discovered at the nano scale that variation of the carbon coating thickness directly controls the rate, which is something you could never figure out if you didn’t have all of this modeling and image analysis,” Bazant says.

    The findings also offer quantitative support for a hypothesis Bazant formulated several years ago: that the performance of lithium iron phosphate electrodes is limited primarily by the rate of coupled ion-electron transfer at the interface between the solid particle and the carbon coating, rather than the rate of lithium-ion diffusion in the solid.

    Optimized materials

    The results from this study suggest that optimizing the thickness of the carbon layer on the electrode surface could help researchers to design batteries that would work more efficiently, the researchers say.

    “This is the first study that’s been able to directly attribute a property of the battery material with a physical property of the coating,” Bazant says. “The focus for optimizing and designing batteries should be on controlling reaction kinetics at the interface of the electrolyte and electrode.”

    “This publication is the culmination of six years of dedication and collaboration,” Storey says. “This technique allows us to unlock the inner workings of the battery in a way not previously possible. Our next goal is to improve battery design by applying this new understanding.”  

    In addition to using this type of analysis on other battery materials, Bazant anticipates that it could be useful for studying pattern formation in other chemical and biological systems.

    This work was supported by the Toyota Research Institute through the Accelerated Materials Design and Discovery program.

  • Jackson Jewett wants to design buildings that use less concrete

    After three years leading biking tours through U.S. National Parks, Jackson Jewett decided it was time for a change.

    “It was a lot of fun, but I realized I missed buildings,” says Jewett. “I really wanted to be a part of that industry, learn more about it, and reconnect with my roots in the built environment.”

    Jewett grew up in California in what he describes as a “very creative household.”

    “I remember making very elaborate Halloween costumes with my parents, making fun dioramas for school projects, and building forts in the backyard, that kind of thing,” Jewett explains.

    Both of his parents have backgrounds in design; his mother studied art in college and his father is a practicing architect. From a young age, Jewett was interested in following in his father’s footsteps. But when he arrived at the University of California at Berkeley in the midst of the 2009 housing crash, it didn’t seem like the right time. Jewett graduated with a degree in cognitive science and a minor in history of architecture. And even as he led tours through Yellowstone, the Grand Canyon, and other parks, buildings were in the back of his mind.

    It wasn’t just the built environment that Jewett was missing. He also longed for the rigor and structure of an academic environment.

    Jewett arrived at MIT in 2017, initially only planning on completing the master’s program in civil and environmental engineering. It was then that he first met Josephine Carstensen, a newly hired lecturer in the department. Jewett was interested in Carstensen’s work on “topology optimization,” which uses algorithms to design structures that can achieve their performance requirements while using only a limited amount of material. He was particularly interested in applying this approach to concrete design, and he collaborated with Carstensen to help demonstrate its viability.

    After earning his master’s, Jewett spent a year and a half as a structural engineer in New York City. But when Carstensen was hired as a professor, she reached out to Jewett about joining her lab as a PhD student. He was ready for another change.

    Now in the third year of his PhD program, Jewett’s dissertation work builds upon his master’s thesis to further refine algorithms that can design building-scale concrete structures that use less material, which would help lower carbon emissions from the construction industry. It is estimated that the concrete industry alone is responsible for 8 percent of global carbon emissions, so any efforts to reduce that number could help in the fight against climate change.

    Implementing new ideas

    Topology optimization is a small field, with the bulk of the prior work being computational without any experimental verification. The work Jewett completed for his master’s thesis was just the start of a long learning process.

    “I do feel like I’m just getting to the part where I can start implementing my own ideas without as much support as I’ve needed in the past,” says Jewett. “In the last couple of months, I’ve been working on a reinforced concrete optimization algorithm that I hope will be the cornerstone of my thesis.”

    The process of fine-tuning a generative algorithm is slow going, particularly when tackling a multifaceted problem.

    “It can take days or usually weeks to take a step toward making it work as an entire integrated system,” says Jewett. “The days when that breakthrough happens and I can see the algorithm converging on a solution that makes sense — those are really exciting moments.”

    By harnessing computational power, Jewett is searching for materially efficient components that can be used to make up structures such as bridges or buildings. There are other constraints to consider as well, particularly ensuring that the cost of manufacturing isn’t too high. Having worked in the industry before starting the PhD program, Jewett has an eye toward doing work that can be feasibly implemented.

    Inspiring others

    When Jewett first visited MIT campus, he was drawn in by the collaborative environment of the institute and the students’ drive to learn. Now, he’s a part of that process as a teaching assistant and a supervisor in the Undergraduate Research Opportunities Program.  

    Working as a teaching assistant isn’t a requirement for Jewett’s program, but it’s been one of his favorite parts of his time at MIT.

    “The MIT undergrads are so gifted and just constantly impress me,” says Jewett. “Being able to teach, especially in the context of what MIT values, is a lot of fun. And I learn, too. My coding practices have gotten so much better since working with undergrads here.”

    Jewett’s experiences have inspired him to pursue a career in academia after the completion of his program, which he expects to complete in the spring of 2025. But he’s making sure to take care of himself along the way. He still finds time to plan cycling trips with his friends and has gotten into running ever since moving to Boston. So far, he’s completed two marathons.

    “It’s so inspiring to be in a place where so many good ideas are just bouncing back and forth all over campus,” says Jewett. “And on most days, I remember that and it inspires me. But it’s also the case that academics is hard, PhD programs are hard, and MIT — there’s pressure being here, and sometimes that pressure can feel like it’s working against you.”

    Jewett is grateful for the mental health resources that MIT provides students. While he says they can be imperfect, they’ve been a crucial part of his journey.

    “My PhD thesis will be done in 2025, but the work won’t be done. The time horizon of when these things need to be implemented is relatively short if we want to make an impact before global temperatures have already risen too high. My PhD research will be developing a framework for how that could be done with concrete construction, but I’d like to keep thinking about other materials and construction methods even after this project is finished.”