More stories

  • New tools are available to help reduce the energy that AI models devour

    When searching for flights on Google, you may have noticed that each flight’s carbon-emission estimate is now presented next to its cost. It’s a way to inform customers about their environmental impact, and to let them factor this information into their decision-making.

    A similar kind of transparency doesn’t yet exist for the computing industry, despite its carbon emissions exceeding those of the entire airline industry. Artificial intelligence models are escalating this energy demand. Huge, popular models like ChatGPT signal a trend of large-scale artificial intelligence, boosting forecasts that predict data centers will draw up to 21 percent of the world’s electricity supply by 2030.

    The MIT Lincoln Laboratory Supercomputing Center (LLSC) is developing techniques to help data centers rein in energy use. Their techniques range from simple but effective changes, like power-capping hardware, to adopting novel tools that can stop AI training early. Crucially, they have found that these techniques have a minimal impact on model performance.

    In the wider picture, their work is mobilizing green-computing research and promoting a culture of transparency. “Energy-aware computing is not really a research area, because everyone’s been holding on to their data,” says Vijay Gadepally, senior staff in the LLSC who leads energy-aware research efforts. “Somebody has to start, and we’re hoping others will follow.”

    Curbing power and cooling down

    Like many data centers, the LLSC has seen a significant uptick in the number of AI jobs running on its hardware. Noticing an increase in energy usage, computer scientists at the LLSC were curious about ways to run jobs more efficiently. Green computing is a principle of the center, which is powered entirely by carbon-free energy.

    Training an AI model — the process by which it learns patterns from huge datasets — requires using graphics processing units (GPUs), which are power-hungry hardware. As one example, the GPUs that trained GPT-3 (the precursor to ChatGPT) are estimated to have consumed 1,300 megawatt-hours of electricity, roughly equal to that used by 1,450 average U.S. households per month.

    While most people seek out GPUs because of their computational power, manufacturers offer ways to limit the amount of power a GPU is allowed to draw. “We studied the effects of capping power and found that we could reduce energy consumption by about 12 percent to 15 percent, depending on the model,” Siddharth Samsi, a researcher within the LLSC, says.

    The trade-off for capping power is increased task time — GPUs will take about 3 percent longer to complete a task, an increase Gadepally says is “barely noticeable” considering that models are often trained over days or even months. In one of their experiments, in which they trained the popular BERT language model, limiting GPU power to 150 watts led to a two-hour increase in training time (from 80 to 82 hours) but saved the equivalent of a U.S. household’s week of energy.

    The team then built software that plugs this power-capping capability into the widely used scheduler system, Slurm. The software lets data center owners set limits across their system or on a job-by-job basis.

    “We can deploy this intervention today, and we’ve done so across all our systems,” Gadepally says.
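
    To make the mechanics concrete, here is a minimal sketch of what a power cap looks like when applied with NVIDIA’s management library (the nvidia-ml-py package). The LLSC’s Slurm plugin is not published in this article, so the loop below is illustrative only; the 150-watt value echoes the BERT experiment above, and setting limits requires administrator privileges on the node.

    ```python
    # A minimal sketch of capping GPU power with NVIDIA's management library
    # (the nvidia-ml-py / pynvml package). The LLSC's Slurm-integrated tool is
    # not published here; this loop is illustrative, the 150 W value echoes
    # the BERT experiment above, and setting limits requires admin privileges.
    import pynvml

    CAP_WATTS = 150

    pynvml.nvmlInit()
    try:
        for i in range(pynvml.nvmlDeviceGetCount()):
            handle = pynvml.nvmlDeviceGetHandleByIndex(i)
            # NVML reports limits in milliwatts; clamp the cap to what the
            # hardware actually supports.
            lo, hi = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(handle)
            cap_mw = max(lo, min(hi, CAP_WATTS * 1000))
            pynvml.nvmlDeviceSetPowerManagementLimit(handle, cap_mw)
            print(f"GPU {i}: power limit set to {cap_mw // 1000} W")
    finally:
        pynvml.nvmlShutdown()
    ```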

    Side benefits have arisen, too. Since putting power constraints in place, the GPUs on LLSC supercomputers have been running about 30 degrees Fahrenheit cooler and at a more consistent temperature, reducing stress on the cooling system. Running the hardware cooler can potentially also increase reliability and service lifetime. They can now consider delaying the purchase of new hardware — reducing the center’s “embodied carbon,” or the emissions created through the manufacturing of equipment — until the efficiencies gained by using new hardware offset this aspect of the carbon footprint. They’re also finding ways to cut down on cooling needs by strategically scheduling jobs to run at night and during the winter months.

    “Data centers can use these easy-to-implement approaches today to increase efficiencies, without requiring modifications to code or infrastructure,” Gadepally says.

    Taking this holistic look at a data center’s operations to find opportunities to cut down can be time-intensive. To make this process easier for others, the team — in collaboration with Professor Devesh Tiwari and Baolin Li at Northeastern University — recently developed and published a comprehensive framework for analyzing the carbon footprint of high-performance computing systems. System practitioners can use this analysis framework to gain a better understanding of how sustainable their current system is and consider changes for next-generation systems.  

    Adjusting how models are trained and used

    On top of making adjustments to data center operations, the team is devising ways to make AI-model development more efficient.

    When training models, AI developers often focus on improving accuracy, building upon previous models as a starting point. To achieve the desired output, they have to figure out what parameters to use, and getting it right can require testing thousands of configurations. This process, called hyperparameter optimization, is one area LLSC researchers have found ripe for cutting energy waste.

    “We’ve developed a model that basically looks at the rate at which a given configuration is learning,” Gadepally says. Given that rate, their model predicts the likely performance. Underperforming models are stopped early. “We can give you a very accurate estimate early on that the best model will be in this top 10 of 100 models running,” he says.
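
    The article does not detail the predictor itself, but the idea can be illustrated with a simple stand-in: fit each configuration’s early loss curve, extrapolate its final loss, and stop every run not predicted to land in the top k. Everything below (the power-law fit, the thresholds, the synthetic curves) is an assumption for illustration.

    ```python
    # Illustrative sketch of early stopping for hyperparameter search. The
    # LLSC's actual predictor is not published in this article; here we assume
    # a simple power-law fit to each configuration's early loss curve and
    # prune configurations whose extrapolated final loss falls outside the
    # predicted top k. All names and thresholds are hypothetical.
    import numpy as np

    def extrapolate_final_loss(epochs, losses, final_epoch):
        """Fit loss ~ a * epoch**b in log-log space and predict the final loss."""
        b, log_a = np.polyfit(np.log(epochs), np.log(losses), 1)
        return np.exp(log_a) * final_epoch ** b

    def prune_configs(curves, final_epoch=100, keep_top=10):
        """Return indices of configurations worth training to completion."""
        predicted = []
        for losses in curves:
            epochs = np.arange(1, len(losses) + 1)
            predicted.append(extrapolate_final_loss(epochs, losses, final_epoch))
        ranked = np.argsort(predicted)   # lowest predicted loss first
        return set(ranked[:keep_top])    # stop everything else early

    # Example: 100 hypothetical configs observed for only 5 epochs each.
    rng = np.random.default_rng(0)
    rates = rng.uniform(0.1, 0.9, size=100)   # per-config learning speed
    curves = [2.0 * np.arange(1, 6) ** -r for r in rates]
    survivors = prune_configs(curves)
    print(f"Training {len(survivors)} of {len(curves)} configs to completion")
    ```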

    In their studies, this early stopping led to dramatic savings: an 80 percent reduction in the energy used for model training. They’ve applied this technique to models developed for computer vision, natural language processing, and material design applications.

    “In my opinion, this technique has the biggest potential for advancing the way AI models are trained,” Gadepally says.

    Training is just one part of an AI model’s emissions. The largest contributor to emissions over time is model inference, or the process of running the model live, like when a user chats with ChatGPT. To respond quickly, these models use redundant hardware, running all the time, waiting for a user to ask a question.

    One way to improve inference efficiency is to use the most appropriate hardware. Also with Northeastern University, the team created an optimizer that matches a model with the most carbon-efficient mix of hardware, such as high-power GPUs for the computationally intense parts of inference and low-power central processing units (CPUs) for the less-demanding aspects. This work recently won the best paper award at the International ACM Symposium on High-Performance Parallel and Distributed Computing.

    Using this optimizer can decrease energy use by 10 to 20 percent while still meeting the same “quality-of-service target” (how quickly the model can respond).

    This tool is especially helpful for cloud customers, who lease systems from data centers and must select hardware from among thousands of options. “Most customers overestimate what they need; they choose over-capable hardware just because they don’t know any better,” Gadepally says.
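
    The optimizer itself is not reproduced in this article, but its core trade-off can be sketched in a few lines: among hardware options that meet the latency target, choose the one that spends the least energy per request. The option list and all numbers below are hypothetical.

    ```python
    # A toy sketch of carbon-aware hardware selection for inference. The real
    # optimizer's formulation isn't given in this article; here each option is
    # a hypothetical (power draw, latency) pair, and we pick the cheapest
    # energy-per-request option that still meets the quality-of-service target.
    OPTIONS = [
        {"name": "high-power GPU", "watts": 300, "latency_ms": 40},
        {"name": "low-power GPU", "watts": 70, "latency_ms": 120},
        {"name": "CPU only", "watts": 35, "latency_ms": 450},
    ]

    def pick_hardware(qos_ms):
        feasible = [o for o in OPTIONS if o["latency_ms"] <= qos_ms]
        if not feasible:
            raise ValueError("no option meets the QoS target")
        # Energy per request (joules) = watts * seconds spent on the request.
        return min(feasible, key=lambda o: o["watts"] * o["latency_ms"] / 1000)

    print(pick_hardware(qos_ms=150)["name"])  # -> "low-power GPU"
    ```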

    Growing green-computing awareness

    The energy saved by implementing these interventions also reduces the associated costs of developing AI, often by a one-to-one ratio. In fact, cost is usually used as a proxy for energy consumption. Given these savings, why aren’t more data centers investing in green techniques?

    “I think it’s a bit of an incentive-misalignment problem,” Samsi says. “There’s been such a race to build bigger and better models that almost every secondary consideration has been put aside.”

    They point out that while some data centers buy renewable-energy credits, these renewables aren’t enough to cover the growing energy demands. The majority of electricity powering data centers comes from fossil fuels, and water used for cooling is contributing to stressed watersheds. 

    Hesitancy may also exist because systematic studies on energy-saving techniques haven’t been conducted. That’s why the team has been publishing its research in peer-reviewed venues as well as open-source repositories. Some big industry players, like Google DeepMind, have applied machine learning to increase data center efficiency but have not made their work available for others to deploy or replicate.

    Top AI conferences are now pushing for ethics statements that consider how AI could be misused. The team sees the climate aspect as an AI ethics topic that has not yet been given much attention, but this also appears to be slowly changing. Some researchers are now disclosing the carbon footprint of training the latest models, and industry is showing a shift in energy transparency too, as in this recent report from Meta AI.

    They also acknowledge that transparency is difficult without tools that can show AI developers their consumption. Reporting is on the LLSC roadmap for this year. They want to be able to show every LLSC user, for every job, how much energy they consume and how this amount compares to others, similar to home energy reports.

    Part of this effort requires working more closely with hardware manufacturers to make getting these data off hardware easier and more accurate. If manufacturers can standardize the way the data are read out, then energy-saving and reporting tools can be applied across different hardware platforms. A collaboration is underway between the LLSC researchers and Intel to work on this very problem.

    Even AI developers who are aware of AI’s intense energy needs can’t do much on their own to curb it. The LLSC team wants to help other data centers apply these interventions and provide users with energy-aware options. Their first partnership is with the U.S. Air Force, a sponsor of this research, which operates thousands of data centers. Applying these techniques can make a significant dent in their energy consumption and cost.

    “We’re putting control into the hands of AI developers who want to lessen their footprint,” Gadepally says. “Do I really need to gratuitously train unpromising models? Am I willing to run my GPUs slower to save energy? To our knowledge, no other supercomputing center is letting you consider these options. Using our tools, today, you get to decide.”

    Visit this webpage to see the group’s publications related to energy-aware computing and findings described in this article.

  • Desalination system could produce freshwater that is cheaper than tap water

    Engineers at MIT and in China are aiming to turn seawater into drinking water with a completely passive device that is inspired by the ocean, and powered by the sun.

    In a paper appearing today in the journal Joule, the team outlines the design for a new solar desalination system that takes in saltwater and heats it with natural sunlight.

    The configuration of the device allows water to circulate in swirling eddies, in a manner similar to the much larger “thermohaline” circulation of the ocean. This circulation, combined with the sun’s heat, drives water to evaporate, leaving salt behind. The resulting water vapor can then be condensed and collected as pure, drinkable water. In the meantime, the leftover salt continues to circulate through and out of the device, rather than accumulating and clogging the system.

    The new system has a higher water-production rate and a higher salt-rejection rate than all other passive solar desalination concepts currently being tested.

    The researchers estimate that if the system is scaled up to the size of a small suitcase, it could produce about 4 to 6 liters of drinking water per hour and last several years before requiring replacement parts. At this scale and performance, the system could produce drinking water at a rate and price that is cheaper than tap water.

    “For the first time, it is possible for water, produced by sunlight, to be even cheaper than tap water,” says Lenan Zhang, a research scientist in MIT’s Device Research Laboratory.

    The team envisions that a scaled-up device could passively produce enough drinking water to meet the daily requirements of a small family. The system could also supply off-grid, coastal communities where seawater is easily accessible.

    Zhang’s study co-authors include MIT graduate student Yang Zhong and Evelyn Wang, the Ford Professor of Engineering, along with Jintong Gao, Jinfang You, Zhanyu Ye, Ruzhu Wang, and Zhenyuan Xu of Shanghai Jiao Tong University in China.

    A powerful convection

    The team’s new system improves on their previous design — a similar concept of multiple layers, called stages. Each stage contained an evaporator and a condenser that used heat from the sun to passively separate salt from incoming water. That design, which the team tested on the roof of an MIT building, efficiently converted the sun’s energy to evaporate water, which was then condensed into drinkable water. But the salt that was left over quickly accumulated as crystals that clogged the system after a few days. In a real-world setting, a user would have to replace stages frequently, which would significantly increase the system’s overall cost.

    In a follow-up effort, they devised a solution with a similar layered configuration, this time with an added feature that helped to circulate the incoming water as well as any leftover salt. While this design prevented salt from settling and accumulating on the device, it desalinated water at a relatively low rate.

    In the latest iteration, the team believes it has landed on a design that achieves both a high water-production rate and high salt rejection, meaning that the system can quickly and reliably produce drinking water for an extended period. The key to their new design is a combination of their two previous concepts: a multistage system of evaporators and condensers that is also configured to boost the circulation of water — and salt — within each stage.

    “We introduce now an even more powerful convection, that is similar to what we typically see in the ocean, at kilometer-long scales,” Xu says.

    The small circulations generated in the team’s new system are similar to the “thermohaline” convection in the ocean — a phenomenon that drives the movement of water around the world, based on differences in sea temperature (“thermo”) and salinity (“haline”).

    “When seawater is exposed to air, sunlight drives water to evaporate. Once water leaves the surface, salt remains. And the higher the salt concentration, the denser the liquid, and this heavier water wants to flow downward,” Zhang explains. “By mimicking this kilometer-wide phenomenon in a small box, we can take advantage of this feature to reject salt.”

    Tapping out

    The heart of the team’s new design is a single stage that resembles a thin box, topped with a dark material that efficiently absorbs the heat of the sun. Inside, the box is separated into a top and bottom section. Water can flow through the top half, where the ceiling is lined with an evaporator layer that uses the sun’s heat to warm up and evaporate any water in direct contact. The water vapor is then funneled to the bottom half of the box, where a condensing layer air-cools the vapor into salt-free, drinkable liquid. The researchers set the entire box at a tilt within a larger, empty vessel, then attached a tube from the top half of the box down through the bottom of the vessel, and floated the vessel in saltwater.

    In this configuration, water can naturally push up through the tube and into the box, where the tilt of the box, combined with the thermal energy from the sun, induces the water to swirl as it flows through. The small eddies help to bring water in contact with the upper evaporating layer while keeping salt circulating, rather than settling and clogging.

    The team built several prototypes, with one, three, and 10 stages, and tested their performance in water of varying salinity, including natural seawater and water that was seven times saltier.

    From these tests, the researchers calculated that if each stage were scaled up to a square meter, it would produce up to 5 liters of drinking water per hour, and that the system could desalinate water without accumulating salt for several years. Given this extended lifetime, and the fact that the system is entirely passive, requiring no electricity to run, the team estimates that the overall cost of running the system would be cheaper than what it costs to produce tap water in the United States.

    “We show that this device is capable of achieving a long lifetime,” Zhong says. “That means that, for the first time, it is possible for drinking water produced by sunlight to be cheaper than tap water. This opens up the possibility for solar desalination to address real-world problems.”

    “This is a very innovative approach that effectively mitigates key challenges in the field of desalination,” says Guihua Yu, who develops sustainable water and energy storage systems at the University of Texas at Austin, and was not involved in the research. “The design is particularly beneficial for regions struggling with high-salinity water. Its modular design makes it highly suitable for household water production, allowing for scalability and adaptability to meet individual needs.”

    The research at Shanghai Jiao Tong University was supported by the Natural Science Foundation of China.

  • Tracking US progress on the path to a decarbonized economy

    Investments in new technologies and infrastructure that help reduce greenhouse gas emissions — everything from electric vehicles to heat pumps — are growing rapidly in the United States. Now, a new database enables these investments to be comprehensively monitored in real time, thereby helping to assess the efficacy of policies designed to spur clean investments and address climate change.

    The Clean Investment Monitor (CIM), developed by a team at MIT’s Center for Energy and Environmental Policy Research (CEEPR) led by Institute Innovation Fellow Brian Deese in collaboration with the Rhodium Group, an independent research firm, provides timely and methodologically consistent tracking of all announced public and private investments in the manufacture and deployment of clean technologies and infrastructure in the U.S. The CIM offers a means of assessing the country’s progress in transitioning to a cleaner economy and reducing greenhouse gas emissions.

    In the year from July 1, 2022, to June 30, 2023, data from the CIM show, clean investments nationwide totaled $213 billion. To put that figure in perspective, 18 states in the U.S. have GDPs each lower than $213 billion.

    “As clean technology becomes a larger and larger sector in the United States, its growth will have far-reaching implications — for our economy, for our leadership in innovation, and for reducing our greenhouse gas emissions,” says Deese, who served as the director of the White House National Economic Council from January 2021 to February 2023. “The Clean Investment Monitor is a tool designed to help us understand and assess this growth in a real-time, comprehensive way. Our hope is that the CIM will enhance research and improve public policies designed to accelerate the clean energy transition.”

    Launched on Sept. 13, the CIM shows that the $213 billion invested over the last year reflects a 37 percent increase from the $155 billion invested in the previous 12-month period. According to CIM data, the fastest growth has been in the manufacturing sector, where investment grew 125 percent year-on-year, particularly in electric vehicle and solar manufacturing.

    Beyond manufacturing, the CIM also provides data on investment in clean energy production, such as solar, wind, and nuclear; industrial decarbonization, such as sustainable aviation fuels; and retail investments by households and businesses in technologies like heat pumps and zero-emission vehicles. The CIM’s data go back to 2018, providing a baseline from before the passage of the major federal clean energy legislation of 2021 and 2022.

    “We’re really excited to bring MIT’s analytical rigor to bear to help develop the Clean Investment Monitor,” says Christopher Knittel, the George P. Shultz Professor of Energy Economics at the MIT Sloan School of Management and CEEPR’s faculty director. “Bolstered by Brian’s keen understanding of the policy world, this tool is poised to become the go-to reference for anyone looking to understand clean investment flows and what drives them.”

    In 2021 and 2022, the U.S. federal government enacted a series of new laws that together aimed to catalyze the largest-ever national investment in clean energy technologies and related infrastructure. The Clean Investment Monitor can also be used to track how well the legislation is living up to expectations.

    The three pieces of federal legislation — the Infrastructure Investment and Jobs Act, enacted in 2021, and the Inflation Reduction Act (IRA) and the CHIPS and Science Act, both enacted in 2022 — provide grants, loans, loan guarantees, and tax incentives to spur investments in technologies that reduce greenhouse gas emissions.

    The effectiveness of the legislation in hastening the U.S. transition to a clean economy will be crucial in determining whether the country reaches its goal of reducing greenhouse gas emissions by 50 percent to 52 percent below 2005 levels in 2030. An analysis earlier this year estimated that the IRA will lead to a 43 percent to 48 percent decline in economywide emissions below 2005 levels by 2035, compared with 27 percent to 35 percent in a reference scenario without the law’s provisions, helping bring the U.S. goal closer in reach.

    The Clean Investment Monitor is available at cleaninvestmentmonitor.org.

  • Pixel-by-pixel analysis yields insights into lithium-ion batteries

    By mining data from X-ray images, researchers at MIT, Stanford University, the SLAC National Accelerator Laboratory, and the Toyota Research Institute have made significant new discoveries about the reactivity of lithium iron phosphate, a material used in batteries for electric cars and in other rechargeable batteries.

    The new technique has revealed several phenomena that were previously impossible to see, including variations in the rate of lithium intercalation reactions in different regions of a lithium iron phosphate nanoparticle.

    The paper’s most significant practical finding — that these variations in reaction rate are correlated with differences in the thickness of the carbon coating on the surface of the particles — could lead to improvements in the efficiency of charging and discharging such batteries.

    “What we learned from this study is that it’s the interfaces that really control the dynamics of the battery, especially in today’s modern batteries made from nanoparticles of the active material. That means that our focus should really be on engineering that interface,” says Martin Bazant, the E.G. Roos Professor of Chemical Engineering and a professor of mathematics at MIT, who is the senior author of the study.

    This approach to discovering the physics behind complex patterns in images could also be used to gain insights into many other materials, not only other types of batteries but also biological systems, such as dividing cells in a developing embryo.

    “What I find most exciting about this work is the ability to take images of a system that’s undergoing the formation of some pattern, and learning the principles that govern that,” Bazant says.

    Hongbo Zhao PhD ’21, a former MIT graduate student who is now a postdoc at Princeton University, is the lead author of the new study, which appears today in Nature. Other authors include Richard Braatz, the Edwin R. Gilliland Professor of Chemical Engineering at MIT; William Chueh, an associate professor of materials science and engineering at Stanford and director of the SLAC-Stanford Battery Center; and Brian Storey, senior director of Energy and Materials at the Toyota Research Institute.

    “Until now, we could make these beautiful X-ray movies of battery nanoparticles at work, but it was challenging to measure and understand subtle details of how they function because the movies were so information-rich,” Chueh says. “By applying image learning to these nanoscale movies, we can extract insights that were not previously possible.”

    Modeling reaction rates

    Lithium iron phosphate battery electrodes are made of many tiny particles of lithium iron phosphate, surrounded by an electrolyte solution. A typical particle is about 1 micron in diameter and about 100 nanometers thick. When the battery discharges, lithium ions flow from the electrolyte solution into the material by an electrochemical reaction known as ion intercalation. When the battery charges, the intercalation reaction is reversed, and ions flow in the opposite direction.

    “Lithium iron phosphate (LFP) is an important battery material due to low cost, a good safety record, and its use of abundant elements,” Storey says. “We are seeing an increased use of LFP in the EV market, so the timing of this study could not be better.”

    Before the current study, Bazant had done a great deal of theoretical modeling of patterns formed by lithium-ion intercalation. Lithium iron phosphate prefers to exist in one of two stable phases: either full of lithium ions or empty. Since 2005, Bazant has been working on mathematical models of this phenomenon, known as phase separation, which generates distinctive patterns of lithium-ion flow driven by intercalation reactions. In 2015, while on sabbatical at Stanford, he began working with Chueh to try to interpret images of lithium iron phosphate particles from scanning transmission X-ray microscopy.

    Using this type of microscopy, the researchers can obtain images that reveal the concentration of lithium ions, pixel-by-pixel, at every point in the particle. They can scan the particles several times as the particles charge or discharge, allowing them to create movies of how lithium ions flow in and out of the particles.

    In 2017, Bazant and his colleagues at SLAC received funding from the Toyota Research Institute to pursue further studies using this approach, along with other battery-related research projects.

    By analyzing X-ray images of 63 lithium iron phosphate particles as they charged and discharged, the researchers found that the movement of lithium ions within the material was nearly identical to the computer simulations that Bazant had created earlier. Using all 180,000 pixels as measurements, the researchers trained the computational model to produce equations that accurately describe the nonequilibrium thermodynamics and reaction kinetics of the battery material.

    “Every little pixel in there is jumping from full to empty, full to empty. And we’re mapping that whole process, using our equations to understand how that’s happening,” Bazant says.

    The researchers also found that the patterns of lithium-ion flow that they observed could reveal spatial variations in the rate at which lithium ions are absorbed at each location on the particle surface.

    “It was a real surprise to us that we could learn the heterogeneities in the system — in this case, the variations in surface reaction rate — simply by looking at the images,” Bazant says. “There are regions that seem to be fast and others that seem to be slow.”
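
    The study’s learned model couples nonequilibrium thermodynamics with reaction kinetics, which is well beyond a short sketch, but the core move (inferring a spatially varying rate from a concentration movie) can be illustrated with a toy law. Here we assume dc/dt = k(x)(1 - c) and recover k at each pixel by least squares; every detail below is hypothetical.

    ```python
    # Minimal, hypothetical sketch of recovering a spatially varying surface
    # reaction rate from a concentration "movie". The study's actual model is
    # far richer; here we assume the toy law dc/dt = k(x) * (1 - c) and fit k
    # at each pixel by least squares over time.
    import numpy as np

    def fit_rate_map(movie, dt):
        """movie: array of shape (T, H, W) with concentrations in [0, 1]."""
        dcdt = np.gradient(movie, dt, axis=0)   # per-pixel time derivative
        driving = 1.0 - movie                   # assumed reaction driving force
        # Least-squares slope of dc/dt against (1 - c), pixel by pixel.
        num = (dcdt * driving).sum(axis=0)
        den = (driving ** 2).sum(axis=0) + 1e-12
        return num / den                        # k(x): an (H, W) rate map

    # Synthetic check: a fast-reacting left half and a slow right half.
    T, H, W, dt = 50, 32, 32, 0.1
    k_true = np.where(np.arange(W) < W // 2, 0.8, 0.2)[None, :].repeat(H, 0)
    t = np.arange(T)[:, None, None] * dt
    movie = 1.0 - np.exp(-k_true[None] * t)     # exact solution of the toy law
    print(np.allclose(fit_rate_map(movie, dt), k_true, atol=1e-2))  # expect True
    ```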

    Furthermore, the researchers showed that these differences in reaction rate were correlated with the thickness of the carbon coating on the surface of the lithium iron phosphate particles. That carbon coating is applied to lithium iron phosphate to help it conduct electricity — otherwise the material would conduct too slowly to be useful as a battery.

    “We discovered at the nano scale that variation of the carbon coating thickness directly controls the rate, which is something you could never figure out if you didn’t have all of this modeling and image analysis,” Bazant says.

    The findings also offer quantitative support for a hypothesis Bazant formulated several years ago: that the performance of lithium iron phosphate electrodes is limited primarily by the rate of coupled ion-electron transfer at the interface between the solid particle and the carbon coating, rather than the rate of lithium-ion diffusion in the solid.

    Optimized materials

    The results from this study suggest that optimizing the thickness of the carbon layer on the electrode surface could help researchers to design batteries that would work more efficiently, the researchers say.

    “This is the first study that’s been able to directly attribute a property of the battery material with a physical property of the coating,” Bazant says. “The focus for optimizing and designing batteries should be on controlling reaction kinetics at the interface of the electrolyte and electrode.”

    “This publication is the culmination of six years of dedication and collaboration,” Storey says. “This technique allows us to unlock the inner workings of the battery in a way not previously possible. Our next goal is to improve battery design by applying this new understanding.”  

    In addition to using this type of analysis on other battery materials, Bazant anticipates that it could be useful for studying pattern formation in other chemical and biological systems.

    This work was supported by the Toyota Research Institute through the Accelerated Materials Design and Discovery program.

  • Jackson Jewett wants to design buildings that use less concrete

    After three years leading biking tours through U.S. National Parks, Jackson Jewett decided it was time for a change.

    “It was a lot of fun, but I realized I missed buildings,” says Jewett. “I really wanted to be a part of that industry, learn more about it, and reconnect with my roots in the built environment.”

    Jewett grew up in California in what he describes as a “very creative household.”

    “I remember making very elaborate Halloween costumes with my parents, making fun dioramas for school projects, and building forts in the backyard, that kind of thing,” Jewett explains.

    Both of his parents have backgrounds in design; his mother studied art in college and his father is a practicing architect. From a young age, Jewett was interested in following in his father’s footsteps. But when he arrived at the University of California at Berkeley in the midst of the 2009 housing crash, it didn’t seem like the right time. Jewett graduated with a degree in cognitive science and a minor in history of architecture. And even as he led tours through Yellowstone, the Grand Canyon, and other parks, buildings were in the back of his mind.

    It wasn’t just the built environment that Jewett was missing. He also longed for the rigor and structure of an academic environment.

    Jewett arrived at MIT in 2017, initially only planning on completing the master’s program in civil and environmental engineering. It was then that he first met Josephine Carstensen, a newly hired lecturer in the department. Jewett was interested in Carstensen’s work on “topology optimization,” which uses algorithms to design structures that can achieve their performance requirements while using only a limited amount of material. He was particularly interested in applying this approach to concrete design, and he collaborated with Carstensen to help demonstrate its viability.

    After earning his master’s, Jewett spent a year and a half as a structural engineer in New York City. But when Carstensen was hired as a professor, she reached out to Jewett about joining her lab as a PhD student. He was ready for another change.

    Now in the third year of his PhD program, Jewett’s dissertation work builds upon his master’s thesis to further refine algorithms that can design building-scale concrete structures that use less material, which would help lower carbon emissions from the construction industry. It is estimated that the concrete industry alone is responsible for 8 percent of global carbon emissions, so any efforts to reduce that number could help in the fight against climate change.

    Implementing new ideas

    Topology optimization is a small field, with the bulk of the prior work being computational without any experimental verification. The work Jewett completed for his master’s thesis was just the start of a long learning process.

    “I do feel like I’m just getting to the part where I can start implementing my own ideas without as much support as I’ve needed in the past,” says Jewett. “In the last couple of months, I’ve been working on a reinforced concrete optimization algorithm that I hope will be the cornerstone of my thesis.”

    The process of fine-tuning a generative algorithm is slow going, particularly when tackling a multifaceted problem.

    “It can take days or usually weeks to take a step toward making it work as an entire integrated system,” says Jewett. “The days when that breakthrough happens and I can see the algorithm converging on a solution that makes sense — those are really exciting moments.”

    By harnessing computational power, Jewett is searching for materially efficient components that can be used to make up structures such as bridges or buildings. There are other constraints to consider as well, particularly ensuring that the cost of manufacturing isn’t too high. Having worked in the industry before starting the PhD program, Jewett has an eye toward doing work that can be feasibly implemented.

    Inspiring others

    When Jewett first visited the MIT campus, he was drawn in by the collaborative environment of the Institute and the students’ drive to learn. Now, he’s a part of that process as a teaching assistant and a supervisor in the Undergraduate Research Opportunities Program.

    Working as a teaching assistant isn’t a requirement for Jewett’s program, but it’s been one of his favorite parts of his time at MIT.

    “The MIT undergrads are so gifted and just constantly impress me,” says Jewett. “Being able to teach, especially in the context of what MIT values, is a lot of fun. And I learn, too. My coding practices have gotten so much better since working with undergrads here.”

    Jewett’s experiences have inspired him to pursue a career in academia after the completion of his program, which he expects to complete in the spring of 2025. But he’s making sure to take care of himself along the way. He still finds time to plan cycling trips with his friends and has gotten into running ever since moving to Boston. So far, he’s completed two marathons.

    “It’s so inspiring to be in a place where so many good ideas are just bouncing back and forth all over campus,” says Jewett. “And on most days, I remember that and it inspires me. But it’s also the case that academics is hard, PhD programs are hard, and MIT — there’s pressure being here, and sometimes that pressure can feel like it’s working against you.”

    Jewett is grateful for the mental health resources that MIT provides students. While he says they can be imperfect, they’ve been a crucial part of his journey.

    “My PhD thesis will be done in 2025, but the work won’t be done. The time horizon of when these things need to be implemented is relatively short if we want to make an impact before global temperatures have already risen too high. My PhD research will be developing a framework for how that could be done with concrete construction, but I’d like to keep thinking about other materials and construction methods even after this project is finished.”

  • Study suggests energy-efficient route to capturing and converting CO2

    In the race to draw down greenhouse gas emissions around the world, scientists at MIT are looking to carbon-capture technologies to decarbonize the most stubborn industrial emitters.

    Steel, cement, and chemical manufacturing are especially difficult industries to decarbonize, as carbon and fossil fuels are inherent ingredients in their production. Technologies that can capture carbon emissions and convert them into forms that feed back into the production process could help to reduce the overall emissions from these “hard-to-abate” sectors.

    But thus far, experimental technologies that capture and convert carbon dioxide do so as two separate processes that themselves require a huge amount of energy to run. The MIT team is looking to combine the two into one integrated, far more energy-efficient system that could potentially run on renewable energy to both capture and convert carbon dioxide from concentrated, industrial sources.

    In a study appearing today in ACS Catalysis, the researchers reveal how carbon dioxide can be both captured and converted through a single electrochemical process. The process involves using an electrode to attract carbon dioxide released from a sorbent, and to convert it into a reduced, reusable form.

    Others have reported similar demonstrations, but the mechanisms driving the electrochemical reaction have remained unclear. The MIT team carried out extensive experiments to determine that driver, and found that, in the end, it came down to the partial pressure of carbon dioxide. In other words, the more pure carbon dioxide that makes contact with the electrode, the more efficiently the electrode can capture and convert the molecule.

    Knowledge of this main driver, or “active species,” can help scientists tune and optimize similar electrochemical systems to efficiently capture and convert carbon dioxide in an integrated process.

    The study’s results imply that, while these electrochemical systems would probably not work for very dilute environments (for instance, to capture and convert carbon emissions directly from the air), they would be well-suited to the highly concentrated emissions generated by industrial processes, particularly those that have no obvious renewable alternative.

    “We can and should switch to renewables for electricity production. But deeply decarbonizing industries like cement or steel production is challenging and will take a longer time,” says study author Betar Gallant, the Class of 1922 Career Development Associate Professor at MIT. “Even if we get rid of all our power plants, we need some solutions to deal with the emissions from other industries in the shorter term, before we can fully decarbonize them. That’s where we see a sweet spot, where something like this system could fit.”

    The study’s MIT co-authors are lead author and postdoc Graham Leverick and graduate student Elizabeth Bernhardt, along with Aisyah Illyani Ismail, Jun Hui Law, Arif Arifutzzaman, and Mohamed Kheireddine Aroua of Sunway University in Malaysia.

    Breaking bonds

    Carbon-capture technologies are designed to capture emissions, or “flue gas,” from the smokestacks of power plants and manufacturing facilities. This is done primarily using large retrofits to funnel emissions into chambers filled with a “capture” solution — a mix of amines, or ammonia-based compounds, that chemically bind with carbon dioxide, producing a stable form that can be separated out from the rest of the flue gas.

    High temperatures are then applied, typically in the form of fossil-fuel-generated steam, to release the captured carbon dioxide from its amine bond. In its pure form, the gas can then be pumped into storage tanks or underground, mineralized, or further converted into chemicals or fuels.

    “Carbon capture is a mature technology, in that the chemistry has been known for about 100 years, but it requires really large installations, and is quite expensive and energy-intensive to run,” Gallant notes. “What we want are technologies that are more modular and flexible and can be adapted to more diverse sources of carbon dioxide. Electrochemical systems can help to address that.”

    Her group at MIT is developing an electrochemical system that both recovers the captured carbon dioxide and converts it into a reduced, usable product. Such an integrated system, rather than a decoupled one, she says, could be entirely powered with renewable electricity rather than fossil-fuel-derived steam.

    Their concept centers on an electrode that would fit into existing chambers of carbon-capture solutions. When a voltage is applied to the electrode, electrons flow onto the reactive form of carbon dioxide and convert it to a product using protons supplied from water. This makes the sorbent available to bind more carbon dioxide, rather than using steam to do the same.

    Gallant previously demonstrated this electrochemical process could work to capture and convert carbon dioxide into a solid carbonate form.

    “We showed that this electrochemical process was feasible in very early concepts,” she says. “Since then, there have been other studies focused on using this process to attempt to produce useful chemicals and fuels. But there’s been inconsistent explanations of how these reactions work, under the hood.”

    Solo CO2

    In the new study, the MIT team took a magnifying glass under the hood to tease out the specific reactions driving the electrochemical process. In the lab, they generated amine solutions that resemble the industrial capture solutions used to extract carbon dioxide from flue gas. They methodically altered various properties of each solution, such as the pH, concentration, and type of amine, then ran each solution past an electrode made from silver — a metal that is widely used in electrolysis studies and known to efficiently convert carbon dioxide to carbon monoxide. They then measured the concentration of carbon monoxide that was converted at the end of the reaction, and compared this number against that of every other solution they tested, to see which parameter had the most influence on how much carbon monoxide was produced.

    In the end, they found that what mattered most was not the type of amine used to initially capture carbon dioxide, as many have suspected. Instead, it was the concentration of solo, free-floating carbon dioxide molecules, which avoided bonding with amines but were nevertheless present in the solution. This “solo-CO2” determined the concentration of carbon monoxide that was ultimately produced.

    “We found that it’s easier to react this ‘solo’ CO2, as compared to CO2 that has been captured by the amine,” Leverick offers. “This tells future researchers that this process could be feasible for industrial streams, where high concentrations of carbon dioxide could efficiently be captured and converted into useful chemicals and fuels.”
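
    The screening the team describes amounts to a sensitivity analysis: sweep the solution properties, measure the converted product, and see which property tracks the output. A toy version, with invented numbers standing in for the real measurements, might look like this.

    ```python
    # Hypothetical sketch of the kind of sensitivity analysis described above:
    # which solution property best predicts CO output? The field names and
    # data are invented for illustration; the study's statistics aren't shown.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 40
    trials = {
        "pH":           rng.uniform(8, 12, n),
        "amine_conc_M": rng.uniform(0.5, 3.0, n),
        "free_CO2_mM":  rng.uniform(0.1, 5.0, n),
    }
    # Pretend measurement: CO output driven mainly by free ("solo") CO2.
    co_output = 2.0 * trials["free_CO2_mM"] + rng.normal(0, 0.3, n)

    for name, values in trials.items():
        r = np.corrcoef(values, co_output)[0, 1]
        print(f"{name:>13}: correlation with CO produced = {r:+.2f}")
    ```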

    “This is not a removal technology, and it’s important to state that,” Gallant stresses. “The value that it does bring is that it allows us to recycle carbon dioxide some number of times while sustaining existing industrial processes, for fewer associated emissions. Ultimately, my dream is that electrochemical systems can be used to facilitate mineralization, and permanent storage of CO2 — a true removal technology. That’s a longer-term vision. And a lot of the science we’re starting to understand is a first step toward designing those processes.”

    This research is supported by Sunway University in Malaysia.

  • Fast-tracking fusion energy’s arrival with AI and accessibility

    As the impacts of climate change continue to grow, so does interest in fusion’s potential as a clean energy source. While fusion reactions have been studied in laboratories since the 1930s, there are still many critical questions scientists must answer to make fusion power a reality, and time is of the essence. As part of its strategy to accelerate fusion energy’s arrival and reach carbon neutrality by 2050, the U.S. Department of Energy (DoE) has announced new funding for a project led by researchers at MIT’s Plasma Science and Fusion Center (PSFC) and four collaborating institutions.

    Cristina Rea, a research scientist and group leader at the PSFC, will serve as the primary investigator for the newly funded three-year collaboration to pilot the integration of fusion data into a system that can be read by AI-powered tools. The PSFC, together with scientists from the College of William and Mary, the University of Wisconsin at Madison, Auburn University, and the nonprofit HDF Group, plan to create a holistic fusion data platform, the elements of which could offer unprecedented access for researchers, especially underrepresented students. The project aims to encourage diverse participation in fusion and data science, both in academia and the workforce, through outreach programs led by the group’s co-investigators, of whom four out of five are women. 

    The DoE’s award, part of a $29 million funding package for seven projects across 19 institutions, will support the group’s efforts to distribute data produced by fusion devices like the PSFC’s Alcator C-Mod, a donut-shaped “tokamak” that utilized powerful magnets to control and confine fusion reactions. Alcator C-Mod operated from 1991 to 2016 and its data are still being studied, thanks in part to the PSFC’s commitment to the free exchange of knowledge.

    Currently, there are nearly 50 public experimental magnetic confinement-type fusion devices; however, both historical and current data from these devices can be difficult to access. Some fusion databases require signing user agreements, and not all data are catalogued and organized the same way. Moreover, it can be difficult to leverage machine learning, a class of AI tools, for data analysis and to enable scientific discovery without time-consuming data reorganization. The result is fewer scientists working on fusion, greater barriers to discovery, and a bottleneck in harnessing AI to accelerate progress.

    The project’s proposed data platform addresses technical barriers by being FAIR — Findable, Accessible, Interoperable, Reusable — and by adhering to UNESCO’s Open Science (OS) recommendations to improve the transparency and inclusivity of science; all of the researchers’ deliverables will adhere to FAIR and OS principles, as required by the DoE. The platform’s databases will be built using MDSplusML, an upgraded version of the MDSplus open-source software developed by PSFC researchers in the 1980s to catalogue the results of Alcator C-Mod’s experiments. Today, nearly 40 fusion research institutes use MDSplus to store and provide external access to their fusion data. The release of MDSplusML aims to continue that legacy of open collaboration.
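
    For readers unfamiliar with MDSplus, fetching a signal through its Python bindings looks roughly like the sketch below. The tree name, shot number, and node path are placeholders rather than actual Alcator C-Mod entries, and a working setup needs the site’s tree paths configured.

    ```python
    # Hypothetical sketch of pulling a signal with MDSplus's Python bindings,
    # the open-source layer the new platform builds on. The tree, shot, and
    # node names below are placeholders, not real Alcator C-Mod paths.
    from MDSplus import Tree

    shot = 1150401001                 # placeholder shot number
    tree = Tree("cmod", shot)         # open the experiment's data tree
    node = tree.getNode(r"\ip")       # placeholder node: plasma current
    current = node.data()             # NumPy array of the signal
    time = node.dim_of().data()       # matching time base
    print(current.shape, time.shape)
    ```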

    The researchers intend to address barriers to participation for women and disadvantaged groups not only by improving general access to fusion data, but also through a subsidized summer school that will focus on topics at the intersection of fusion and machine learning, which will be held at William and Mary for the next three years.

    Of the importance of their research, Rea says, “This project is about responding to the fusion community’s needs and setting ourselves up for success. Scientific advancements in fusion are enabled via multidisciplinary collaboration and cross-pollination, so accessibility is absolutely essential. I think we all understand now that diverse communities have more diverse ideas, and they allow faster problem-solving.”

    The collaboration’s work also aligns with vital areas of research identified in the International Atomic Energy Agency’s “AI for Fusion” Coordinated Research Project (CRP). Rea was selected as the technical coordinator for the IAEA’s CRP emphasizing community engagement and knowledge access to accelerate fusion research and development. In a letter of support written for the group’s proposed project, the IAEA stated that, “the work [the researchers] will carry out […] will be beneficial not only to our CRP but also to the international fusion community in large.”

    PSFC Director and Hitachi America Professor of Engineering Dennis Whyte adds, “I am thrilled to see PSFC and our collaborators be at the forefront of applying new AI tools while simultaneously encouraging and enabling extraction of critical data from our experiments.”

    “Having the opportunity to lead such an important project is extremely meaningful, and I feel a responsibility to show that women are leaders in STEM,” says Rea. “We have an incredible team, strongly motivated to improve our fusion ecosystem and to contribute to making fusion energy a reality.”

  • Ms. Nuclear Energy is winning over nuclear skeptics

    First-year MIT nuclear science and engineering (NSE) doctoral student Kaylee Cunningham is not the first person to notice that nuclear energy has a public relations problem. But her commitment to dispel myths about the alternative power source has earned her the moniker “Ms. Nuclear Energy” on TikTok and a devoted fan base on the social media platform.

    Cunningham’s activism kicked into gear shortly after a week-long trip to Iceland to study geothermal energy. During a discussion about how the country was going to achieve its net zero energy goals, a representative from the University of Reykjavik balked at Cunningham’s suggestion of including a nuclear option in the alternative energy mix. “The response I got was that we’re a peace-loving nation, we don’t do that,” Cunningham remembers. “I was appalled by the reaction. I mean, we’re talking energy, not weapons, here, right?” she asks. Incredulous, Cunningham made a TikTok that targeted the misinformation. Overnight she garnered 10,000 followers, and “Ms. Nuclear Energy” was off to the races. Ms. Nuclear Energy is now Cunningham’s TikTok handle.

    A theater and science nerd

    TikTok is a fitting platform for a theater nerd like Cunningham. Born in Melrose, Massachusetts, Cunningham’s childhood was punctuated by moves to places where her roofer father’s work took the family. She moved to North Carolina shortly after fifth grade and fell in love with theater. “I was doing theater classes, the spring musical, it was my entire world,” Cunningham remembers. When she moved again, this time to Florida halfway through her first year of high school, she found the spring musical had already been cast. But she could help behind the scenes. Through that work, Cunningham gained her first real exposure to hands-on tech. She was hooked.

    Soon Cunningham was part of a team that represented her high school at the student Astronaut Challenge, an aerospace competition run by Florida State University. Statewide winners got to fly a space shuttle simulator at the Kennedy Space Center and participate in additional engineering challenges. Cunningham’s team was involved in creating a proposal to help NASA’s Asteroid Redirect Mission, designed to help the agency gather a large boulder from a near-earth asteroid. The task was Cunningham’s induction into an understanding of radiation and “anything nuclear.” Her high school engineering teacher, Nirmala Arunachalam, encouraged Cunningham’s interest in the subject.

    The Astronaut Challenge might just have been the end of Cunningham’s path in nuclear engineering had it not been for her mother. In high school, Cunningham had also enrolled in computer science classes, and her love of the subject earned her a scholarship to Norwich University in Vermont, where she had attended a cybersecurity camp. Cunningham had already put down the college deposit for Norwich.

    But Cunningham’s mother persuaded her daughter to pay another visit to the University of Florida, where she had expressed interest in pursuing nuclear engineering. To her pleasant surprise, the department chair, Professor James Baciak, pulled out all the stops, bringing mother and daughter on a tour of the on-campus nuclear reactor and promising Cunningham a paid research position. Cunningham was sold, and Baciak has been a mentor throughout her research career.

    Merging nuclear engineering and computer science

    Undergraduate research internships, including one at Oak Ridge National Laboratory, where she could combine her two loves, nuclear engineering and computer science, convinced Cunningham she wanted to pursue a similar path in graduate school.

    Cunningham’s undergraduate application to MIT had been rejected but that didn’t deter her from applying to NSE for graduate school. Having spent her early years in an elementary school barely 20 minutes from campus, she had grown up hearing that “the smartest people in the world go to MIT.” Cunningham figured that if she got into MIT, it would be “like going back home to Massachusetts” and that she could fit right in.

    Advised by Professor Michael Short, Cunningham is pursuing her passions for both computer science and nuclear engineering in her doctoral studies.

    The activism continues

    Simultaneously, Cunningham is determined to keep her activism going.

    Her ability to digest “complex topics into something understandable to people who have no connection to academia” has helped Cunningham on TikTok. “It’s been something I’ve been doing all my life with my parents and siblings and extended family,” she says.

    Punctuating her video snippets with humor — a Simpsons reference is par for the course — helps Cunningham break through to her audience who love her goofy and tongue-in-cheek approach to the subject matter without compromising accuracy. “Sometimes I do stupid dances and make a total fool of myself, but I’ve really found my niche by being willing to engage and entertain people and educate them at the same time.”

    Such education needs to be an important part of an industry that has received its share of misunderstanding, Cunningham says. “Technical people trying to communicate in a way that the general public doesn’t understand is such a concerning thing,” she adds. Case in point: the response in the wake of the Three Mile Island accident, whose containment systems prevented massive contamination leaks. It was a perfect example of how well our safety regulations actually work, Cunningham says, “but you’d never guess from the PR fallout from it all.”

    As Ms. Nuclear Energy, Cunningham receives her share of skepticism. One viewer questioned the safety of nuclear reactors if “tons of pollution” was spewing out of them. Cunningham produced a TikTok that addressed this misconception. Pointing to the “pollution” in a photo, Cunningham clarifies that it’s just water vapor. The TikTok has garnered over a million views. “It really goes to show how starving for accurate information the public really is,” Cunningham says. “In this age of having all the information we could ever want at our fingertips, it’s hard to sift through and decide what’s real and accurate and what isn’t.”

    Another reason for her advocacy: doing her part to encourage young people toward a nuclear science or engineering career. “If we’re going to start putting up tons of small modular reactors around the country, we need people to build them, people to run them, and we need regulatory bodies to inspect and keep them safe,” Cunningham points out. “And we don’t have enough people entering the workforce in comparison to those that are retiring from the workforce,” she adds. “I’m able to engage those younger audiences and put nuclear engineering on their radar,” Cunningham says. The advocacy has been paying off: Cunningham regularly receives — and responds to — inquiries from high school junior girls looking for advice on pursuing nuclear engineering.

    All the activism is in service toward a clear end goal. “At the end of the day, the fight is to save the planet,” Cunningham says. “I honestly believe that nuclear power is the best chance we’ve got to fight climate change and keep our planet alive.”