More stories

  • MIT expands research collaboration with Commonwealth Fusion Systems to build net energy fusion machine, SPARC

    MIT’s Plasma Science and Fusion Center (PSFC) will substantially expand its fusion energy research and education activities under a new five-year agreement with Institute spinout Commonwealth Fusion Systems (CFS).

    “This expanded relationship puts MIT and PSFC in a prime position to be an even stronger academic leader that can help deliver the research and education needs of the burgeoning fusion energy industry, in part by utilizing the world’s first burning plasma and net energy fusion machine, SPARC,” says PSFC director Dennis Whyte. “CFS will build SPARC and develop a commercial fusion product, while MIT PSFC will focus on its core mission of cutting-edge research and education.”

    Commercial fusion energy has the potential to play a significant role in combating climate change, and there is a concurrent increase in interest from the energy sector, governments, and foundations. The new agreement, administered by the MIT Energy Initiative (MITEI), where CFS is a startup member, will help PSFC expand its fusion technology efforts with a wider variety of sponsors. The collaboration enables rapid execution at scale and technology transfer into the commercial sector as soon as possible.

    This new agreement doubles CFS’ financial commitment to PSFC, enabling greater recruitment and support of students, staff, and faculty. “We’ll significantly increase the number of graduate students and postdocs, and, just as important, they will be working on a more diverse set of fusion science and technology topics,” notes Whyte. It extends the collaboration between PSFC and CFS that resulted in numerous advances toward fusion power plants, including last fall’s demonstration of a high-temperature superconducting (HTS) fusion electromagnet with record-setting field strength of 20 tesla.

    The combined magnetic fusion efforts at PSFC will surpass those in place during the operation of the pioneering Alcator C-Mod tokamak, which ran from 1993 to 2016. This increase in activity reflects a moment when multiple fusion energy technologies are seeing rapidly accelerating development worldwide, and the emergence of a new fusion energy industry that would require thousands of trained people.

    MITEI director Robert Armstrong adds, “Our goal from the beginning was to create a membership model that would allow startups who have specific research challenges to leverage the MITEI ecosystem, including MIT faculty, students, and other MITEI members. The team at the PSFC and MITEI have worked seamlessly to support CFS, and we are excited for this next phase of the relationship.”

    PSFC is supporting CFS’ efforts toward realizing the SPARC fusion platform, which facilitates rapid development and refinement of elements (including HTS magnets) needed to build ARC, a compact, modular, high-field fusion power plant that would set the stage for commercial fusion energy production. The concepts originated in Whyte’s nuclear science and engineering class 22.63 (Principles of Fusion Engineering) and have been carried forward by students and PSFC staff, many of whom helped found CFS; the new activity will expand research into advanced technologies for the envisioned pilot plant.

    “This has been an incredibly effective collaboration that has resulted in a major breakthrough for commercial fusion with the successful demonstration of revolutionary fusion magnet technology that will enable the world’s first commercially relevant net energy fusion device, SPARC, currently under construction,” says Bob Mumgaard SM ’15, PhD ’15, CEO of Commonwealth Fusion Systems. “We look forward to this next phase in the collaboration with MIT as we tackle the critical research challenges ahead for the next steps toward fusion power plant development.”

    In the push for commercial fusion energy, the next five years are critical, requiring intensive work on materials longevity, heat transfer, fuel recycling, maintenance, and other crucial aspects of power plant development. It will need innovation from almost every engineering discipline. “Having great teams working now, it will cut the time needed to move from SPARC to ARC, and really unleash the creativity. And the thing MIT does so well is cut across disciplines,” says Whyte.

    “To address the climate crisis, the world needs to deploy existing clean energy solutions as widely and as quickly as possible, while at the same time developing new technologies — and our goal is that those new technologies will include fusion power,” says Maria T. Zuber, MIT’s vice president for research. “To make new climate solutions a reality, we need focused, sustained collaborations like the one between MIT and Commonwealth Fusion Systems. Delivering fusion power onto the grid is a monumental challenge, and the combined capabilities of these two organizations are what the challenge demands.”

    On a strategic level, climate change and the imperative need for widely implementable carbon-free energy have helped orient the PSFC team toward scalability. “Building one or 10 fusion plants doesn’t make a difference — we have to build thousands,” says Whyte. “The design decisions we make will impact the ability to do that down the road. The real enemy here is time, and we want to remove as many impediments as possible and commit to funding a new generation of scientific leaders. Those are critically important in a field with as much interdisciplinary integration as fusion.”

  • Team creates map for production of eco-friendly metals

    In work that could usher in more efficient, eco-friendly processes for producing important metals like lithium, iron, and cobalt, researchers from MIT and the SLAC National Accelerator Laboratory have mapped what is happening at the atomic level behind a particularly promising approach called metal electrolysis.

    By creating maps for a wide range of metals, they not only determined which metals should be easiest to produce using this approach, but also identified fundamental barriers behind the efficient production of others. As a result, the researchers’ map could become an important design tool for optimizing the production of all these metals.

    The work could also aid the development of metal-air batteries, cousins of the lithium-ion batteries used in today’s electric vehicles.

    Most of the metals key to society today are produced using fossil fuels. These fuels generate the high temperatures necessary to convert the original ore into its purified metal. But that process is a significant source of greenhouse gases — steel alone accounts for some 7 percent of carbon dioxide emissions globally. As a result, researchers around the world are working to identify more eco-friendly ways to produce metals.

    One promising approach is metal electrolysis, in which a metal oxide, the ore, is zapped with electricity to create pure metal with oxygen as the byproduct. That is the reaction explored at the atomic level in new research reported in the April 8 issue of the journal Chemistry of Materials.
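    The thermodynamics behind that reaction can be made concrete with a textbook calculation: the minimum reversible voltage needed to decompose a metal oxide follows from its Gibbs free energy of formation, E = ΔG/(nF). The sketch below is illustrative only, using rounded standard-state values and an idealized overall reaction; it is not the paper's method:

    ```python
    # Minimum reversible decomposition voltage E = dG / (n * F) for a few oxides.
    # dG values are rounded, standard-state (25 C) formation energies in kJ/mol;
    # n is moles of electrons transferred per formula unit. Illustrative only.
    F = 96485.0  # Faraday constant, C per mol of electrons

    oxides = {
        # oxide: (|dG_f| in kJ/mol, n electrons for full reduction to metal)
        "Fe2O3": (742.0, 6),
        "Al2O3": (1582.0, 6),
        "MgO": (569.0, 2),
        "Li2O": (561.0, 2),
    }

    for name, (dg_kj, n) in oxides.items():
        e_min = dg_kj * 1e3 / (n * F)  # volts
        print(f"{name}: minimum ~{e_min:.2f} V")  # e.g. Fe2O3 ~1.28 V
    ```

    Real cells must run above this thermodynamic floor because of additional, metal-dependent barriers at the electrodes; it is exactly those barriers that the researchers' maps characterize.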

    Donald Siegel is department chair and professor of mechanical engineering at the University of Texas at Austin. Says Siegel, who was not involved in the Chemistry of Materials study: “This work is an important contribution to improving the efficiency of metal production from metal oxides. It clarifies our understanding of low-carbon electrolysis processes by tracing the underlying thermodynamics back to elementary metal-oxygen interactions. I expect that this work will aid in the creation of design rules that will make these industrially important processes less reliant on fossil fuels.”

    Yang Shao-Horn, the JR East Professor of Engineering in MIT’s Department of Materials Science and Engineering (DMSE) and Department of Mechanical Engineering, is a leader of the current work, with Michal Bajdich of SLAC.

    “Here we aim to establish some basic understanding to predict the efficiency of electrochemical metal production and metal-air batteries from examining computed thermodynamic barriers for the conversion between metal and metal oxides,” says Shao-Horn, who is on the research team for MIT’s new Center for Electrification and Decarbonization of Industry, a winner of the Institute’s first-ever Climate Grand Challenges competition. Shao-Horn is also affiliated with MIT’s Materials Research Laboratory and Research Laboratory of Electronics.

    In addition to Shao-Horn and Bajdich, other authors of the Chemistry of Materials paper are Jaclyn R. Lunger, first author and a DMSE graduate student; mechanical engineering senior Naomi Lutz; and DMSE graduate student Jiayu Peng.

    Other applications

    The work could also aid in developing metal-air batteries such as lithium-air, aluminum-air, and zinc-air batteries. These cousins of the lithium-ion batteries used in today’s electric vehicles have the potential to electrify aviation because their energy densities are much higher. However, they are not yet on the market due to a variety of problems including inefficiency.

    Charging metal-air batteries also involves electrolysis. As a result, the new atomic-level understanding of these reactions could not only help engineers develop efficient electrochemical routes for metal production, but also design more efficient metal-air batteries.

    Learning from water splitting

    Electrolysis is also used to split water into oxygen and hydrogen, with the hydrogen storing the resulting energy. That hydrogen, in turn, could become an eco-friendly alternative to fossil fuels. Since much more is known about water electrolysis, the focus of Bajdich’s work at SLAC, than about the electrolysis of metal oxides, the team compared the two processes for the first time.

    The result: “Slowly, we uncovered the elementary steps involved in metal electrolysis,” says Bajdich. The work was challenging, says Lunger, because “it was unclear to us what those steps are. We had to figure out how to get from A to B,” or from a metal oxide to metal and oxygen.

    All of the work was conducted with supercomputer simulations. “It’s like a sandbox of atoms, and then we play with them. It’s a little like Legos,” says Bajdich. More specifically, the team explored different scenarios for the electrolysis of several metals. Each involved different catalysts, molecules that boost the speed of a reaction.

    Says Lunger, “To optimize the reaction, you want to find the catalyst that makes it most efficient.” The team’s map is essentially a guide for designing the best catalysts for each different metal.

    What’s next? Lunger noted that the current work focused on the electrolysis of pure metals. “I’m interested in seeing what happens in more complex systems involving multiple metals. Can you make the reaction more efficient if there’s sodium and lithium present, or cadmium and cesium?”

    This work was supported by a U.S. Department of Energy Office of Science Graduate Student Research award. It was also supported by an MIT Energy Initiative fellowship, the Toyota Research Institute through the Accelerated Materials Design and Discovery Program, the Catalysis Science Program of the Department of Energy’s Office of Basic Energy Sciences, and the Differentiate Program of the U.S. Advanced Research Projects Agency-Energy.

  • Engineers use artificial intelligence to capture the complexity of breaking waves

    Waves break once they swell to a critical height, before cresting and crashing into a spray of droplets and bubbles. These waves can be as large as a surfer’s point break and as small as a gentle ripple rolling to shore. For decades, the dynamics of how and when a wave breaks have been too complex to predict.

    Now, MIT engineers have found a new way to model how waves break. The team used machine learning along with data from wave-tank experiments to tweak equations that have traditionally been used to predict wave behavior. Engineers typically rely on such equations to help them design resilient offshore platforms and structures. But until now, the equations have not been able to capture the complexity of breaking waves.

    The updated model made more accurate predictions of how and when waves break, the researchers found. For instance, the model estimated a wave’s steepness just before breaking, and its energy and frequency after breaking, more accurately than the conventional wave equations.

    Their results, published today in the journal Nature Communications, will help scientists understand how a breaking wave affects the water around it. Knowing precisely how these waves interact can help hone the design of offshore structures. It can also improve predictions for how the ocean interacts with the atmosphere. Having better estimates of how waves break can help scientists predict, for instance, how much carbon dioxide and other atmospheric gases the ocean can absorb.

    “Wave breaking is what puts air into the ocean,” says study author Themis Sapsis, an associate professor of mechanical and ocean engineering and an affiliate of the Institute for Data, Systems, and Society at MIT. “It may sound like a detail, but if you multiply its effect over the area of the entire ocean, wave breaking starts becoming fundamentally important to climate prediction.”

    The study’s co-authors include lead author and MIT postdoc Debbie Eeltink, Hubert Branger and Christopher Luneau of Aix-Marseille University, Amin Chabchoub of Kyoto University, Jerome Kasparian of the University of Geneva, and T.S. van den Bremer of Delft University of Technology.

    Learning tank

    To predict the dynamics of a breaking wave, scientists typically take one of two approaches: They either attempt to precisely simulate the wave at the scale of individual molecules of water and air, or they run experiments to try to characterize waves with actual measurements. The first approach is computationally expensive and difficult to simulate even over a small area; the second requires a huge amount of time to run enough experiments to yield statistically significant results.

    The MIT team instead borrowed pieces from both approaches to develop a more efficient and accurate model using machine learning. The researchers started with a set of equations that is considered the standard description of wave behavior. They aimed to improve the model by “training” it on data of breaking waves from actual experiments.

    “We had a simple model that doesn’t capture wave breaking, and then we had the truth, meaning experiments that involve wave breaking,” Eeltink explains. “Then we wanted to use machine learning to learn the difference between the two.”

    The researchers obtained wave breaking data by running experiments in a 40-meter-long tank. The tank was fitted at one end with a paddle which the team used to initiate each wave. The team set the paddle to produce a breaking wave in the middle of the tank. Gauges along the length of the tank measured the water’s height as waves propagated down the tank.

    “It takes a lot of time to run these experiments,” Eeltink says. “Between each experiment you have to wait for the water to completely calm down before you launch the next experiment, otherwise they influence each other.”

    Safe harbor

    In all, the team ran about 250 experiments, the data from which they used to train a type of machine-learning algorithm known as a neural network. Specifically, the algorithm is trained to compare the real waves in experiments with the predicted waves in the simple model, and based on any differences between the two, the algorithm tunes the model to fit reality.
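    A minimal sketch of that residual-learning setup, in Python with PyTorch and synthetic stand-in data; the feature names and the toy "simple model" are assumptions for illustration, not the study's actual equations:

    ```python
    # Learn a correction that closes the gap between a cheap wave model and
    # "experiments". All functions and data here are synthetic placeholders.
    import torch
    import torch.nn as nn

    def simple_model(x):
        # Stand-in for the conventional (non-breaking) wave equations.
        return 0.5 * x[:, :1]

    # Toy "experiments": truth deviates from the simple model nonlinearly,
    # mimicking effects (like breaking) the cheap model cannot capture.
    x = torch.rand(256, 3)  # e.g. steepness, depth, frequency (illustrative)
    truth = simple_model(x) + 0.2 * torch.tanh(4 * x[:, :1] - 2)

    corrector = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, 1))
    opt = torch.optim.Adam(corrector.parameters(), lr=1e-2)

    for step in range(500):
        pred = simple_model(x) + corrector(x)  # cheap model + learned correction
        loss = ((pred - truth) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

    print(loss.item())  # small residual: the tuned model now matches "reality"
    ```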

    After training the algorithm on their experimental data, the team introduced the model to entirely new data — in this case, measurements from two independent experiments, each run at separate wave tanks with different dimensions. In these tests, they found the updated model made more accurate predictions than the simple, untrained model, for instance making better estimates of a breaking wave’s steepness.

    The new model also captured an essential property of breaking waves known as the “downshift,” in which the frequency of a wave is shifted to a lower value. The speed of a wave depends on its frequency. For ocean waves, lower frequencies move faster than higher frequencies. Therefore, after the downshift, the wave will move faster. The new model predicts the change in frequency, before and after each breaking wave, which could be especially relevant in preparing for coastal storms.
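    For deep-water gravity waves, the standard dispersion relation gives a phase speed of c = g/(2πf), so a downshifted (lower) frequency means a faster wave. A quick check with hypothetical pre- and post-breaking frequencies:

    ```python
    # Deep-water phase speed c = g / (2 * pi * f): lower frequency, faster wave.
    import math

    g = 9.81  # gravitational acceleration, m/s^2

    def phase_speed(f_hz):
        return g / (2 * math.pi * f_hz)

    f_before, f_after = 0.10, 0.08  # Hz; hypothetical pre/post-breaking values
    print(phase_speed(f_before))  # ~15.6 m/s before the downshift
    print(phase_speed(f_after))   # ~19.5 m/s after: the swell arrives sooner
    ```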

    “When you want to forecast when high waves of a swell would reach a harbor, and you want to leave the harbor before those waves arrive, then if you get the wave frequency wrong, then the speed at which the waves are approaching is wrong,” Eeltink says.

    The team’s updated wave model is in the form of an open-source code that others could potentially use, for instance in climate simulations of the ocean’s potential to absorb carbon dioxide and other atmospheric gases. The code can also be worked into simulated tests of offshore platforms and coastal structures.

    “The number one purpose of this model is to predict what a wave will do,” Sapsis says. “If you don’t model wave breaking right, it would have tremendous implications for how structures behave. With this, you could simulate waves to help design structures better, more efficiently, and without huge safety factors.”

    This research is supported, in part, by the Swiss National Science Foundation, and by the U.S. Office of Naval Research.

  • How can we reduce the carbon footprint of global computing?

    The voracious appetite for energy from the world’s computers and communications technology presents a clear threat to the globe’s warming climate. That was the blunt assessment from presenters in the intensive two-day Climate Implications of Computing and Communications workshop held on March 3 and 4, hosted by MIT’s Climate and Sustainability Consortium (MCSC), MIT-IBM Watson AI Lab, and the Schwarzman College of Computing.

    The virtual event featured rich discussions and highlighted opportunities for collaboration among an interdisciplinary group of MIT faculty and researchers and industry leaders across multiple sectors — underscoring the power of academia and industry coming together.

    “If we continue with the existing trajectory of compute energy, by 2040, we are supposed to hit the world’s energy production capacity. The increase in compute energy and demand has been increasing at a much faster rate than the world energy production capacity increase,” said Bilge Yildiz, the Breene M. Kerr Professor in the MIT departments of Nuclear Science and Engineering and Materials Science and Engineering, one of the workshop’s 18 presenters. This computing energy projection draws from the Semiconductor Research Corporation’s decadal report.

    To cite just one example: Information and communications technology already accounts for more than 2 percent of global energy demand, on a par with the aviation industry’s emissions from fuel.

    “We are at the very beginning of this data-driven world. We really need to start thinking about this and act now,” said presenter Evgeni Gousev, senior director at Qualcomm.

    Innovative energy-efficiency options

    To that end, the workshop presentations explored a host of energy-efficiency options, including specialized chip design, data center architecture, better algorithms, hardware modifications, and changes in consumer behavior. Industry leaders from AMD, Ericsson, Google, IBM, iRobot, NVIDIA, Qualcomm, Tertill, Texas Instruments, and Verizon outlined their companies’ energy-saving programs, while experts from across MIT provided insight into current research that could yield more efficient computing.

    Panel topics ranged from “Custom hardware for efficient computing” to “Hardware for new architectures” to “Algorithms for efficient computing,” among others.

    Visual representation of the conversation during the workshop session entitled “Energy Efficient Systems.”

    Image: Haley McDevitt


    The goal, said Yildiz, is to improve the energy efficiency associated with computing by more than a million-fold.

    “I think part of the answer of how we make computing much more sustainable has to do with specialized architectures that have very high level of utilization,” said Darío Gil, IBM senior vice president and director of research, who stressed that solutions should be as “elegant” as possible.

    For example, Gil illustrated an innovative chip design that uses vertical stacking to reduce the distance data has to travel, and thus reduces energy consumption. Surprisingly, more effective use of tape — a traditional medium for primary data storage — combined with specialized hard drives (HDD), can yield dramatic savings in carbon dioxide emissions.

    Gil and presenters Bill Dally, chief scientist and senior vice president of research at NVIDIA; Ahmad Bahai, CTO of Texas Instruments; and others zeroed in on storage. Gil compared data to a floating iceberg, in which we can have fast access to the “hot data” of the smaller visible part, while the “cold data,” the large underwater mass, represents data that tolerates higher latency. Think about digital photo storage, Gil said. “Honestly, are you really retrieving all of those photographs on a continuous basis?” Storage systems should provide an optimized mix of HDD for hot data and tape for cold data, based on data access patterns.

    Bahai stressed the significant energy savings gained from segmenting standby and full processing. “We need to learn how to do nothing better,” he said. Dally spoke of mimicking the way our brain wakes up from a deep sleep: “We can wake [computers] up much faster, so we don’t need to keep them running in full speed.”

    Several workshop presenters spoke of a focus on “sparsity,” in which most elements of a matrix are zero, as a way to improve efficiency in neural networks. Or as Dally said, “Never put off till tomorrow, where you could put off forever,” explaining that efficiency is not “getting the most information with the fewest bits. It’s doing the most with the least energy.”

    Holistic and multidisciplinary approaches

    “We need both efficient algorithms and efficient hardware, and sometimes we need to co-design both the algorithm and the hardware for efficient computing,” said Song Han, a panel moderator and assistant professor in the Department of Electrical Engineering and Computer Science (EECS) at MIT.

    Some presenters were optimistic about innovations already underway. According to Ericsson’s research, as much as 15 percent of carbon emissions globally can be reduced through the use of existing solutions, noted Mats Pellbäck Scharp, head of sustainability at Ericsson. For example, GPUs are more efficient than CPUs for AI, and the progression from 3G to 5G networks boosts energy savings.

    “5G is the most energy efficient standard ever,” said Scharp. “We can build 5G without increasing energy consumption.”

    Companies such as Google are optimizing energy use at their data centers through improved design, technology, and renewable energy. “Five of our data centers around the globe are operating near or above 90 percent carbon-free energy,” said Jeff Dean, Google’s senior fellow and senior vice president of Google Research.

    Yet, pointing to the possible slowdown in the doubling of transistors in an integrated circuit — or Moore’s Law — “We need new approaches to meet this compute demand,” said Sam Naffziger, AMD senior vice president, corporate fellow, and product technology architect. Naffziger spoke of addressing performance “overkill.” For example, “we’re finding in the gaming and machine learning space we can make use of lower-precision math to deliver an image that looks just as good with 16-bit computations as with 32-bit computations, and instead of legacy 32b math to train AI networks, we can use lower-energy 8b or 16b computations.”
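    A small NumPy sketch of that precision trade-off: casting a matrix multiply from 32-bit to 16-bit floats halves the bytes that must be stored and moved, at the cost of a small relative error (the array sizes here are arbitrary):

    ```python
    # Compare memory footprint and result drift for fp32 vs fp16 matrix multiply.
    import numpy as np

    rng = np.random.default_rng(0)
    a = rng.standard_normal((1024, 1024), dtype=np.float32)
    b = rng.standard_normal((1024, 1024), dtype=np.float32)

    c32 = a @ b
    c16 = (a.astype(np.float16) @ b.astype(np.float16)).astype(np.float32)

    print(a.nbytes, a.astype(np.float16).nbytes)  # 4 MiB vs 2 MiB per matrix
    rel_err = np.abs(c32 - c16).max() / np.abs(c32).max()
    print(rel_err)  # small; often acceptable for graphics and ML workloads
    ```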

    Visual representation of the conversation during the workshop session entitled “Wireless, networked, and distributed systems.”

    Image: Haley McDevitt


    Other presenters singled out compute at the edge as a prime energy hog.

    “We also have to change the devices that are put in our customers’ hands,” said Heidi Hemmer, senior vice president of engineering at Verizon. As we think about how we use energy, it is common to jump to data centers — but it really starts at the device itself, and the energy that the devices use. Then, we can think about home web routers, distributed networks, the data centers, and the hubs. “The devices are actually the least energy-efficient out of that,” concluded Hemmer.

    Some presenters had different perspectives. Several called for developing dedicated silicon chipsets for efficiency. However, panel moderator Muriel Medard, the Cecil H. Green Professor in EECS, described research at MIT, Boston University, and Maynooth University on the GRAND (Guessing Random Additive Noise Decoding) chip, saying, “rather than having obsolescence of chips as the new codes come in and in different standards, you can use one chip for all codes.”

    Whatever the chip or new algorithm, Helen Greiner, CEO of Tertill (a weeding robot) and co-founder of iRobot, emphasized that to get products to market, “We have to learn to go away from wanting to get the absolute latest and greatest, the most advanced processor that usually is more expensive.” She added, “I like to say robot demos are a dime a dozen, but robot products are very infrequent.”

    Greiner emphasized that consumers can play a role in pushing for more energy-efficient products — just as drivers began to demand electric cars.

    Dean also sees an environmental role for the end user. “We have enabled our cloud customers to select which cloud region they want to run their computation in, and they can decide how important it is that they have a low carbon footprint,” he said, also citing other interfaces that might allow consumers to decide which air flights are more efficient or what impact installing a solar panel on their home would have.

    However, Scharp said, “Prolonging the life of your smartphone or tablet is really the best climate action you can do if you want to reduce your digital carbon footprint.”

    Facing increasing demands

    Despite their optimism, the presenters acknowledged the world faces increasing compute demand from machine learning, AI, gaming, and especially blockchain. Panel moderator Vivienne Sze, associate professor in EECS, noted the conundrum.

    “We can do a great job in making computing and communication really efficient. But there is this tendency that once things are very efficient, people use more of it, and this might result in an overall increase in the usage of these technologies, which will then increase our overall carbon footprint,” Sze said.

    Presenters saw great potential in academic/industry partnerships, particularly from research efforts on the academic side. “By combining these two forces together, you can really amplify the impact,” concluded Gousev.

    Presenters at the Climate Implications of Computing and Communications workshop also included: Joel Emer, professor of the practice in EECS at MIT; David Perreault, the Joseph F. and Nancy P. Keithley Professor of EECS at MIT; Jesús del Alamo, MIT Donner Professor and professor of electrical engineering in EECS at MIT; Heike Riel, IBM Fellow and head of science and technology at IBM; and Takashi Ando, principal research staff member at IBM Research. The recorded workshop sessions are available on YouTube.

  • From seawater to drinking water, with the push of a button

    MIT researchers have developed a portable desalination unit, weighing less than 10 kilograms, that can remove particles and salts to generate drinking water.

    The suitcase-sized device, which requires less power to operate than a cell phone charger, can also be driven by a small, portable solar panel, which can be purchased online for around $50. It automatically generates drinking water that exceeds World Health Organization quality standards. The technology is packaged into a user-friendly device that runs with the push of one button.

    Unlike other portable desalination units that require water to pass through filters, this device utilizes electrical power to remove particles from drinking water. Eliminating the need for replacement filters greatly reduces the long-term maintenance requirements.

    This could enable the unit to be deployed in remote and severely resource-limited areas, such as communities on small islands or aboard seafaring cargo ships. It could also be used to aid refugees fleeing natural disasters or by soldiers carrying out long-term military operations.

    “This is really the culmination of a 10-year journey that I and my group have been on. We worked for years on the physics behind individual desalination processes, but pushing all those advances into a box, building a system, and demonstrating it in the ocean, that was a really meaningful and rewarding experience for me,” says senior author Jongyoon Han, a professor of electrical engineering and computer science and of biological engineering, and a member of the Research Laboratory of Electronics (RLE).

    Joining Han on the paper are first author Junghyo Yoon, a research scientist in RLE; Hyukjin J. Kwon, a former postdoc; SungKu Kang, a postdoc at Northeastern University; and Eric Brack of the U.S. Army Combat Capabilities Development Command (DEVCOM). The research has been published online in Environmental Science and Technology.


    Filter-free technology

    Commercially available portable desalination units typically require high-pressure pumps to push water through filters, which are very difficult to miniaturize without compromising the energy-efficiency of the device, explains Yoon.

    Instead, their unit relies on a technique called ion concentration polarization (ICP), which was pioneered by Han’s group more than 10 years ago. Rather than filtering water, the ICP process applies an electrical field to membranes placed above and below a channel of water. The membranes repel positively or negatively charged particles — including salt molecules, bacteria, and viruses — as they flow past. The charged particles are funneled into a second stream of water that is eventually discharged.

    The process removes both dissolved and suspended solids, allowing clean water to pass through the channel. Since it only requires a low-pressure pump, ICP uses less energy than other techniques.

    But ICP does not always remove all the salts floating in the middle of the channel. So the researchers incorporated a second process, known as electrodialysis, to remove remaining salt ions.

    Yoon and Kang used machine learning to find the ideal combination of ICP and electrodialysis modules. The optimal setup includes a two-stage ICP process, with water flowing through six modules in the first stage then through three in the second stage, followed by a single electrodialysis process. This minimized energy usage while ensuring the process remains self-cleaning.
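    As a rough illustration of what such a configuration search looks like (the article does not give the team's objective function, so the salinity and energy models below are made-up placeholders, not the authors' physics): enumerate candidate layouts and keep the lowest-energy one that still meets a salinity target.

    ```python
    # Toy search over desalination layouts: ICP stage sizes plus ED passes.
    # salinity_out() and energy() are invented placeholders for illustration.
    from itertools import product

    def salinity_out(m1, m2, ed):
        # Pretend each module strips a fixed fraction of remaining salt (g/L).
        return 35.0 * 0.7 ** m1 * 0.7 ** m2 * 0.3 ** ed

    def energy(m1, m2, ed):
        return 0.5 * (m1 + m2) + 2.0 * ed  # pretend per-module pumping cost

    candidates = [
        (m1, m2, ed)
        for m1, m2, ed in product(range(1, 9), range(1, 9), (1, 2))
        if salinity_out(m1, m2, ed) < 0.5  # drinkability threshold (illustrative)
    ]
    best = min(candidates, key=lambda c: energy(*c))
    print(best, energy(*best))
    ```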

    “While it is true that some charged particles could be captured on the ion exchange membrane, if they get trapped, we just reverse the polarity of the electric field and the charged particles can be easily removed,” Yoon explains.

    They shrank and stacked the ICP and electrodialysis modules to improve their energy efficiency and enable them to fit inside a portable device. The researchers designed the device for nonexperts, with just one button to launch the automatic desalination and purification process. Once the salinity level and the number of particles decrease to specific thresholds, the device notifies the user that the water is drinkable.

    The researchers also created a smartphone app that can control the unit wirelessly and report real-time data on power consumption and water salinity.

    Beach tests

    After running lab experiments using water with different salinity and turbidity (cloudiness) levels, they field-tested the device at Boston’s Carson Beach.

    Yoon and Kwon set the box near the shore and tossed the feed tube into the water. In about half an hour, the device had filled a plastic drinking cup with clear, drinkable water.

    “It was successful even in its first run, which was quite exciting and surprising. But I think the main reason we were successful is the accumulation of all these little advances that we made along the way,” Han says.

    The resulting water exceeded World Health Organization quality guidelines, and the unit reduced the amount of suspended solids by at least a factor of 10. Their prototype generates drinking water at a rate of 0.3 liters per hour, and requires only 20 watts of power per liter.

    “Right now, we are pushing our research to scale up that production rate,” Yoon says.

    One of the biggest challenges of designing the portable system was engineering an intuitive device that could be used by anyone, Han says.

    Yoon hopes to make the device more user-friendly and improve its energy efficiency and production rate through a startup he plans to launch to commercialize the technology.

    In the lab, Han wants to apply the lessons he’s learned over the past decade to water-quality issues that go beyond desalination, such as rapidly detecting contaminants in drinking water.

    “This is definitely an exciting project, and I am proud of the progress we have made so far, but there is still a lot of work to do,” he says.

    For example, while “development of portable systems using electro-membrane processes is an original and exciting direction in off-grid, small-scale desalination,” the effects of fouling, especially if the water has high turbidity, could significantly increase maintenance requirements and energy costs, notes Nidal Hilal, professor of engineering and director of the New York University Abu Dhabi Water research center, who was not involved with this research.

    “Another limitation is the use of expensive materials,” he adds. “It would be interesting to see similar systems with low-cost materials in place.”

    The research was funded, in part, by the DEVCOM Soldier Center, the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS), the Experimental AI Postdoc Fellowship Program of Northeastern University, and the Roux AI Institute.

  • Machine learning, harnessed to extreme computing, aids fusion energy development

    MIT research scientists Pablo Rodriguez-Fernandez and Nathan Howard have just completed one of the most demanding calculations in fusion science — predicting the temperature and density profiles of a magnetically confined plasma via first-principles simulation of plasma turbulence. Solving this problem by brute force is beyond the capabilities of even the most advanced supercomputers. Instead, the researchers used an optimization methodology developed for machine learning to dramatically reduce the CPU time required while maintaining the accuracy of the solution.

    Fusion energy

    Fusion offers the promise of unlimited, carbon-free energy through the same physical process that powers the sun and the stars. It requires heating the fuel to temperatures above 100 million degrees, well above the point where the electrons are stripped from their atoms, creating a form of matter called plasma. On Earth, researchers use strong magnetic fields to isolate and insulate the hot plasma from ordinary matter. The stronger the magnetic field, the better the quality of the insulation that it provides.

    Rodriguez-Fernandez and Howard have focused on predicting the performance expected in the SPARC device, a compact, high-magnetic-field fusion experiment, currently under construction by the MIT spin-out company Commonwealth Fusion Systems (CFS) and researchers from MIT’s Plasma Science and Fusion Center. While the calculation required an extraordinary amount of computer time, over 8 million CPU-hours, what was remarkable was not how much time was used, but how little, given the daunting computational challenge.

    The computational challenge of fusion energy

    Turbulence, which is the mechanism for most of the heat loss in a confined plasma, is one of the science’s grand challenges and the greatest problem remaining in classical physics. The equations that govern fusion plasmas are well known, but analytic solutions are not possible in the regimes of interest, where nonlinearities are important and solutions encompass an enormous range of spatial and temporal scales. Scientists resort to solving the equations by numerical simulation on computers. It is no accident that fusion researchers have been pioneers in computational physics for the last 50 years.

    One of the fundamental problems for researchers is reliably predicting plasma temperature and density given only the magnetic field configuration and the externally applied input power. In confinement devices like SPARC, the external power and the heat input from the fusion process are lost through turbulence in the plasma. The turbulence itself is driven by the difference in the extremely high temperature of the plasma core and the relatively cool temperatures of the plasma edge (merely a few million degrees). Predicting the performance of a self-heated fusion plasma therefore requires a calculation of the power balance between the fusion power input and the losses due to turbulence.

    These calculations generally start by assuming plasma temperature and density profiles at a particular location, then computing the heat transported locally by turbulence. However, a useful prediction requires a self-consistent calculation of the profiles across the entire plasma, which includes both the heat input and turbulent losses. Directly solving this problem is beyond the capabilities of any existing computer, so researchers have developed an approach that stitches the profiles together from a series of demanding but tractable local calculations. This method works, but since the heat and particle fluxes depend on multiple parameters, the calculations can be very slow to converge.

    However, techniques emerging from the field of machine learning are well suited to optimize just such a calculation. Starting with a set of computationally intensive local calculations run with the full-physics, first-principles CGYRO code (provided by a team from General Atomics led by Jeff Candy), Rodriguez-Fernandez and Howard fit a surrogate mathematical model, which was used to explore and optimize a search within the parameter space. The results of the optimization were compared to the exact calculations at each optimum point, and the system was iterated to a desired level of accuracy. The researchers estimate that the technique reduced the number of runs of the CGYRO code by a factor of four.
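    In outline, this is the familiar surrogate-optimization pattern: fit a cheap model to a handful of expensive runs, optimize the surrogate, verify at the proposed optimum, and repeat. A generic sketch with scikit-learn, where the one-dimensional objective merely stands in for a CGYRO flux calculation (this is not the authors' code):

    ```python
    # Generic surrogate-optimization loop: expensive_flux stands in for a full
    # first-principles turbulence run; a GP surrogate proposes where to run next.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    def expensive_flux(x):
        # Placeholder objective (e.g., mismatch between transported and input power).
        return (x - 0.37) ** 2 + 0.05 * np.sin(25 * x)

    X = np.linspace(0.0, 1.0, 5).reshape(-1, 1)  # a few initial expensive runs
    y = expensive_flux(X).ravel()

    for _ in range(10):
        surrogate = GaussianProcessRegressor(normalize_y=True).fit(X, y)
        grid = np.linspace(0.0, 1.0, 401).reshape(-1, 1)
        x_new = grid[np.argmin(surrogate.predict(grid))]  # optimize the surrogate
        X = np.vstack([X, [x_new]])                       # verify with the real code
        y = np.append(y, expensive_flux(x_new))

    print(X[np.argmin(y)][0], y.min())  # good optimum from few expensive calls
    ```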

    New approach increases confidence in predictions

    This work, described in a recent publication in the journal Nuclear Fusion, is the highest fidelity calculation ever made of the core of a fusion plasma. It refines and confirms predictions made with less demanding models. Professor Jonathan Citrin, of the Eindhoven University of Technology and leader of the fusion modeling group for DIFFER, the Dutch Institute for Fundamental Energy Research, commented: “The work significantly accelerates our capabilities in more routinely performing ultra-high-fidelity tokamak scenario prediction. This algorithm can help provide the ultimate validation test of machine design or scenario optimization carried out with faster, more reduced modeling, greatly increasing our confidence in the outcomes.”

    In addition to increasing confidence in the fusion performance of the SPARC experiment, this technique provides a roadmap to check and calibrate reduced physics models, which run with a small fraction of the computational power. Such models, cross-checked against the results generated from turbulence simulations, will provide a reliable prediction before each SPARC discharge, helping to guide experimental campaigns and improving the scientific exploitation of the device. It can also be used to tweak and improve even simple data-driven models, which run extremely quickly, allowing researchers to sift through enormous parameter ranges to narrow down possible experiments or possible future machines.

    The research was funded by CFS, with computational support from the National Energy Research Scientific Computing Center, a U.S. Department of Energy Office of Science User Facility.

  • Using excess heat to improve electrolyzers and fuel cells

    Reducing the use of fossil fuels will have unintended consequences for the power-generation industry and beyond. For example, many industrial chemical processes use fossil-fuel byproducts as precursors to things like asphalt, glycerine, and other important chemicals. One solution to reduce the impact of the loss of fossil fuels on industrial chemical processes is to store and use the heat that nuclear fission produces. New MIT research has dramatically improved a way to put that heat toward generating chemicals through a process called electrolysis. 

    Electrolyzers are devices that use electricity to split water (H2O) and generate molecules of hydrogen (H2) and oxygen (O2). Hydrogen is used in fuel cells to generate electricity and drive electric cars or drones or in industrial operations like the production of steel, ammonia, and polymers. Electrolyzers can also take in water and carbon dioxide (CO2) and produce oxygen and ethylene (C2H4), a chemical used in polymers and elsewhere.

    There are three main types of electrolyzers. One type works at room temperature, but has downsides: such devices are inefficient and require rare metals, such as platinum. A second type is more efficient but runs at high temperatures, above 700 degrees Celsius. But metals corrode at that temperature, and the devices need expensive sealing and insulation. The third type would be a Goldilocks solution for nuclear heat if it were perfected, running at 300 to 600 degrees Celsius and requiring mostly cheap materials like stainless steel. These cells have never been operated as efficiently as theory says they should be. The new work, published this month in Nature, both illuminates the problem and offers a solution.

    A sandwich mystery

    The intermediate-temperature devices use what are called protonic ceramic electrochemical cells. Each cell is a sandwich, with a dense electrolyte layered between two porous electrodes. Water vapor is pumped into the top electrode. A wire on the side connects the two electrodes, and externally generated electricity runs from the top to the bottom. The voltage pulls electrons out of the water, which splits the molecule, releasing oxygen. A hydrogen atom without an electron is just a proton. The protons get pulled through the electrolyte to rejoin with the electrons at the bottom electrode and form H2 molecules, which are then collected.
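    The hydrogen yield implied by a given current follows directly from Faraday's law, since each H2 molecule requires two electrons. A back-of-the-envelope sketch (the 10-ampere cell current is a hypothetical figure, not from the paper):

    ```python
    # Faraday's law: moles of H2 per second = current / (2 * F).
    F = 96485.0  # Faraday constant, C per mol of electrons

    def h2_rate_mol_per_s(current_amps):
        return current_amps / (2.0 * F)  # two electrons per H2 molecule

    current = 10.0  # hypothetical cell current, in amperes
    grams_per_hour = h2_rate_mol_per_s(current) * 3600 * 2.016  # M(H2) = 2.016 g/mol
    print(round(grams_per_hour, 3), "g of H2 per hour at 10 A")  # ~0.376
    ```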

    On its own, the electrolyte in the middle, made mainly of barium, cerium, and zirconium, conducts protons very well. “But when we put the same material into this three-layer device, the proton conductivity of the full cell is pretty bad,” says Yanhao Dong, a postdoc in MIT’s Department of Nuclear Science and Engineering and a paper co-author. “Its conductivity is only about 50 percent of the bulk form’s. We wondered why there’s an inconsistency here.”

    A couple of clues pointed them in the right direction. First, if they don’t prepare the cell very carefully, the top layer, only about 20 microns (0.02 millimeters) thick, doesn’t stay attached. “Sometimes if you use just Scotch tape, it will peel off,” Dong says. Second, when they looked at a cross section of a device using a scanning electron microscope, they saw that the top surface of the electrolyte layer was flat, whereas the bottom surface of the porous electrode sitting on it was bumpy, and the two came into contact in only a few places. They didn’t bond well. That precarious interface leads to both structural delamination and poor proton passage from the electrode to the electrolyte.

    Acidic solution

    The solution turned out to be simple: researchers roughed up the top of the electrolyte. Specifically, they applied acid for 10 minutes, which etched grooves into the surface. Ju Li, the Battelle Energy Alliance Professor in Nuclear Engineering and professor of materials science and engineering at MIT, and a paper co-author, likens it to sandblasting a surface before applying paint to increase adhesion. Their acid-treated cells produced about 200 percent more hydrogen per area at 1.5 volts at 600 C than did any previous cell of its type, and worked well down to 350 C with very little performance decay over extended operation. 

    “The authors reported a surprisingly simple yet highly effective surface treatment to dramatically improve the interface,” says Liangbing Hu, the director of the Center for Materials Innovation at the Maryland Energy Innovation Institute, who was not involved in the work. He calls the cell performance “exceptional.”

    “We are excited and surprised” by the results, Dong says. “The engineering solution seems quite simple. And that’s actually good, because it makes it very applicable to real applications.” In a practical product, many such cells would be stacked together to form a module. MIT’s partner in the project, Idaho National Laboratory, is very strong in engineering and prototyping, so Li expects to see electrolyzers based on this technology at scale before too long. “At the materials level, this is a breakthrough that shows that at a real-device scale you can work at this sweet spot of temperature of 350 to 600 degrees Celsius for nuclear fission and fusion reactors,” he says.

    “Reduced operating temperature enables cheaper materials for the large-scale assembly, including the stack,” says Idaho National Laboratory researcher and paper co-author Dong Ding. “The technology operates within the same temperature range as several important, current industrial processes, including ammonia production and CO2 reduction. Matching these temperatures will expedite the technology’s adoption within the existing industry.”

    “This is very significant for both Idaho National Lab and us,” Li adds, “because it bridges nuclear energy and renewable electricity.” He notes that the technology could also help fuel cells, which are basically electrolyzers run in reverse, using green hydrogen or hydrocarbons to generate electricity. According to Wei Wu, a materials scientist at Idaho National Laboratory and a paper co-author, “this technique is quite universal and compatible with other solid electrochemical devices.”

    Dong says it’s rare for a paper to advance both science and engineering to such a degree. “We are happy to combine those together and get both very good scientific understanding and also very good real-world performance.”

    This work, done in collaboration with Idaho National Laboratory, New Mexico State University, and the University of Nebraska–Lincoln, was funded, in part, by the U.S. Department of Energy.

  • Five MIT PhD students awarded 2022 J-WAFS fellowships for water and food solutions

    The Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) recently announced the selection of its 2022-23 cohort of graduate fellows. Two students were named Rasikbhai L. Meswani Fellows for Water Solutions and three students were named J-WAFS Graduate Student Fellows. All five fellows will receive full tuition and a stipend for one semester, and J-WAFS will support the students throughout the 2022-23 academic year by providing networking, mentorship, and opportunities to showcase their research.

    New this year, fellowship nominations were open not only to students pursuing water research, but food-related research as well. The five students selected were chosen for their commitment to solutions-based research that aims to alleviate problems such as water supply or purification, food security, or agriculture. Their projects exemplify the wide range of research that J-WAFS supports, from enhancing nutrition through improved methods to deliver micronutrients to developing high-performance drip irrigation technology. The strong applicant pool reflects the passion MIT students have to address the water and food crises currently facing the planet.

    “This year’s fellows are drawn from a dynamic and engaged community across the Institute whose creativity and ingenuity are pushing forward transformational water and food solutions,” says J-WAFS executive director Renee J. Robins. “We congratulate these students as we recognize their outstanding achievements and their promise as up-and-coming leaders in global water and food sectors.”

    2022-23 Rasikbhai L. Meswani Fellows for Water Solutions

    The Rasikbhai L. Meswani Fellowship for Water Solutions supports students pursuing water-related research at MIT. It was made possible by a generous gift from Elina and Nikhil Meswani and family.

    Aditya Ghodgaonkar is a PhD candidate in the Department of Mechanical Engineering at MIT, where he works in the Global Engineering and Research (GEAR) Lab under Professor Amos Winter. Ghodgaonkar received a bachelor’s degree in mechanical engineering from the RV College of Engineering in India. He then moved to the United States and received a master’s degree in mechanical engineering from Purdue University.

    Ghodgaonkar is currently designing hydraulic components for drip irrigation that could support the development of water-efficient irrigation systems that are off-grid, inexpensive, and low-maintenance. He has focused on designing drip irrigation emitters that are resistant to clogging, seeking inspiration about flow regulation from marine fauna such as manta rays, as well as turbomachinery concepts. Ghodgaonkar notes that clogging is currently an expensive technical challenge to diagnose, mitigate, and resolve. With an eye on hundreds of millions of farms in developing countries, he aims to bring the benefits of irrigation technology to even the poorest farmers.

    Outside of his research, Ghodgaonkar is a mentor in MIT Makerworks and has been a teaching assistant for classes such as 2.007 (Design and Manufacturing I). He also helped organize the annual MIT Water Summit last fall.

    Devashish Gokhale is a PhD candidate advised by Professor Patrick Doyle in the Department of Chemical Engineering. He received a bachelor’s degree in chemical engineering from the Indian Institute of Technology Madras, where he researched fluid flow in energy-efficient pumps. Gokhale’s commitment to global water security stemmed from his experience growing up in India, where water sources are threatened by population growth, industrialization, and climate change.

    As a researcher in the Doyle group, Gokhale is developing sustainable and reusable materials for water treatment, with a focus on the elimination of emerging contaminants and other micropollutants from water through cost-effective processes. Many of these contaminants are carcinogens or endocrine disruptors, posing significant threats to both humans and animals. His advisor notes that Gokhale was the first researcher in the Doyle group to work on water purification, bringing his passion for the topic to the lab.

    Gokhale’s research won an award for potential scalability in last year’s J-WAFS World Water Day competition. He also serves as the lecture series chair in the MIT Water Club.

    2022-23 J-WAFS Graduate Student Fellows

    The J-WAFS Fellowship for Water and Food Solutions is funded by the J-WAFS Research Affiliate Program, which offers companies the opportunity to collaborate with MIT on water and food research. A portion of each research affiliate’s fees supports this fellowship. The program is central to J-WAFS’ efforts to engage across sector and disciplinary boundaries in solving real-world problems. Currently, there are two J-WAFS Research Affiliates: Xylem, Inc., a water technology company, and GoAigua, a company leading the digital transformation of the water industry.

    James Zhang is a PhD candidate in the Department of Mechanical Engineering at MIT, where he has worked in the NanoEngineering Laboratory with Professor Gang Chen since 2019. As an undergraduate at Carnegie Mellon University, he double majored in mechanical engineering and engineering public policy. He then received a master’s degree in mechanical engineering from MIT. In addition to working in the NanoEngineering Laboratory, Zhang has also worked in the Zhao Laboratory and in the Boriskina Research Group at MIT.

    Zhang is developing a technology that uses light-induced evaporation to clean water. He is currently investigating the fundamental properties of how light interacts with brackish water surfaces. With strong theoretical as well as experimental components, his research could lead to innovations in desalinating brackish water at high energy efficiencies. Outside of his research, Zhang has served as a student moderator for the MIT International Colloquia on Thermal Innovations.

    Katharina Fransen is a PhD candidate advised by Professor Bradley Olsen in the Department of Chemical Engineering at MIT. She received a bachelor’s degree in chemical engineering from the University of Minnesota, where she was involved in the Society of Women Engineers. Fransen is motivated by the challenge of protecting the most vulnerable global communities from the large quantities of plastic waste associated with traditional food packaging materials.

    As a researcher in the Olsen Lab, Fransen is developing new plastics that are biologically based and biodegradable, so they can degrade in the environment instead of polluting communities with plastic waste. These polymers are also optimized for food packaging applications to keep food fresher for longer, preventing food waste.

    Outside of her research, Fransen is involved in Diversity in Chemical Engineering as the coordinator for the graduate application mentorship program for underrepresented groups. She is also an active member of Graduate Womxn in ChemE and mentors an Undergraduate Research Opportunities Program student.

    Linzixuan (Rhoda) Zhang is a PhD candidate advised by Professor Robert Langer and Ana Jaklenec in the Department of Chemical Engineering at MIT. She received a bachelor’s degree in chemical engineering from the University of Illinois at Urbana-Champaign, where she researched how to genetically engineer microorganisms for the efficient production of advanced biofuels and chemicals.

    Zhang is currently developing a micronutrient delivery platform that fortifies foods with essential vitamins and nutrients. She has helped develop a group of biodegradable polymers that can stabilize micronutrients under harsh conditions, enabling local food companies to fortify food with essential vitamins. This work aims to tackle a hidden crisis in low- and middle-income countries, where a chronic lack of essential micronutrients affects an estimated 2 billion people.

    Zhang is also working on the development of self-boosting vaccines to promote more widespread vaccine access and serves as a research mentor in the Langer Lab.