More stories

  • Designing zeolites, porous materials made to trap molecules

    Zeolites are a class of minerals used in everything from industrial catalysts and chemical filters to laundry detergents and cat litter. They are mostly composed of silicon and aluminum — two abundant, inexpensive elements — plus oxygen; they have a crystalline structure; and most significantly, they are porous. Among the regularly repeating atomic patterns in them are tiny interconnected openings, or pores, that can trap molecules that just fit inside them, allow smaller ones to pass through, or block larger ones from entering. A zeolite can remove unwanted molecules from gases and liquids, or trap them temporarily and then release them, or hold them while they undergo rapid chemical reactions.

    Some zeolites occur naturally, but they take unpredictable forms and have variable-sized pores. “People synthesize artificial versions to ensure absolute purity and consistency,” says Rafael Gómez-Bombarelli, the Jeffrey Cheah Career Development Chair in Engineering in the Department of Materials Science and Engineering (DMSE). And they work hard to influence the size of the internal pores in hopes of matching the molecule or other particle they’re looking to capture.

    The basic recipe for making zeolites sounds simple. Mix together the raw ingredients — basically, silicon dioxide and aluminum oxide — and put them in a reactor for a few days at a high temperature and pressure. Depending on the ratio between the ingredients and the temperature, pressure, and timing, as the initial gel slowly solidifies into crystalline form, different zeolites emerge.

    But there’s one special ingredient to add “to help the system go where you want it to go,” says Gómez-Bombarelli. “It’s a molecule that serves as a template so that the zeolite you want will crystallize around it and create pores of the desired size and shape.”

    The so-called templating molecule binds to the material before it solidifies. As crystallization progresses, the molecule directs the structure, or “framework,” that forms around it. After crystallization, the temperature is raised and the templating molecule burns off, leaving behind a solid aluminosilicate material filled with open pores that are — given the correct templating molecule and synthesis conditions — just the right size and shape to recognize the targeted molecule.

    The zeolite conundrum

    Theoretical studies suggest that there should be hundreds of thousands of possible zeolites. But despite some 60 years of intensive research, only about 250 zeolites have been made. This is sometimes called the “zeolite conundrum.” Why haven’t more been made — especially now, when they could help ongoing efforts to decarbonize energy and the chemical industry?

    One challenge is figuring out the best recipe for making them: Factors such as the best ratio between the silicon and aluminum, what cooking temperature to use, and whether to stir the ingredients all influence the outcome. But the real key, the researchers say, lies in choosing a templating molecule that’s best for producing the intended zeolite framework. Making that match is difficult: There are hundreds of known templating molecules and potentially a million zeolites, and researchers are continually designing new molecules because millions more could be made and might work better.

    For decades, the exploration of how to synthesize a particular zeolite has been done largely by trial and error — a time-consuming, expensive, inefficient way to go about it. There has also been considerable effort to use “atomistic” (atom-by-atom) simulation to figure out what known or novel templating molecule to use to produce a given zeolite. But the experimental and modeling results haven’t generated reliable guidance. In many cases, researchers have carefully selected or designed a molecule to make a particular zeolite, but when they tried their molecule in the lab, the zeolite that formed wasn’t what they expected or desired. So they needed to start over.

    Those experiences illustrate what Gómez-Bombarelli and his colleagues believe is the problem that’s been plaguing zeolite design for decades. All the efforts — both experimental and theoretical — have focused on finding the templating molecule that’s best for forming a specific zeolite. But what if that templating molecule is also really good — or even better — at forming some other zeolite?

    To determine the “best” molecule for making a certain zeolite framework, and the “best” zeolite framework to act as host to a particular molecule, the researchers decided to look at both sides of the pairing. Daniel Schwalbe-Koda PhD ’22, a former member of Gómez-Bombarelli’s group and now a postdoc at Lawrence Livermore National Laboratory, describes the process as a sort of dance with molecules and zeolites in a room looking for partners. “Each molecule wants to find a partner zeolite, and each zeolite wants to find a partner molecule,” he says. “But it’s not enough to find a good dance partner from the perspective of only one dancer. The potential partner could prefer to dance with someone else, after all. So it needs to be a particularly good pairing.” The upshot: “You need to look from the perspective of each of them.”

    To find the best match from both perspectives, the researchers needed to try every molecule with every zeolite and quantify how well the pairings worked.

    A broader metric for evaluating pairs

    Before performing that analysis, the researchers defined a new “evaluating metric” that they could use to rank each templating molecule-zeolite pair. The standard metric for measuring the affinity between a molecule and a zeolite is “binding energy,” that is, how strongly the molecule clings to the zeolite or, conversely, how much energy is required to separate the two. While recognizing the value of that metric, the MIT-led team wanted to take more parameters into account.

    Their new evaluating metric therefore includes not only binding energy but also the size, shape, and volume of the molecule and the opening in the zeolite framework. And their approach calls for turning the molecule to different orientations to find the best possible fit.

    Affinity scores for all molecule-zeolite pairs based on that evaluating metric would enable zeolite researchers to answer two key questions: What templating molecule will form the zeolite that I want? And if I use that templating molecule, what other zeolites might it form instead? Using the molecule-zeolite affinity scores, researchers could first identify molecules that look good for making a desired zeolite. They could then rule out the ones that also look good for forming other zeolites, leaving a set of molecules deemed to be “highly selective” for making the desired zeolite.  
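    In code, that two-sided screen might look something like the following sketch, which assumes a precomputed matrix of affinity scores; the molecule names, score scale, and selectivity-gap heuristic are illustrative stand-ins, not the team's actual software:

    ```python
    import numpy as np

    # Hypothetical affinity matrix: rows = templating molecules, columns = zeolite frameworks.
    # Higher score = better fit under the combined metric (binding energy plus size/shape/volume).
    rng = np.random.default_rng(0)
    molecules = [f"mol_{i}" for i in range(6)]
    zeolites = ["CHA", "AEI", "MFI", "FAU"]
    affinity = rng.uniform(0.0, 1.0, size=(len(molecules), len(zeolites)))

    target = zeolites.index("CHA")

    # A molecule is "selective" for the target if its affinity there is high
    # AND clearly higher than its best affinity for any competing framework.
    competing = np.delete(affinity, target, axis=1)
    selectivity_gap = affinity[:, target] - competing.max(axis=1)

    ranked = sorted(zip(molecules, affinity[:, target], selectivity_gap),
                    key=lambda row: row[2], reverse=True)

    for name, score, gap in ranked:
        tag = "selective" if gap > 0 else "risks forming another zeolite"
        print(f"{name}: affinity for target = {score:.2f}, selectivity gap = {gap:+.2f} ({tag})")
    ```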

    Validating the approach: A rich literature

    But does their new metric work better than the standard one? To find out, the team needed to perform atomistic simulations using their new evaluating metric and then benchmark their results against experimental evidence reported in the literature. There are many thousands of journal articles reporting on experiments involving zeolites — in many cases, detailing not only the molecule-zeolite pairs and outcomes but also synthesis conditions and other details. Ferreting out articles with the information the researchers needed was a job for machine learning — in particular, for natural language processing.

    For that task, Gómez-Bombarelli and Schwalbe-Koda turned to their DMSE colleague Elsa Olivetti PhD ’07, the Esther and Harold E. Edgerton Associate Professor in Materials Science and Engineering. Using a literature-mining technique that she and a group of collaborators had developed, she and her DMSE team processed more than 2 million materials science papers, found some 90,000 relating to zeolites, and extracted 1,338 of them for further analysis. The yield was 549 templating molecules tested, 209 zeolite frameworks produced, and 5,663 synthesis routes followed.

    Based on those findings, the researchers used their new evaluating metric and a novel atomistic simulation technique to examine more than half-a-million templating molecule-zeolite pairs. Their results reproduced experimental outcomes reported in more than a thousand journal articles. Indeed, the new metric outperformed the traditional binding energy metric, and their simulations were orders of magnitude faster than traditional approaches.

    Ready for experimental investigations

    Now the researchers were ready to put their approach to the test: They would use it to design new templating molecules and try them out in experiments performed by a team led by Yuriy Román-Leshkov, the Robert T. Haslam (1911) Professor of Chemical Engineering, and a team from the Instituto de Tecnologia Química in Valencia, Spain, led by Manuel Moliner and Avelino Corma.

    One set of experiments focused on a zeolite called chabazite, which is used in catalytic converters for vehicles. Using their techniques, the researchers designed a new templating molecule for synthesizing chabazite, and the experimental results confirmed their approach. Their analyses had shown that the new templating molecule would be good for forming chabazite and not for forming anything else. “Its binding strength isn’t as high as other molecules for chabazite, so people hadn’t used it,” says Gómez-Bombarelli. “But it’s pretty good, and it’s not good for anything else, so it’s selective — and it’s way cheaper than the usual ones.”

    In addition, in their new molecule, the electrical charge is distributed differently than in the traditional ones, which led to new possibilities. The researchers found that by adjusting both the shape and charge of the molecule, they could control where the negative charge occurs on the pore that’s created in the final zeolite. “The charge placement that results can make the chabazite a much better catalyst than it was before,” says Gómez-Bombarelli. “So our same rules for molecule design also determine where the negative charge is going to end up, which can lead to whole different classes of catalysts.”

    Schwalbe-Koda describes another experiment that demonstrates the importance of molecular shape as well as the types of new materials made possible using the team’s approach. In one striking example, the team designed a templating molecule with a height and width that’s halfway between those of two molecules that are now commonly used — one for making chabazite and the other for making a zeolite called AEI. (Every new zeolite structure is examined by the International Zeolite Association and — once approved — receives a three-letter designation.)

    Experiments using that in-between templating molecule resulted in the formation of not one zeolite or the other, but a combination of the two in a single solid. “The result blends two different structures together in a way that the final result is better than the sum of its parts,” says Schwalbe-Koda. “The catalyst is like the one used in catalytic converters in today’s trucks — only better.” It’s more efficient in converting nitrogen oxides to harmless nitrogen gases and water, and — because of the two different pore sizes and the aluminosilicate composition — it works well on exhaust that’s fairly hot, as during normal operation, and also on exhaust that’s fairly cool, as during startup.

    Putting the work into practice

    As with all materials, the commercial viability of a zeolite will depend in part on the cost of making it. The researchers’ technique can identify promising templating molecules, but some of them may be difficult to synthesize in the lab. As a result, the overall cost of that molecule-zeolite combination may be too high to be competitive.

    Gómez-Bombarelli and his team therefore include in their assessment process a calculation of cost for synthesizing each templating molecule they identified — generally the most expensive part of making a given zeolite. They use a publicly available model devised in 2018 by Connor Coley PhD ’19, now the Henri Slezynger (1957) Career Development Assistant Professor of Chemical Engineering at MIT. The model takes into account all the starting materials and the step-by-step chemical reactions needed to produce the targeted templating molecule.

    However, commercialization decisions aren’t based solely on cost. Sometimes there’s a trade-off between cost and performance. “For instance, given our chabazite findings, would customers or the community trade a little bit of activity for a 100-fold decrease in the cost of the templating molecule?” says Gómez-Bombarelli. “The answer is likely yes. So we’ve made a tool that can help them navigate that trade-off.” And there are other factors to consider. For example, is this templating molecule truly novel, or have others already studied it — or perhaps even hold a patent on it?

    “While an algorithm can guide development of templating molecules and quantify specific molecule-zeolite matches, other types of assessments are best left to expert judgment,” notes Schwalbe-Koda. “We need a partnership between computational analysis and human intuition and experience.”

    To that end, the MIT researchers and their colleagues decided to share their techniques and findings with other zeolite researchers. Led by Schwalbe-Koda, they created an online database that they made publicly accessible and easy to use — an unusual step, given the competitive industries that rely on zeolites. The interactive website — zeodb.mit.edu — contains the researchers’ final metrics for templating molecule-zeolite pairs resulting from hundreds of thousands of simulations; all the identified journal articles, along with which molecules and zeolites were examined and what synthesis conditions were used; and many more details. Users are free to search and organize the data in any way that suits them.

    Gómez-Bombarelli, Schwalbe-Koda, and their colleagues hope that their techniques and the interactive website will help other researchers explore and discover promising new templating molecules and zeolites, some of which could have profound impacts on efforts to decarbonize energy and tackle climate change.

    This research involved a team of collaborators at MIT, the Instituto de Tecnologia Química (UPV-CSIC), and Stockholm University. The work was supported in part by the MIT Energy Initiative Seed Fund Program and by seed funds from the MIT International Science and Technology Initiative. Daniel Schwalbe-Koda was supported by an ExxonMobil-MIT Energy Fellowship in 2020–21.

    This article appears in the Spring 2022 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • Taking a magnifying glass to data center operations

    When the MIT Lincoln Laboratory Supercomputing Center (LLSC) unveiled its TX-GAIA supercomputer in 2019, it provided the MIT community a powerful new resource for applying artificial intelligence to their research. Anyone at MIT can submit a job to the system, which churns through trillions of operations per second to train models for diverse applications, such as spotting tumors in medical images, discovering new drugs, or modeling climate effects. But with this great power comes the great responsibility of managing and operating it in a sustainable manner — and the team is looking for ways to improve.

    “We have these powerful computational tools that let researchers build intricate models to solve problems, but they can essentially be used as black boxes. What gets lost in there is whether we are actually using the hardware as effectively as we can,” says Siddharth Samsi, a research scientist in the LLSC. 

    To gain insight into this challenge, the LLSC has been collecting detailed data on TX-GAIA usage over the past year. More than a million user jobs later, the team has released the dataset open source to the computing community.

    Their goal is to empower computer scientists and data center operators to better understand avenues for data center optimization — an important task as processing needs continue to grow. They also see potential for leveraging AI in the data center itself, by using the data to develop models for predicting failure points, optimizing job scheduling, and improving energy efficiency. While cloud providers are actively working on optimizing their data centers, they do not often make their data or models available for the broader high-performance computing (HPC) community to leverage. The release of this dataset and associated code seeks to fill that gap.

    “Data centers are changing. We have an explosion of hardware platforms, the types of workloads are evolving, and the types of people who are using data centers is changing,” says Vijay Gadepally, a senior researcher at the LLSC. “Until now, there hasn’t been a great way to analyze the impact to data centers. We see this research and dataset as a big step toward coming up with a principled approach to understanding how these variables interact with each other and then applying AI for insights and improvements.”

    Papers describing the dataset and potential applications have been accepted to a number of venues, including the IEEE International Symposium on High-Performance Computer Architecture, the IEEE International Parallel and Distributed Processing Symposium, the Annual Conference of the North American Chapter of the Association for Computational Linguistics, the IEEE High-Performance and Embedded Computing Conference, and the International Conference for High Performance Computing, Networking, Storage and Analysis.

    Workload classification

    TX-GAIA, which ranks among the world’s TOP500 supercomputers, combines traditional computing hardware (central processing units, or CPUs) with nearly 900 graphics processing unit (GPU) accelerators. These NVIDIA GPUs are specialized for deep learning, the class of AI that has given rise to speech recognition and computer vision.

    The dataset covers CPU, GPU, and memory usage by job; scheduling logs; and physical monitoring data. Compared to similar datasets, such as those from Google and Microsoft, the LLSC dataset offers “labeled data, a variety of known AI workloads, and more detailed time series data compared with prior datasets. To our knowledge, it’s one of the most comprehensive and fine-grained datasets available,” Gadepally says. 

    Notably, the team collected time-series data at an unprecedented level of detail: 100-millisecond intervals on every GPU and 10-second intervals on every CPU, as the machines processed more than 3,000 known deep-learning jobs. One of the first goals is to use this labeled dataset to characterize the workloads that different types of deep-learning jobs place on the system. This process would extract features that reveal differences in how the hardware processes natural language models versus image classification or materials design models, for example.   

    The team has now launched the MIT Datacenter Challenge to mobilize this research. The challenge invites researchers to use AI techniques to identify with 95 percent accuracy the type of job that was run, using their labeled time-series data as ground truth.

    Such insights could enable data centers to better match a user’s job request with the hardware best suited for it, potentially conserving energy and improving system performance. Classifying workloads could also allow operators to quickly notice discrepancies resulting from hardware failures, inefficient data access patterns, or unauthorized usage.
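    As a rough illustration of the kind of baseline the challenge invites (not the LLSC's own pipeline, and using synthetic traces in place of the real dataset), one could reduce each job's GPU-utilization time series to a few summary statistics and train an off-the-shelf classifier:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)

    def summarize(trace):
        # Collapse a utilization time series into a few per-job features.
        return [trace.mean(), trace.std(), trace.max(), np.percentile(trace, 90)]

    # Synthetic stand-in for labeled GPU-utilization traces (e.g., "vision" vs. "language" jobs).
    X, y = [], []
    for label, (mean, spread) in enumerate([(0.9, 0.05), (0.6, 0.2)]):
        for _ in range(200):
            trace = np.clip(rng.normal(mean, spread, size=600), 0, 1)
            X.append(summarize(trace))
            y.append(label)

    X_train, X_test, y_train, y_test = train_test_split(np.array(X), np.array(y), random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
    print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
    ```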

    Too many choices

    Today, the LLSC offers tools that let users submit their job and select the processors they want to use, “but it’s a lot of guesswork on the part of users,” Samsi says. “Somebody might want to use the latest GPU, but maybe their computation doesn’t actually need it and they could get just as impressive results on CPUs, or lower-powered machines.”

    Professor Devesh Tiwari at Northeastern University is working with the LLSC team to develop techniques that can help users match their workloads to appropriate hardware. Tiwari explains that the emergence of different types of AI accelerators, GPUs, and CPUs has left users suffering from too many choices. Without the right tools to take advantage of this heterogeneity, they are missing out on the benefits: better performance, lower costs, and greater productivity.

    “We are fixing this very capability gap — making users more productive and helping users do science better and faster without worrying about managing heterogeneous hardware,” says Tiwari. “My PhD student, Baolin Li, is building new capabilities and tools to help HPC users leverage heterogeneity near-optimally without user intervention, using techniques grounded in Bayesian optimization and other learning-based optimization methods. But, this is just the beginning. We are looking into ways to introduce heterogeneity in our data centers in a principled approach to help our users achieve the maximum advantage of heterogeneity autonomously and cost-effectively.”

    Workload classification is the first of many problems to be posed through the Datacenter Challenge. Others include developing AI techniques to predict job failures, conserve energy, or create job scheduling approaches that improve data center cooling efficiencies.

    Energy conservation 

    To mobilize research into greener computing, the team is also planning to release an environmental dataset of TX-GAIA operations, containing rack temperature, power consumption, and other relevant data.

    According to the researchers, huge opportunities exist to improve the power efficiency of HPC systems being used for AI processing. As one example, recent work in the LLSC determined that simple hardware tuning, such as limiting the amount of power an individual GPU can draw, could reduce the energy cost of training an AI model by 20 percent, with only modest increases in computing time. “This reduction translates to approximately an entire week’s worth of household energy for a mere three-hour time increase,” Gadepally says.
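    Caps like that can be applied programmatically through NVIDIA's management library. The sketch below is a minimal example, assuming pynvml is installed, the driver exposes software power limits, and the process has administrative rights; the 80 percent figure is arbitrary, not a recommendation from the study:

    ```python
    import pynvml

    pynvml.nvmlInit()
    try:
        for i in range(pynvml.nvmlDeviceGetCount()):
            handle = pynvml.nvmlDeviceGetHandleByIndex(i)
            default_mw = pynvml.nvmlDeviceGetPowerManagementDefaultLimit(handle)  # milliwatts
            min_mw, max_mw = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(handle)
            target_mw = max(min_mw, int(0.8 * default_mw))  # cap at roughly 80% of the default limit
            pynvml.nvmlDeviceSetPowerManagementLimit(handle, target_mw)  # requires admin rights
            print(f"GPU {i}: capped at {target_mw / 1000:.0f} W (default {default_mw / 1000:.0f} W)")
    finally:
        pynvml.nvmlShutdown()
    ```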

    They have also been developing techniques to predict model accuracy, so that users can quickly terminate experiments that are unlikely to yield meaningful results, saving energy. The Datacenter Challenge will share relevant data to enable researchers to explore other opportunities to conserve energy.

    The team expects that lessons learned from this research can be applied to the thousands of data centers operated by the U.S. Department of Defense. The U.S. Air Force is a sponsor of this work, which is being conducted under the USAF-MIT AI Accelerator.

    Other collaborators include researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). Professor Charles Leiserson’s Supertech Research Group is investigating performance-enhancing techniques for parallel computing, and research scientist Neil Thompson is designing studies on ways to nudge data center users toward climate-friendly behavior.

    Samsi presented this work at the inaugural AI for Datacenter Optimization (ADOPT’22) workshop last spring as part of the IEEE International Parallel and Distributed Processing Symposium. The workshop officially introduced their Datacenter Challenge to the HPC community.

    “We hope this research will allow us and others who run supercomputing centers to be more responsive to user needs while also reducing the energy consumption at the center level,” Samsi says.

  • Building better batteries, faster

    To help combat climate change, many car manufacturers are racing to add more electric vehicles (EVs) to their lineups. But to convince prospective buyers, manufacturers need to improve how far these cars can go on a single charge. One of their main challenges? Figuring out how to make extremely powerful but lightweight batteries.

    Typically, however, it takes decades for scientists to thoroughly test new battery materials, says Pablo Leon, an MIT graduate student in materials science. To accelerate this process, Leon is developing a machine-learning tool for scientists to automate one of the most time-consuming, yet key, steps in evaluating battery materials.

    With his tool in hand, Leon plans to help search for new materials to enable the development of powerful and lightweight batteries. Such batteries would not only improve the range of EVs, but they could also unlock potential in other high-power systems, such as solar energy systems that continuously deliver power, even at night.

    From a young age, Leon knew he wanted to pursue a PhD, hoping to one day become a professor of engineering, like his father. Leon grew up in College Station, Texas, home to Texas A&M University, where his father worked, and many of his friends also had parents who were professors or otherwise affiliated with the university. Meanwhile, his mom worked outside the university, as a family counselor in a neighboring city.

    In college, Leon followed in his father’s and older brother’s footsteps to become a mechanical engineer, earning his bachelor’s degree at Texas A&M. There, he learned how to model the behaviors of mechanical systems, such as a metal spring’s stiffness. But he wanted to delve deeper, down to the level of atoms, to understand exactly where these behaviors come from.

    So, when Leon applied to graduate school at MIT, he switched fields to materials science, hoping to satisfy his curiosity. But the transition to a different field was “a really hard process,” Leon says, as he rushed to catch up to his peers.

    To help with the transition, Leon sought out a congenial research advisor and found one in Rafael Gómez-Bombarelli, an assistant professor in the Department of Materials Science and Engineering (DMSE). “Because he’s from Spain and my parents are Peruvian, there’s a cultural ease with the way we talk,” Leon says. According to Gómez-Bombarelli, sometimes the two of them even discuss research in Spanish — a “rare treat.” That connection has empowered Leon to freely brainstorm ideas or talk through concerns with his advisor, enabling him to make significant progress in his research.

    Leveraging machine learning to research battery materials

    Scientists investigating new battery materials generally use computer simulations to understand how different combinations of materials perform. These simulations act as virtual microscopes for batteries, zooming in to see how materials interact at an atomic level. With these details, scientists can understand why certain combinations do better, guiding their search for high-performing materials.

    But building accurate computer simulations is extremely time-intensive, taking years and sometimes even decades. “You need to know how every atom interacts with every other atom in your system,” Leon says. To create a computer model of these interactions, scientists first make a rough guess at a model using complex quantum mechanics calculations. They then compare the model with results from real-life experiments, manually tweaking different parts of the model, including the distances between atoms and the strength of chemical bonds, until the simulation matches real life.
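    A toy version of that tuning loop, purely illustrative and not Leon's tool, fits the two parameters of a simple Lennard-Jones pair potential to a set of reference energies (generated synthetically here in place of quantum mechanics calculations):

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def lj_energy(r, epsilon, sigma):
        # Lennard-Jones pair potential: a simple stand-in for a real interatomic model.
        return 4 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)

    # "Reference" energies standing in for quantum mechanics calculations, with a little noise.
    rng = np.random.default_rng(2)
    r = np.linspace(0.95, 2.5, 40)
    e_ref = lj_energy(r, epsilon=0.010, sigma=1.20) + rng.normal(0, 5e-4, r.size)

    # Start from a rough guess and let the optimizer do the "hand-tuning".
    def residuals(params):
        epsilon, sigma = params
        return lj_energy(r, epsilon, sigma) - e_ref

    fit = least_squares(residuals, x0=[0.02, 1.0])
    print("fitted epsilon, sigma:", fit.x)
    ```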

    With well-studied battery materials, the simulation process is somewhat easier. Scientists can buy simulation software that includes pre-made models, Leon says, but these models often have errors and still require additional tweaking.

    To build accurate computer models more quickly, Leon is developing a machine-learning-based tool that can efficiently guide the trial-and-error process. “The hope with our machine learning framework is to not have to rely on proprietary models or do any hand-tuning,” he says. Leon has verified that for well-studied materials, his tool is as accurate as the manual method for building models.

    With this system, scientists will have a single, standardized approach for building accurate models in lieu of the patchwork of approaches currently in place, Leon says.

    Leon’s tool comes at an opportune time, when many scientists are investigating a new paradigm of batteries: solid-state batteries. Compared to traditional batteries, which contain liquid electrolytes, solid-state batteries are safer, lighter, and easier to manufacture. But creating versions of these batteries that are powerful enough for EVs or renewable energy storage is challenging.

    This is largely because in battery chemistry, ions dislike flowing through solids and instead prefer liquids, in which atoms are spaced further apart. Still, scientists believe that with the right combination of materials, solid-state batteries can provide enough electricity for high-power systems, such as EVs. 

    Leon plans to use his machine-learning tool to help look for good solid-state battery materials more quickly. After he finds some powerful candidates in simulations, he’ll work with other scientists to test out the new materials in real-world experiments.

    Helping students navigate graduate school

    To get to where he is today, doing exciting and impactful research, Leon credits his community of family and mentors. Because of his upbringing, Leon knew early on which steps he would need to take to get into graduate school and work toward becoming a professor. And he appreciates the privilege of his position, even more so as a Peruvian American, given that many Latino students are less likely to have access to the same resources. “I understand the academic pipeline in a way that I think a lot of minority groups in academia don’t,” he says.

    Now, Leon is helping prospective graduate students from underrepresented backgrounds navigate the pipeline through the DMSE Application Assistance Program. Each fall, he mentors applicants for the DMSE PhD program at MIT, providing feedback on their applications and resumes. The assistance program is student-run and separate from the admissions process.

    Knowing firsthand how invaluable mentorship is from his relationship with his advisor, Leon is also heavily involved in mentoring junior PhD students in his department. This past year, he served as the academic chair on his department’s graduate student organization, the Graduate Materials Council. With MIT still experiencing disruptions from Covid-19, Leon noticed a problem with student cohesiveness. “I realized that traditional [informal] modes of communication across [incoming class] years had been cut off,” he says, making it harder for junior students to get advice from their senior peers. “They didn’t have any community to fall back on.”

    To help fix this problem, Leon served as a go-to mentor for many junior students. He helped second-year PhD students prepare for their doctoral qualification exam, an often-stressful rite of passage. He also hosted seminars to teach first-year students how to make the most of their courses and to help them acclimate to the department’s fast pace. For fun, Leon organized an axe-throwing event to further foster student camaraderie.

    Leon’s efforts were met with success. Now, “newer students are building back the community,” he says, “so I feel like I can take a step back” from being academic chair. He will instead continue mentoring junior students through other programs within the department. He also plans to extend his community-building efforts among faculty and students, facilitating opportunities for students to find good mentors and work on impactful research. With these efforts, Leon hopes to help others along the academic pipeline that he’s become familiar with, journeying together over their PhDs.

  • Solving a longstanding conundrum in heat transfer

    It is a problem that has bedeviled scientists for a century. But, buoyed by a $625,000 Distinguished Early Career Award from the U.S. Department of Energy (DoE), Matteo Bucci, an associate professor in the Department of Nuclear Science and Engineering (NSE), hopes to be close to an answer.

    Tackling the boiling crisis

    Whether you’re heating a pot of water for pasta or designing a nuclear reactor, one phenomenon — boiling — is vital to the efficiency of both processes.

    “Boiling is a very effective heat transfer mechanism; it’s the way to remove large amounts of heat from the surface, which is why it is used in many high-power density applications,” Bucci says. An example use case: nuclear reactors.

    To the layperson, boiling appears simple — bubbles form and burst, removing heat. But what if so many bubbles form and coalesce that they create a blanket of vapor that blocks further heat transfer? This well-known problem, called the boiling crisis, would lead to runaway heat and the failure of fuel rods in nuclear reactors. So “understanding and determining under which conditions the boiling crisis is likely to happen is critical to designing more efficient and cost-competitive nuclear reactors,” Bucci says.

    Early work on the boiling crisis dates back nearly a century, to 1926. And while much work has been done, “it is clear that we haven’t found an answer,” Bucci says. The boiling crisis remains a challenge because, while models abound, measuring the related phenomena to prove or disprove those models has been difficult. “[Boiling] is a process that happens on a very, very small length scale and over very, very short times,” Bucci says. “We are not able to observe it at the level of detail necessary to understand what really happens and validate hypotheses.”

    But over the past few years, Bucci and his team have been developing diagnostics that can measure boiling-related phenomena and thereby provide much-needed answers to a classic problem. The diagnostics are anchored in infrared thermometry and a technique that uses visible light. “By combining these two techniques, I think we’re going to be ready to answer standing questions related to heat transfer; we can make our way out of the rabbit hole,” Bucci says. The award from the U.S. DoE for Nuclear Energy Projects will aid this and Bucci’s other research efforts.

    An idyllic Italian childhood

    Tackling difficult problems is not new territory for Bucci, who grew up in the small town of Città di Castello near Florence, Italy. Bucci’s mother was an elementary school teacher. His father used to have a machine shop, which helped develop Bucci’s scientific bent. “I liked LEGOs a lot when I was a kid. It was a passion,” he adds.

    Despite Italy going through a severe pullback from nuclear engineering during his formative years, the subject fascinated Bucci. Job opportunities in the field were uncertain but Bucci decided to dig in. “If I have to do something for the rest of my life, it might as well be something I like,” he jokes. Bucci attended the University of Pisa for undergraduate and graduate studies in nuclear engineering.

    His interest in heat transfer mechanisms took root during his doctoral studies, which he pursued in Paris at the French Alternative Energies and Atomic Energy Commission (CEA). It was there that a colleague suggested he work on the boiling crisis. Bucci set his sights on NSE at MIT and reached out to Professor Jacopo Buongiorno to inquire about research opportunities at the institution. Bucci had to fundraise at CEA to conduct research at MIT. He arrived just a couple of days before the Boston Marathon bombing in 2013, with a round-trip ticket. But he has stayed ever since, moving on to become a research scientist and then an associate professor at NSE.

    Bucci admits he struggled to adapt to the environment when he first arrived at MIT, but work and friendships with colleagues — he counts NSE’s Guanyu Su and Reza Azizian as among his best friends — helped conquer early worries.

    The integration of artificial intelligence

    In addition to diagnostics for boiling, Bucci and his team are working on ways of integrating artificial intelligence and experimental research. He is convinced that “the integration of advanced diagnostics, machine learning, and advanced modeling tools will blossom in a decade.”

    Bucci’s team is developing an autonomous laboratory for boiling heat transfer experiments. Running on machine learning, the setup decides which experiments to run based on a learning objective the team assigns. “We formulate a question and the machine will answer by optimizing the kinds of experiments that are necessary to answer those questions,” Bucci says. “I honestly think this is the next frontier for boiling,” he adds.
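    In spirit, such a loop can be built around a surrogate model that proposes whichever experiment it is least certain about. The sketch below is a generic active-learning example with a made-up measurement function, not the group's actual autonomous laboratory:

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    def run_experiment(heat_flux):
        # Placeholder for a real boiling measurement, e.g., a wall-temperature reading.
        return np.sin(3 * heat_flux) + 0.05 * np.random.default_rng(int(heat_flux * 1e3)).normal()

    candidates = np.linspace(0.0, 2.0, 200).reshape(-1, 1)  # candidate heat-flux settings
    tried_x, tried_y = [[0.1], [1.9]], [run_experiment(0.1), run_experiment(1.9)]

    for _ in range(8):
        gp = GaussianProcessRegressor(normalize_y=True).fit(tried_x, tried_y)
        _, std = gp.predict(candidates, return_std=True)
        next_x = candidates[np.argmax(std)]          # most uncertain condition
        tried_x.append(list(next_x))
        tried_y.append(run_experiment(float(next_x[0])))
        print(f"next experiment at heat flux {float(next_x[0]):.2f}")
    ```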

    “It’s when you climb a tree and you reach the top, that you realize that the horizon is much more vast and also more beautiful,” Bucci says of his zeal to pursue more research in the field.

    Even as he seeks new heights, Bucci has not forgotten his origins. Commemorating Italy’s hosting of the World Cup in 1990, a series of posters showcasing a soccer field fitted into the Roman Colosseum occupies pride of place in his home and office. Created by Alberto Burri, the posters are of sentimental value: The (now deceased) Italian artist also hailed from Bucci’s hometown — Città di Castello.

  • New hardware offers faster computation for artificial intelligence, with much less energy

    As scientists push the boundaries of machine learning, the amount of time, energy, and money required to train increasingly complex neural network models is skyrocketing. A new area of artificial intelligence called analog deep learning promises faster computation with a fraction of the energy usage.

    Programmable resistors are the key building blocks in analog deep learning, just like transistors are the core elements for digital processors. By repeating arrays of programmable resistors in complex layers, researchers can create a network of analog artificial “neurons” and “synapses” that execute computations just like a digital neural network. This network can then be trained to achieve complex AI tasks like image recognition and natural language processing.

    A multidisciplinary team of MIT researchers set out to push the speed limits of a type of human-made analog synapse that they had previously developed. They utilized a practical inorganic material in the fabrication process that enables their devices to run 1 million times faster than previous versions, which is also about 1 million times faster than the synapses in the human brain.

    Moreover, this inorganic material also makes the resistor extremely energy-efficient. Unlike materials used in the earlier version of their device, the new material is compatible with silicon fabrication techniques. This change has enabled fabricating devices at the nanometer scale and could pave the way for integration into commercial computing hardware for deep-learning applications.

    “With that key insight, and the very powerful nanofabrication techniques we have at MIT.nano, we have been able to put these pieces together and demonstrate that these devices are intrinsically very fast and operate with reasonable voltages,” says senior author Jesús A. del Alamo, the Donner Professor in MIT’s Department of Electrical Engineering and Computer Science (EECS). “This work has really put these devices at a point where they now look really promising for future applications.”

    “The working mechanism of the device is electrochemical insertion of the smallest ion, the proton, into an insulating oxide to modulate its electronic conductivity. Because we are working with very thin devices, we could accelerate the motion of this ion by using a strong electric field, and push these ionic devices to the nanosecond operation regime,” explains senior author Bilge Yildiz, the Breene M. Kerr Professor in the departments of Nuclear Science and Engineering and Materials Science and Engineering.

    “The action potential in biological cells rises and falls with a timescale of milliseconds, since the voltage difference of about 0.1 volt is constrained by the stability of water,” says senior author Ju Li, the Battelle Energy Alliance Professor of Nuclear Science and Engineering and professor of materials science and engineering. “Here we apply up to 10 volts across a special solid glass film of nanoscale thickness that conducts protons, without permanently damaging it. And the stronger the field, the faster the ionic devices.”

    These programmable resistors vastly increase the speed at which a neural network is trained, while drastically reducing the cost and energy to perform that training. This could help scientists develop deep learning models much more quickly, which could then be applied in uses like self-driving cars, fraud detection, or medical image analysis.

    “Once you have an analog processor, you will no longer be training networks everyone else is working on. You will be training networks with unprecedented complexities that no one else can afford to, and therefore vastly outperform them all. In other words, this is not a faster car, this is a spacecraft,” adds lead author and MIT postdoc Murat Onen.

    Co-authors include Frances M. Ross, the Ellen Swallow Richards Professor in the Department of Materials Science and Engineering; postdocs Nicolas Emond and Baoming Wang; and Difei Zhang, an EECS graduate student. The research is published today in Science.

    Accelerating deep learning

    Analog deep learning is faster and more energy-efficient than its digital counterpart for two main reasons. First, computation is performed in memory, so enormous loads of data are not transferred back and forth from memory to a processor. Second, analog processors conduct operations in parallel. If the matrix size expands, an analog processor doesn’t need more time to complete new operations because all computation occurs simultaneously.

    The key element of MIT’s new analog processor technology is known as a protonic programmable resistor. These resistors, which are measured in nanometers (one nanometer is one billionth of a meter), are arranged in an array, like a chess board.

    In the human brain, learning happens due to the strengthening and weakening of connections between neurons, called synapses. Deep neural networks have long adopted this strategy, where the network weights are programmed through training algorithms. In the case of this new processor, increasing and decreasing the electrical conductance of protonic resistors enables analog machine learning.

    The conductance is controlled by the movement of protons. To increase the conductance, more protons are pushed into a channel in the resistor, while to decrease conductance protons are taken out. This is accomplished using an electrolyte (similar to that of a battery) that conducts protons but blocks electrons.
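    An idealized numerical picture of why such an array computes in parallel (a schematic sketch, not a model of the MIT device): the network weights are encoded as conductances, the inputs are applied as voltages, and each output current is a weighted sum given directly by Ohm's and Kirchhoff's laws.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Conductance of each programmable resistor encodes one network weight (arbitrary units).
    G = rng.uniform(0.1, 1.0, size=(4, 8))   # 4 output rows by 8 input columns
    v_in = rng.uniform(0.0, 0.5, size=8)     # input activations applied as column voltages

    # Every output current is the sum of (conductance times voltage) along its row,
    # so the whole matrix-vector product happens in one physical step, not element by element.
    i_out = G @ v_in

    # "Training" nudges conductances up or down; here, a crude outer-product update.
    target = np.ones(4)
    G += 0.01 * np.outer(target - i_out, v_in)
    print("output currents:", np.round(i_out, 3))
    ```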

    To develop a super-fast and highly energy efficient programmable protonic resistor, the researchers looked to different materials for the electrolyte. While other devices used organic compounds, Onen focused on inorganic phosphosilicate glass (PSG).

    PSG is basically silicon dioxide, which is the powdery desiccant material found in tiny bags that come in the box with new furniture to remove moisture. It is studied as a proton conductor under humidified conditions for fuel cells. It is also the most well-known oxide used in silicon processing. To make PSG, a tiny bit of phosphorus is added to the silicon to give it special characteristics for proton conduction.

    Onen hypothesized that an optimized PSG could have a high proton conductivity at room temperature without the need for water, which would make it an ideal solid electrolyte for this application. He was right.

    Surprising speed

    PSG enables ultrafast proton movement because it contains a multitude of nanometer-sized pores whose surfaces provide paths for proton diffusion. It can also withstand very strong, pulsed electric fields. This is critical, Onen explains, because applying more voltage to the device enables protons to move at blinding speeds.

    “The speed certainly was surprising. Normally, we would not apply such extreme fields across devices, in order to not turn them into ash. But instead, protons ended up shuttling at immense speeds across the device stack, specifically a million times faster compared to what we had before. And this movement doesn’t damage anything, thanks to the small size and low mass of protons. It is almost like teleporting,” he says.

    “The nanosecond timescale means we are close to the ballistic or even quantum tunneling regime for the proton, under such an extreme field,” adds Li.

    Because the protons don’t damage the material, the resistor can run for millions of cycles without breaking down. This new electrolyte enabled a programmable protonic resistor that is a million times faster than their previous device and can operate effectively at room temperature, which is important for incorporating it into computing hardware.

    Thanks to the insulating properties of PSG, almost no electric current passes through the material as protons move. This makes the device extremely energy efficient, Onen adds.

    Now that they have demonstrated the effectiveness of these programmable resistors, the researchers plan to reengineer them for high-volume manufacturing, says del Alamo. Then they can study the properties of resistor arrays and scale them up so they can be embedded into systems.

    At the same time, they plan to study the materials to remove bottlenecks that limit the voltage that is required to efficiently transfer the protons to, through, and from the electrolyte.

    “Another exciting direction that these ionic devices can enable is energy-efficient hardware to emulate the neural circuits and synaptic plasticity rules that are deduced in neuroscience, beyond analog deep neural networks. We have already started such a collaboration with neuroscience, supported by the MIT Quest for Intelligence,” adds Yildiz.

    “The collaboration that we have is going to be essential to innovate in the future. The path forward is still going to be very challenging, but at the same time it is very exciting,” del Alamo says.

    “Intercalation reactions such as those found in lithium-ion batteries have been explored extensively for memory devices. This work demonstrates that proton-based memory devices deliver impressive and surprising switching speed and endurance,” says William Chueh, associate professor of materials science and engineering at Stanford University, who was not involved with this research. “It lays the foundation for a new class of memory devices for powering deep learning algorithms.”

    “This work demonstrates a significant breakthrough in biologically inspired resistive-memory devices. These all-solid-state protonic devices are based on exquisite atomic-scale control of protons, similar to biological synapses but at orders of magnitude faster rates,” says Elizabeth Dickey, the Teddy & Wilton Hawkins Distinguished Professor and head of the Department of Materials Science and Engineering at Carnegie Mellon University, who was not involved with this work. “I commend the interdisciplinary MIT team for this exciting development, which will enable future-generation computational devices.”

    This research is funded, in part, by the MIT-IBM Watson AI Lab.

  • Four researchers with MIT ties earn Schmidt Science Fellowships

    Four researchers with MIT ties — Juncal Arbelaiz, Xiangkun (Elvis) Cao, Sandya Subramanian, and Hannah Zlotnick ’17 — have been honored with competitive Schmidt Science Fellowships.

    Created in 2017, the fellows program aims to bring together the world’s brightest minds “to solve society’s toughest challenges.”

    The four MIT-affiliated researchers are among 29 Schmidt Science Fellows from around the world who will receive postdoctoral support for either one or two years with an annual stipend of $100,000, along with individualized mentoring and participation in the program’s Global Meeting Series. The fellows will also have opportunities to engage with thought-leaders from science, business, policy, and society. According to the award announcement, the fellows are expected to pursue research that shifts from the focus of their PhDs, to help expand and enhance their futures as scientific leaders.

    Juncal Arbelaiz is a PhD candidate in applied mathematics at MIT, who is completing her doctorate this summer. Her doctoral research at MIT is advised by Ali Jadbabaie, the JR East Professor of Engineering and head of the Department of Civil and Environmental Engineering; Anette Hosoi, the Neil and Jane Pappalardo Professor of Mechanical Engineering and associate dean of the School of Engineering; and Bassam Bamieh, professor of mechanical engineering and associate director of the Center for Control, Dynamical Systems, and Computation at the University of California at Santa Barbara. Arbelaiz’s research revolves around the design of optimal decentralized intelligence for spatially-distributed dynamical systems.

    “I cannot think of a better way to start my independent scientific career. I feel very excited and grateful for this opportunity,” says Arbelaiz. With her fellowship, she will enlist systems biology to explore how the nervous system encodes and processes sensory information to address future safety-critical artificial intelligence applications. “The Schmidt Science Fellowship will provide me with a unique opportunity to work at the intersection of biological and machine intelligence for two years and will be a steppingstone towards my longer-term objective of becoming a researcher in bio-inspired machine intelligence,” she says.

    Xiangkun (Elvis) Cao is currently a postdoc in the lab of T. Alan Hatton, the Ralph Landau Professor in Chemical Engineering, and an Impact Fellow at the MIT Climate and Sustainability Consortium. Cao received his PhD in mechanical engineering from Cornell University in 2021, during which he focused on microscopic precision in the simultaneous delivery of light and fluids by optofluidics, with advances relevant to health and sustainability applications. As a Schmidt Science Fellow, he plans to be co-advised by Hatton on carbon capture, and Ted Sargent, professor of chemistry at Northwestern University, on carbon utilization. Cao is passionate about integrated carbon capture and utilization (CCU) from molecular to process levels, machine learning to inspire smart CCU, and the nexus of technology, business, and policy for CCU.

    “The Schmidt Science Fellowship provides the perfect opportunity for me to work across disciplines to study integrated carbon capture and utilization from molecular to process levels,” Cao explains. “My vision is that by integrating carbon capture and utilization, we can concurrently make scientific discoveries and unlock economic opportunities while mitigating global climate change. This way, we can turn our carbon liability into an asset.”

    Sandya Subramanian, a 2021 PhD graduate of the Harvard-MIT Program in Health Sciences and Technology (HST) in the area of medical engineering and medical physics, is currently a postdoc at Stanford Data Science. She is focused on the topics of biomedical engineering, statistics, machine learning, neuroscience, and health care. Her research is on developing new technologies and methods to study the interactions between the brain, the autonomic nervous system, and the gut. “I’m extremely honored to receive the Schmidt Science Fellowship and to join the Schmidt community of leaders and scholars,” says Subramanian. “I’ve heard so much about the fellowship and the fact that it can open doors and give people confidence to pursue challenging or unique paths.”

    According to Subramanian, the autonomic nervous system and its interactions with other body systems are poorly understood but thought to be involved in several disorders, such as functional gastrointestinal disorders, Parkinson’s disease, diabetes, migraines, and eating disorders. The goal of her research is to improve our ability to monitor and quantify these physiologic processes. “I’m really interested in understanding how we can use physiological monitoring technologies to inform clinical decision-making, especially around the autonomic nervous system, and I look forward to continuing the work that I’ve recently started at Stanford as Schmidt Science Fellow,” she says. “A huge thank you to all of the mentors, colleagues, friends, and leaders I had the pleasure of meeting and working with at HST and MIT; I couldn’t have done this without everything I learned there.”

    Hannah Zlotnick ’17 attended MIT for her undergraduate studies, majoring in biological engineering with a minor in mechanical engineering. At MIT, Zlotnick was a student-athlete on the women’s varsity soccer team, a UROP student in Alan Grodzinsky’s laboratory, and a member of Pi Beta Phi. For her PhD, Zlotnick attended the University of Pennsylvania, and worked in Robert Mauck’s laboratory within the departments of Bioengineering and Orthopaedic Surgery.

    Zlotnick’s PhD research focused on harnessing remote forces, such as magnetism or gravity, to enhance engineered cartilage and osteochondral repair both in vitro and in large animal models. Zlotnick now plans to pivot to the field of biofabrication to create tissue models of the knee joint to assess potential therapeutics for osteoarthritis. “I am humbled to be a part of the Schmidt Science Fellows community, and excited to venture into the field of biofabrication,” Zlotnick says. “Hopefully this work uncovers new therapies for patients with inflammatory joint diseases.”

  • MIT J-WAFS announces 2022 seed grant recipients

    The Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) at MIT has awarded 2022 J-WAFS seed grants to eight MIT principal investigators. The grants support innovative MIT research that has the potential to have significant impact on water- and food-related challenges.

    The only program at MIT that is dedicated to water- and food-related research, J-WAFS has offered seed grant funding to MIT principal investigators and their teams for the past eight years. The grants provide up to $75,000 per year, overhead-free, for two years to support new, early-stage research in areas such as water and food security, safety, supply, and sustainability. Past projects have spanned many diverse disciplines, including engineering, science, technology, and business innovation, as well as social science and economics, architecture, and urban planning. 

    Seven new projects led by eight researchers will be supported this year. With funding going to four different MIT departments, the projects address a range of challenges by employing advanced materials, technology innovations, and new approaches to resource management. The new projects aim to remove harmful chemicals from water sources, develop drought monitoring systems for farmers, improve management of the shellfish industry, optimize water purification materials, and more.

    “Climate change, the pandemic, and most recently the war in Ukraine have exacerbated and put a spotlight on the serious challenges facing global water and food systems,” says J-WAFS director John H. Lienhard. He adds, “The proposals chosen this year have the potential to create measurable, real-world impacts in both the water and food sectors.”  

    The 2022 J-WAFS seed grant researchers and their projects are:

    Gang Chen, the Carl Richard Soderberg Professor of Power Engineering in MIT’s Department of Mechanical Engineering, is using sunlight to desalinate water. The use of solar energy for desalination is not a new idea, particularly solar thermal evaporation methods. However, the solar thermal evaporation process has an overall low efficiency because it relies on breaking hydrogen bonds among individual water molecules, which is very energy-intensive. Chen and his lab recently discovered a photomolecular effect that dramatically lowers the energy required for desalination. 

    The bonds among water molecules inside a water cluster in liquid water are mostly hydrogen bonds. Chen discovered that a photon with energy larger than the bonding energy between the water cluster and the remaining liquid water can cleave off the cluster at the water-air interface; the cluster then collides with air molecules and disintegrates into 60 or even more individual water molecules. This effect has the potential to significantly boost clean water production via new desalination technology that produces a photomolecular evaporation rate at least tenfold higher than pure solar thermal evaporation.
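    As a rough back-of-the-envelope comparison using textbook values (illustrative numbers, not figures from Chen's study), a visible photon carries far more energy than a single hydrogen bond in water:

    ```latex
    % Photon energy at a visible wavelength versus a typical hydrogen-bond energy (order-of-magnitude values)
    E_{\text{photon}} = \frac{hc}{\lambda} \approx \frac{1240\ \text{eV}\cdot\text{nm}}{500\ \text{nm}} \approx 2.5\ \text{eV},
    \qquad
    E_{\text{H-bond}} \approx 0.1\text{--}0.3\ \text{eV}
    ```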

    John E. Fernández is the director of the MIT Environmental Solutions Initiative (ESI) and a professor in the Department of Architecture, with an additional affiliation in the Department of Urban Studies and Planning. Fernández is working with Scott D. Odell, a postdoc in the ESI, to better understand the impacts of mining and climate change in water-stressed regions of Chile.

    Chile is one of the world’s largest exporters of both agricultural and mineral products; however, little research has been done on climate change effects at the intersection of these two sectors. Fernández and Odell will explore how desalination is being deployed by the mining industry to relieve pressure on continental water supplies in Chile, and with what effect. They will also research how climate change and mining intersect to affect Andean glaciers and the agricultural communities that depend upon them. The researchers intend for this work to inform policies to reduce social and environmental harms from mining, desalination, and climate change.

    Ariel L. Furst is the Raymond (1921) and Helen St. Laurent Career Development Professor of Chemical Engineering at MIT. Her 2022 J-WAFS seed grant project seeks to effectively remove dangerous and long-lasting chemicals from water supplies and other environmental areas. 

    Perfluorooctanoic acid (PFOA), a chemical long used in the manufacture of Teflon, is a member of a group of chemicals known as per- and polyfluoroalkyl substances (PFAS). These human-made chemicals have been extensively used in consumer products like nonstick cooking pans. Exceptionally high levels of PFOA have been measured in water sources near manufacturing sites, which is problematic as these chemicals do not readily degrade in our bodies or the environment. The majority of humans have detectable levels of PFAS in their blood, which can lead to significant health issues including cancer, liver damage, and thyroid effects, as well as developmental effects in infants. Current remediation methods are limited to inefficient capture and are mostly confined to laboratory settings. Furst’s proposed method uses low-energy, scaffolded enzyme materials to move beyond simple capture and degrade these hazardous pollutants.

    Heather J. Kulik is an associate professor in the Department of Chemical Engineering at MIT who is developing novel computational strategies to identify optimal materials for purifying water. Water treatment requires selectively separating small ions from water. However, human-made, scalable materials for water purification and desalination are often unstable under typical operating conditions and lack the precisely sized pores needed for good separation.

    Metal-organic frameworks (MOFs) are promising materials for water purification because their pores can be tailored to have precise shapes and chemical makeup for selective ion affinity. Yet few MOFs have been assessed for the properties relevant to water purification. Kulik plans to use virtual high-throughput screening, combining machine learning models with molecular simulation, to speed the discovery of suitable MOFs. Specifically, Kulik will be looking for MOFs with ultra-stable structures that do not break down in water or at elevated temperatures.
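
    The screening funnel described here can be pictured with a minimal sketch like the one below (hypothetical descriptors, synthetic data, and an arbitrary cutoff, not Kulik’s actual workflow): a fast machine-learning surrogate, trained on a small set of already-simulated frameworks, scores a large candidate pool so that only the most promising MOFs go on to expensive molecular simulation.

        # Minimal virtual high-throughput screening sketch (synthetic data).
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)

        # Pretend descriptors (e.g., metal identity, linker length, pore size)
        # for 200 MOFs whose stability has already been simulated.
        X_train = rng.random((200, 3))
        y_train = X_train @ np.array([0.5, -0.3, 0.8]) + 0.05 * rng.standard_normal(200)

        surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
        surrogate.fit(X_train, y_train)

        # Cheaply score a much larger pool of hypothetical candidates.
        X_pool = rng.random((100_000, 3))
        predicted_stability = surrogate.predict(X_pool)

        # Promote only the top 1 percent to full molecular simulation.
        threshold = np.quantile(predicted_stability, 0.99)
        shortlist = np.where(predicted_stability >= threshold)[0]
        print(f"{len(shortlist)} of {len(X_pool)} candidates promoted to simulation")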

    Gregory C. Rutledge is the Lammot du Pont Professor of Chemical Engineering at MIT. He is leading a project exploring how to better separate oils from water, an important problem given that oil-contaminated water generated by industry is a major source of environmental pollution.

    Emulsified oils are particularly challenging to remove from water due to their small droplet sizes and long settling times. Microfiltration is an attractive technology for removing emulsified oils, but its major drawback is fouling, or the accumulation of unwanted material on solid surfaces. Rutledge will examine the mechanism of separation behind liquid-infused membranes (LIMs), in which an infused liquid coats the surface and pores of the membrane, preventing fouling. He will also evaluate the robustness of LIM technology for removing different types of emulsified oils and oil mixtures.

    César Terrer is an assistant professor in the Department of Civil and Environmental Engineering whose J-WAFS project seeks to answer the question: How can satellite images be used to provide a high-resolution drought monitoring system for farmers?

    Drought is recognized as one of the world’s most pressing issues, with direct impacts on vegetation that threaten water resources and food production globally. However, assessing and monitoring the impact of droughts on vegetation is extremely challenging as plants’ sensitivity to lack of water varies across species and ecosystems. Terrer will leverage a new generation of remote sensing satellites to provide high-resolution assessments of plant water stress at regional to global scales. The aim is to provide a plant drought monitoring product with farmland-specific services for water and socioeconomic management.

    Michael Triantafyllou is the Henry L. and Grace Doherty Professor in Ocean Science and Engineering in the Department of Mechanical Engineering. He is developing a web-based system for natural resources management that will use geospatial analysis, visualization, and reporting to better manage aquaculture data. By providing value to commercial fisheries’ permit holders, who employ significant numbers of people, and to recreational shellfish permit holders, who contribute to local economies, the project has attracted support from the Massachusetts Division of Marine Fisheries as well as a number of local resource management departments.

    Massachusetts shellfish fisheries generated roughly $339 million in 2020, accounting for 17 percent of U.S. East Coast production. Managing such a large industry is a time-consuming process, given that there are thousands of acres of coastal areas grouped within over 800 classified shellfish growing areas. Extreme climate events present additional challenges. Triantafyllou’s research will help efforts to enforce environmental regulations, support habitat restoration, and prevent shellfish-related food safety issues.

  • in

    Engineers use artificial intelligence to capture the complexity of breaking waves

    Waves break once they swell to a critical height, before cresting and crashing into a spray of droplets and bubbles. These waves can be as large as a surfer’s point break and as small as a gentle ripple rolling to shore. For decades, the dynamics of how and when a wave breaks have been too complex to predict.

    Now, MIT engineers have found a new way to model how waves break. The team used machine learning along with data from wave-tank experiments to tweak equations that have traditionally been used to predict wave behavior. Engineers typically rely on such equations to help them design resilient offshore platforms and structures. But until now, the equations have not been able to capture the complexity of breaking waves.

    The updated model made more accurate predictions of how and when waves break, the researchers found. For instance, the model estimated a wave’s steepness just before breaking, and its energy and frequency after breaking, more accurately than the conventional wave equations.

    Their results, published today in the journal Nature Communications, will help scientists understand how a breaking wave affects the water around it. Knowing precisely how these waves interact can help hone the design of offshore structures. It can also improve predictions for how the ocean interacts with the atmosphere. Having better estimates of how waves break can help scientists predict, for instance, how much carbon dioxide and other atmospheric gases the ocean can absorb.

    “Wave breaking is what puts air into the ocean,” says study author Themis Sapsis, an associate professor of mechanical and ocean engineering and an affiliate of the Institute for Data, Systems, and Society at MIT. “It may sound like a detail, but if you multiply its effect over the area of the entire ocean, wave breaking starts becoming fundamentally important to climate prediction.”

    The study’s co-authors include lead author and MIT postdoc Debbie Eeltink, Hubert Branger and Christopher Luneau of Aix-Marseille University, Amin Chabchoub of Kyoto University, Jerome Kasparian of the University of Geneva, and T.S. van den Bremer of Delft University of Technology.

    Learning tank

    To predict the dynamics of a breaking wave, scientists typically take one of two approaches: They either attempt to precisely simulate the wave at the scale of individual molecules of water and air, or they run experiments to try and characterize waves with actual measurements. The first approach is computationally expensive and difficult even over a small area; the second requires a huge amount of time to run enough experiments to yield statistically significant results.

    The MIT team instead borrowed pieces from both approaches to develop a more efficient and accurate model using machine learning. The researchers started with a set of equations considered the standard description of wave behavior, then aimed to improve the model by “training” it on data of breaking waves from actual experiments.

    “We had a simple model that doesn’t capture wave breaking, and then we had the truth, meaning experiments that involve wave breaking,” Eeltink explains. “Then we wanted to use machine learning to learn the difference between the two.”

    The researchers obtained wave breaking data by running experiments in a 40-meter-long tank. The tank was fitted at one end with a paddle which the team used to initiate each wave. The team set the paddle to produce a breaking wave in the middle of the tank. Gauges along the length of the tank measured the water’s height as waves propagated down the tank.

    “It takes a lot of time to run these experiments,” Eeltink says. “Between each experiment you have to wait for the water to completely calm down before you launch the next experiment, otherwise they influence each other.”

    Safe harbor

    In all, the team ran about 250 experiments, whose data they used to train a type of machine-learning algorithm known as a neural network. Specifically, the algorithm is trained to compare the real waves from the experiments with the waves predicted by the simple model and, based on any differences between the two, tune the model to fit reality.
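
    In schematic form, this kind of setup can be sketched as below (synthetic data and a generic scikit-learn network, not the team’s code): a crude “physics” model predicts a wave property, a neural network is trained on the gap between that prediction and the measurements, and the corrected model is simply the sum of the two.

        # Sketch of learning a data-driven correction to a simple physics model.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(1)

        # Synthetic "experiments": input is initial wave steepness, target is a
        # post-breaking property the simple model systematically gets wrong.
        steepness = rng.uniform(0.1, 0.4, size=(250, 1))        # ~250 tank runs
        truth = 0.8 * steepness[:, 0] - 0.5 * steepness[:, 0] ** 2

        def simple_model(s):
            """Stand-in for the standard, non-breaking wave equations."""
            return 0.8 * s[:, 0]    # ignores energy lost to breaking

        residual = truth - simple_model(steepness)   # what the physics model misses

        correction = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
        correction.fit(steepness, residual)

        def corrected_model(s):
            return simple_model(s) + correction.predict(s)

        # Evaluate on unseen conditions, mimicking the independent tank tests.
        s_test = np.linspace(0.1, 0.4, 5).reshape(-1, 1)
        print(corrected_model(s_test))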

    After training the algorithm on their experimental data, the team introduced the model to entirely new data — in this case, measurements from two independent experiments, each run at separate wave tanks with different dimensions. In these tests, they found the updated model made more accurate predictions than the simple, untrained model, for instance making better estimates of a breaking wave’s steepness.

    The new model also captured an essential property of breaking waves known as the “downshift,” in which the frequency of a wave is shifted to a lower value. The speed of a wave depends on its frequency. For ocean waves, lower frequencies move faster than higher frequencies. Therefore, after the downshift, the wave will move faster. The new model predicts the change in frequency, before and after each breaking wave, which could be especially relevant in preparing for coastal storms.
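
    The speed-up follows from the standard deep-water dispersion relation, in which the phase speed is g/(2*pi*f), so a drop in the frequency f raises the speed. The short sketch below uses hypothetical pre- and post-breaking frequencies, not values from the study.

        # Deep-water phase speed c = g / (2*pi*f): lower frequency, faster wave.
        import math

        g = 9.81  # gravitational acceleration, m/s^2

        def phase_speed(frequency_hz):
            return g / (2 * math.pi * frequency_hz)

        f_before, f_after = 0.50, 0.45   # Hz, hypothetical peak frequencies
        print(f"before breaking: {phase_speed(f_before):.2f} m/s")
        print(f"after downshift: {phase_speed(f_after):.2f} m/s")
        # Group speed, which governs when a swell arrives, is half the phase
        # speed in deep water and scales the same way.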

    “When you want to forecast when high waves of a swell would reach a harbor, and you want to leave the harbor before those waves arrive, then if you get the wave frequency wrong, then the speed at which the waves are approaching is wrong,” Eeltink says.

    The team’s updated wave model is in the form of an open-source code that others could potentially use, for instance in climate simulations of the ocean’s potential to absorb carbon dioxide and other atmospheric gases. The code can also be worked into simulated tests of offshore platforms and coastal structures.

    “The number one purpose of this model is to predict what a wave will do,” Sapsis says. “If you don’t model wave breaking right, it would have tremendous implications for how structures behave. With this, you could simulate waves to help design structures better, more efficiently, and without huge safety factors.”

    This research is supported, in part, by the Swiss National Science Foundation, and by the U.S. Office of Naval Research.