More stories

  • J-WAFS awards $150K Solutions grant to Patrick Doyle and team for rapid removal of micropollutants from water

    The Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) has awarded a 2022 J-WAFS Solutions grant to Patrick S. Doyle, the Robert T. Haslam Professor of Chemical Engineering at MIT, for his innovative system to tackle water pollution. Doyle will be working with co-principal investigator Rafael Gómez-Bombarelli, assistant professor in materials processing in the Department of Materials Science and Engineering, as well as PhD students Devashish Gokhale and Tynan Perez. Building on findings from a 2019 J-WAFS seed grant, Doyle and the research team will create cost-effective, industry-scale processes to remove micropollutants from water. Project work will commence this month.

    The J-WAFS Solutions program provides one-year, renewable commercialization grants to help move MIT technology from the laboratory to market. Grants of up to $150,000 are awarded to researchers with breakthrough technologies and inventions in water or food. Since its launch in 2015, J-WAFS Solutions grants have led to seven spinout companies and helped commercialize two products as open-source technologies. The grant program is supported by Community Jameel.

    A widespread problem 

    Micropollutants are contaminants that occur in low concentrations in the environment, yet continuous exposure and bioaccumulation make them a cause for concern. According to the U.S. Environmental Protection Agency, the plastics derivative bisphenol A (BPA), the “forever chemicals” per- and polyfluoroalkyl substances (PFAS), and heavy metals like lead are common micropollutants known to be found in more than 85 percent of rivers, ponds, and lakes in the United States. Many of these bodies of water are sources of drinking water. Over long periods of time, exposure to micropollutants through drinking water can cause physiological damage in humans, increasing the risk of cancer, developmental disorders, and reproductive failure.

    Since micropollutants occur in low concentrations, it is difficult to detect and monitor their presence, and the chemical diversity of micropollutants makes it difficult to inexpensively remove them from water. Currently, activated carbon is the industry standard for micropollutant elimination, but this method cannot efficiently remove contaminants at parts-per-billion and parts-per-trillion concentrations. There are also strong sustainability concerns associated with activated carbon production, which is energy-intensive and releases large volumes of carbon dioxide.

    A solution with societal and economic benefits

    Doyle and his team are developing a technology that uses sustainable hydrogel microparticles to remove micropollutants from water. The polymeric hydrogel microparticles use chemically anchored structures, including micelles and chelating agents, that act like sponges, absorbing organic micropollutants and heavy metal ions. The microparticles are large enough to separate from water using simple gravitational settling. The system is sustainable because the microparticles can be recycled for continuous use. In testing, the long-lasting, reusable microparticles removed contaminants more quickly than commercial activated carbon. The researchers plan to use machine learning to find optimal microparticle compositions that maximize performance on complex combinations of micropollutants in simulated and real wastewater samples.
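
    As a rough illustration of why particles of this size settle out so readily, the back-of-the-envelope Stokes' law estimate below compares settling speeds for a microscale particle and a sub-micrometer one. The particle sizes and densities are assumed values chosen for illustration, not measurements from the project.

    ```python
    # Back-of-the-envelope Stokes' law estimate of gravitational settling speed.
    # All particle parameters below are illustrative assumptions, not project data.

    def stokes_settling_velocity(radius_m, particle_density, fluid_density=1000.0,
                                 viscosity=1.0e-3, g=9.81):
        """Terminal settling velocity (m/s) of a small sphere in a viscous fluid."""
        return 2.0 * (particle_density - fluid_density) * g * radius_m**2 / (9.0 * viscosity)

    # A hydrogel microparticle a few hundred micrometers across (assumed values):
    v_micro = stokes_settling_velocity(radius_m=250e-6, particle_density=1100.0)
    # A sub-micrometer particle of the same material, for contrast:
    v_sub = stokes_settling_velocity(radius_m=0.5e-6, particle_density=1100.0)

    print(f"microparticle: {v_micro * 1000:.2f} mm/s; sub-micron particle: {v_sub * 1000:.6f} mm/s")
    ```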

    Economically, the technology is a new offering that has applications in numerous large markets where micropollutant elimination is vital, including municipal and industrial water treatment equipment, as well as household water purification systems. The J-WAFS Solutions grant will allow the team to build and test prototypes of the water treatment system, identify the best use cases and customers, and perform technoeconomic analyses and market research to formulate a preliminary business plan. With J-WAFS commercialization support, the project could eventually lead to a startup company.

    “Emerging micropollutants are a growing threat to drinking water supplies worldwide,” says J-WAFS Director John H. Lienhard, the Abdul Latif Jameel Professor of Water at MIT. “Cost-effective and scalable technologies for micropollutant removal are urgently needed. This project will develop and commercialize a promising new tool for water treatment, with the goal of improving water quality for millions of people.”

  • Turning carbon dioxide into valuable products

    Carbon dioxide (CO2) is a major contributor to climate change and a significant product of many human activities, notably industrial manufacturing. A major goal in the energy field has been to chemically convert emitted CO2 into valuable chemicals or fuels. But while CO2 is available in abundance, it has not yet been widely used to generate value-added products. Why not?

    The reason is that CO2 molecules are highly stable and therefore not prone to being chemically converted to a different form. Researchers have sought materials and device designs that could help spur that conversion, but nothing has worked well enough to yield an efficient, cost-effective system.

    Two years ago, Ariel Furst, the Raymond (1921) and Helen St. Laurent Career Development Professor of Chemical Engineering at MIT, decided to try using something different — a material that gets more attention in discussions of biology than of chemical engineering. Already, results from work in her lab suggest that her unusual approach is paying off.

    The stumbling block

    The challenge begins with the first step in the CO2 conversion process. Before being transformed into a useful product, CO2 must be chemically converted into carbon monoxide (CO). That conversion can be encouraged using electrochemistry, a process in which input voltage provides the extra energy needed to make the stable CO2 molecules react. The problem is that achieving the CO2-to-CO conversion requires large energy inputs — and even then, CO makes up only a small fraction of the products that are formed.

    To explore opportunities for improving this process, Furst and her research group focused on the electrocatalyst, a material that enhances the rate of a chemical reaction without being consumed in the process. The catalyst is key to successful operation. Inside an electrochemical device, the catalyst is often suspended in an aqueous (water-based) solution. When an electric potential (essentially a voltage) is applied to a submerged electrode, dissolved CO2 will — helped by the catalyst — be converted to CO.

    But there’s one stumbling block: The catalyst and the CO2 must meet on the surface of the electrode for the reaction to occur. In some studies, the catalyst is dispersed in the solution, but that approach requires more catalyst and isn’t very efficient, according to Furst. “You have to both wait for the diffusion of CO2 to the catalyst and for the catalyst to reach the electrode before the reaction can occur,” she explains. As a result, researchers worldwide have been exploring different methods of “immobilizing” the catalyst on the electrode.

    Connecting the catalyst and the electrode

    Before Furst could delve into that challenge, she needed to decide which of the two types of CO2 conversion catalysts to work with: the traditional solid-state catalyst or a catalyst made up of small molecules. In examining the literature, she concluded that small-molecule catalysts held the most promise. While their conversion efficiency tends to be lower than that of solid-state versions, molecular catalysts offer one important advantage: They can be tuned to emphasize reactions and products of interest.

    Two approaches are commonly used to immobilize small-molecule catalysts on an electrode. One involves linking the catalyst to the electrode by strong covalent bonds — a type of bond in which atoms share electrons; the result is a strong, essentially permanent connection. The other sets up a non-covalent attachment between the catalyst and the electrode; unlike a covalent bond, this connection can easily be broken.

    Neither approach is ideal. In the former case, the catalyst and electrode are firmly attached, ensuring efficient reactions; but when the activity of the catalyst degrades over time (which it will), the electrode can no longer be accessed. In the latter case, a degraded catalyst can be removed; but the exact placement of the small molecules of the catalyst on the electrode can’t be controlled, leading to an inconsistent, often decreasing, catalytic efficiency — and simply increasing the amount of catalyst on the electrode surface without concern for where the molecules are placed doesn’t solve the problem.

    What was needed was a way to position the small-molecule catalyst firmly and accurately on the electrode and then release it when it degrades. For that task, Furst turned to what she and her team regard as a kind of “programmable molecular Velcro”: deoxyribonucleic acid, or DNA.

    Adding DNA to the mix

    Mention DNA to most people, and they think of biological functions in living things. But the members of Furst’s lab view DNA as more than just genetic code. “DNA has these really cool physical properties as a biomaterial that people don’t often think about,” she says. “DNA can be used as a molecular Velcro that can stick things together with very high precision.”

    Furst knew that DNA sequences had previously been used to immobilize molecules on surfaces for other purposes. So she devised a plan to use DNA to direct the immobilization of catalysts for CO2 conversion.

    Her approach depends on a well-understood behavior of DNA called hybridization. The familiar DNA structure is a double helix that forms when two complementary strands connect. When the sequence of bases (the four building blocks of DNA) in the individual strands match up, hydrogen bonds form between complementary bases, firmly linking the strands together.

    Using that behavior for catalyst immobilization involves two steps. First, the researchers attach a single strand of DNA to the electrode. Then they attach a complementary strand to the catalyst that is floating in the aqueous solution. When the latter strand gets near the former, the two strands hybridize; they become linked by multiple hydrogen bonds between properly paired bases. As a result, the catalyst is firmly affixed to the electrode by means of two interlocked, self-assembled DNA strands, one connected to the electrode and the other to the catalyst.
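
    To make the hybridization step concrete, here is a toy sketch of the Watson-Crick pairing rule that underlies it: two strands zip together when one is the reverse complement of the other. The sequences below are invented for illustration and are unrelated to the strands the team actually used.

    ```python
    # Toy illustration of Watson-Crick complementarity, the rule behind DNA hybridization.
    # The sequences below are invented for illustration only.

    COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

    def reverse_complement(strand: str) -> str:
        return "".join(COMPLEMENT[base] for base in reversed(strand))

    def can_hybridize(electrode_strand: str, catalyst_strand: str) -> bool:
        """Two strands hybridize when one is the reverse complement of the other."""
        return catalyst_strand == reverse_complement(electrode_strand)

    electrode_strand = "ATGCGTTAGC"                           # anchored to the electrode
    catalyst_strand = reverse_complement(electrode_strand)    # attached to the catalyst

    print(can_hybridize(electrode_strand, catalyst_strand))   # True: the catalyst docks
    ```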

    Better still, the two strands can be detached from one another. “The connection is stable, but if we heat it up, we can remove the secondary strand that has the catalyst on it,” says Furst. “So we can de-hybridize it. That allows us to recycle our electrode surfaces — without having to disassemble the device or do any harsh chemical steps.”

    Experimental investigation

    To explore that idea, Furst and her team — postdocs Gang Fan and Thomas Gill, former graduate student Nathan Corbin PhD ’21, and former postdoc Amruta Karbelkar — performed a series of experiments using three small-molecule catalysts based on porphyrins, a group of compounds that are biologically important for processes ranging from enzyme activity to oxygen transport. Two of the catalysts involve a synthetic porphyrin plus a metal center of either cobalt or iron. The third catalyst is hemin, a natural porphyrin compound used to treat porphyria, a set of disorders that can affect the nervous system. “So even the small-molecule catalysts we chose are kind of inspired by nature,” comments Furst.

    In their experiments, the researchers first needed to modify single strands of DNA and deposit them on one of the electrodes submerged in the solution inside their electrochemical cell. Though this sounds straightforward, it did require some new chemistry. Led by Karbelkar and third-year undergraduate researcher Rachel Ahlmark, the team developed a fast, easy way to attach DNA to electrodes. For this work, the researchers’ focus was on attaching DNA, but the “tethering” chemistry they developed can also be used to attach enzymes (protein catalysts), and Furst believes it will be highly useful as a general strategy for modifying carbon electrodes.

    Once the single strands of DNA were deposited on the electrode, the researchers synthesized complementary strands and attached one of the three catalysts to them. When the DNA strands with the catalyst were added to the solution in the electrochemical cell, they readily hybridized with the DNA strands on the electrode. After half an hour, the researchers applied a voltage to the electrode to chemically convert CO2 dissolved in the solution and used a gas chromatograph to analyze the makeup of the gases produced by the conversion.

    The team found that when the DNA-linked catalysts were freely dispersed in the solution, they were highly soluble — even when they included small-molecule catalysts that don’t dissolve in water on their own. Indeed, while porphyrin-based catalysts in solution often stick together, once the DNA strands were attached, that counterproductive behavior was no longer evident.

    The DNA-linked catalysts in solution were also more stable than their unmodified counterparts. They didn’t degrade at voltages that caused the unmodified catalysts to degrade. “So just attaching that single strand of DNA to the catalyst in solution makes those catalysts more stable,” says Furst. “We don’t even have to put them on the electrode surface to see improved stability.” When converting CO2 in this way, a stable catalyst will give a steady current over time. Experimental results showed that adding the DNA prevented the catalyst from degrading at voltages of interest for practical devices. Moreover, with all three catalysts in solution, the DNA modification significantly increased the production of CO per minute.

    Allowing the DNA-linked catalyst to hybridize with the DNA connected to the electrode brought further improvements, even compared to the same DNA-linked catalyst in solution. For example, as a result of the DNA-directed assembly, the catalyst ended up firmly attached to the electrode, and the catalyst stability was further enhanced. Despite being highly soluble in aqueous solutions, the DNA-linked catalyst molecules remained hybridized at the surface of the electrode, even under harsh experimental conditions.

    Immobilizing the DNA-linked catalyst on the electrode also significantly increased the rate of CO production. In a series of experiments, the researchers monitored the CO production rate with each of their catalysts in solution without attached DNA strands — the conventional setup — and then with them immobilized by DNA on the electrode. With all three catalysts, the amount of CO generated per minute was far higher when the DNA-linked catalyst was immobilized on the electrode.

    In addition, immobilizing the DNA-linked catalyst on the electrode greatly increased the “selectivity” in terms of the products. One persistent challenge in using CO2 to generate CO in aqueous solutions is that there is an inevitable competition between the formation of CO and the formation of hydrogen. That tendency was eased by adding DNA to the catalyst in solution — and even more so when the catalyst was immobilized on the electrode using DNA. For both the cobalt-porphyrin catalyst and the hemin-based catalyst, the formation of CO relative to hydrogen was significantly higher with the DNA-linked catalyst on the electrode than in solution. With the iron-porphyrin catalyst they were about the same. “With the iron, it doesn’t matter whether it’s in solution or on the electrode,” Furst explains. “Both of them have selectivity for CO, so that’s good, too.”
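
    The selectivity the researchers describe is commonly reported as a Faradaic efficiency: the fraction of the charge passed that ends up in each product. The sketch below shows one standard way such numbers could be computed from gas-chromatograph measurements; the product amounts and total charge are placeholders, not data from these experiments.

    ```python
    # Minimal sketch: Faradaic efficiency of CO vs. H2 from measured product amounts.
    # The product amounts and charge below are placeholders, not experimental data.

    FARADAY = 96485.0  # coulombs per mole of electrons

    def faradaic_efficiency(moles_product, electrons_per_molecule, total_charge_coulombs):
        """Fraction of the charge passed that went into making this product."""
        return electrons_per_molecule * FARADAY * moles_product / total_charge_coulombs

    total_charge = 10.0   # C passed during electrolysis (placeholder)
    moles_co = 4.0e-5     # mol CO measured by gas chromatography (placeholder)
    moles_h2 = 0.8e-5     # mol H2 measured by gas chromatography (placeholder)

    fe_co = faradaic_efficiency(moles_co, 2, total_charge)  # CO2 + 2e- + 2H+ -> CO + H2O
    fe_h2 = faradaic_efficiency(moles_h2, 2, total_charge)  # 2H+ + 2e- -> H2

    print(f"FE(CO) = {fe_co:.0%}, FE(H2) = {fe_h2:.0%}, CO:H2 ratio = {fe_co / fe_h2:.1f}")
    ```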

    Progress and plans

    Furst and her team have now demonstrated that their DNA-based approach combines the advantages of the traditional solid-state catalysts and the newer small-molecule ones. In their experiments, they achieved the highly efficient chemical conversion of CO2 to CO and also were able to control the mix of products formed. And they believe that their technique should prove scalable: DNA is inexpensive and widely available, and the amount of catalyst required is several orders of magnitude lower when it’s immobilized using DNA.

    Based on her work thus far, Furst hypothesizes that the structure and spacing of the small molecules on the electrode may directly impact both catalytic efficiency and product selectivity. Using DNA to control the precise positioning of her small-molecule catalysts, she plans to evaluate those impacts and then extrapolate design parameters that can be applied to other classes of energy-conversion catalysts. Ultimately, she hopes to develop a predictive algorithm that researchers can use as they design electrocatalytic systems for a wide variety of applications.

    This research was supported by a grant from the MIT Energy Initiative Seed Fund.

    This article appears in the Spring 2022 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • MIT students contribute to success of historic fusion experiment

    For more than half a century, researchers around the world have been engaged in attempts to achieve fusion ignition in a laboratory, a grand challenge of the 21st century. The High-Energy-Density Physics (HEDP) group at MIT’s Plasma Science and Fusion Center has focused on an approach called inertial confinement fusion (ICF), which uses lasers to implode a pellet of fuel in a quest for ignition. This group, including nine former and current MIT students, was crucial to an historic ICF ignition experiment performed in 2021; the results were published on the anniversary of that success.

    On Aug. 8, 2021, researchers at the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory (LLNL) used 192 laser beams to illuminate the inside of a tiny gold cylinder encapsulating a spherical capsule filled with deuterium-tritium fuel in their quest to produce significant fusion energy. Although researchers had followed this process many times before, using different parameters, this time the ensuing implosion produced an historic fusion yield of 1.37 megajoules, as measured by a suite of neutron diagnostics. These included the MIT-developed and analyzed Magnetic Recoil Spectrometer (MRS). The result, published in Physical Review Letters on Aug. 8, the one-year anniversary of the ground-breaking development, unequivocally indicates that the first controlled fusion experiment reached ignition.

    Governed by the Lawson criterion, a plasma ignites when the internal fusion heating power is high enough to overcome the physical processes that cool the fusion plasma, creating a positive thermodynamic feedback loop that very rapidly increases the plasma temperature. In the case of ICF, ignition is a state where the fusion plasma can initiate a “fuel burn propagation” into the surrounding dense and cold fuel, enabling the possibility of high fusion-energy gain.
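
    Schematically, the ignition condition described above can be read as a simple inequality: the plasma's fusion self-heating (carried largely by alpha particles in a deuterium-tritium plasma) must exceed the power lost to radiation and conduction. The values in this sketch are arbitrary placeholders, not NIF numbers.

    ```python
    # Schematic ignition check: does fusion self-heating outpace the plasma's losses?
    # All power values are arbitrary placeholders, not NIF measurements.

    def ignites(alpha_heating_watts, radiation_loss_watts, conduction_loss_watts):
        """Ignition in the Lawson sense requires self-heating to exceed total losses,
        creating the runaway temperature rise described above."""
        return alpha_heating_watts > radiation_loss_watts + conduction_loss_watts

    print(ignites(alpha_heating_watts=1.5e15,
                  radiation_loss_watts=0.6e15,
                  conduction_loss_watts=0.5e15))  # True: net heating, temperature runs away
    ```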

    “This historic result certainly demonstrates that the ignition threshold is a real concept, with well-predicted theoretical calculations, and that a fusion plasma can be ignited in a laboratory,” says HEDP Division Head Johan Frenje.

    The HEDP division has contributed to the success of the ignition program at the NIF for more than a decade by providing and using a dozen diagnostics, implemented by MIT PhD students and staff, which have been critical for assessing the performance of an implosion. The hundreds of co-authors on the paper attest to the collaborative effort that went into this milestone. MIT’s contributors included the only student co-authors.

    “The students are responsible for implementing and using a diagnostic to obtain data important to the ICF program at the NIF,” says Frenje. “Being responsible for running a diagnostic at the NIF has allowed them to actively participate in the scientific dialog and thus get directly exposed to cutting-edge science.”

    Students involved from the MIT Department of Physics were Neel Kabadi, Graeme Sutcliffe, Tim Johnson, Jacob Pearcy, and Ben Reichelt; students from the Department of Nuclear Science and Engineering included Brandon Lahmann, Patrick Adrian, and Justin Kunimune.

    In addition, former student Alex Zylstra PhD ’15, now a physicist at LLNL, was the experimental lead of this record implosion experiment.

  • A simple way to significantly increase lifetimes of fuel cells and other devices

    In research that could jump-start work on a range of technologies including fuel cells, which are key to storing solar and wind energy, MIT researchers have found a relatively simple way to increase the lifetimes of these devices: changing the pH of the system.

    Fuel and electrolysis cells made of materials known as solid metal oxides are of interest for several reasons. For example, in the electrolysis mode, they are very efficient at converting electricity from a renewable source into a storable fuel like hydrogen or methane that can be used in the fuel cell mode to generate electricity when the sun isn’t shining or the wind isn’t blowing. They can also be made without using costly metals like platinum. However, their commercial viability has been hampered, in part, because they degrade over time. Metal atoms seeping from the interconnects used to construct banks of fuel/electrolysis cells slowly poison the devices.

    “What we’ve been able to demonstrate is that we can not only reverse that degradation, but actually enhance the performance above the initial value by controlling the acidity of the air-electrode interface,” says Harry L. Tuller, the R.P. Simmons Professor of Ceramics and Electronic Materials in MIT’s Department of Materials Science and Engineering (DMSE).

    The research, initially funded by the U.S. Department of Energy through the Office of Fossil Energy and Carbon Management’s (FECM) National Energy Technology Laboratory, should help the department meet its goal of significantly cutting the degradation rate of solid oxide fuel cells by 2035 to 2050.

    “Extending the lifetime of solid oxide fuel cells helps deliver the low-cost, high-efficiency hydrogen production and power generation needed for a clean energy future,” says Robert Schrecengost, acting director of FECM’s Division of Hydrogen with Carbon Management. “The department applauds these advancements to mature and ultimately commercialize these technologies so that we can provide clean and reliable energy for the American people.”

    “I’ve been working in this area my whole professional life, and what I’ve seen until now is mostly incremental improvements,” says Tuller, who was recently named a 2022 Materials Research Society Fellow for his career-long work in solid-state chemistry and electrochemistry. “People are normally satisfied with seeing improvements by factors of tens-of-percent. So, actually seeing much larger improvements and, as importantly, identifying the source of the problem and the means to work around it, issues that we’ve been struggling with for all these decades, is remarkable.”

    Says James M. LeBeau, the John Chipman Associate Professor of Materials Science and Engineering at MIT, who was also involved in the research, “This work is important because it could overcome [some] of the limitations that have prevented the widespread use of solid oxide fuel cells. Additionally, the basic concept can be applied to many other materials used for applications in the energy-related field.”

    A paper describing the work was published Aug. 11 in Energy & Environmental Science. Additional authors of the paper are Han Gil Seo, a DMSE postdoc; Anna Staerz, formerly a DMSE postdoc, now at Interuniversity Microelectronics Centre (IMEC) Belgium and soon to join the Colorado School of Mines faculty; Dennis S. Kim, a DMSE postdoc; Dino Klotz, a DMSE visiting scientist, now at Zurich Instruments; Michael Xu, a DMSE graduate student; and Clement Nicollet, formerly a DMSE postdoc, now at the Université de Nantes. Seo and Staerz contributed equally to the work.

    Changing the acidity

    A fuel/electrolysis cell has three principal parts: two electrodes (a cathode and anode) separated by an electrolyte. In the electrolysis mode, electricity from, say, the wind, can be used to generate storable fuel like methane or hydrogen. On the other hand, in the reverse fuel cell reaction, that storable fuel can be used to create electricity when the wind isn’t blowing.

    A working fuel/electrolysis cell is composed of many individual cells that are stacked together and connected by steel interconnects that include the element chromium to keep the metal from oxidizing. But “it turns out that at the high temperatures that these cells run, some of that chrome evaporates and migrates to the interface between the cathode and the electrolyte, poisoning the oxygen incorporation reaction,” Tuller says. After a certain point, the efficiency of the cell drops so low that it is no longer worth operating.

    “So if you can extend the life of the fuel/electrolysis cell by slowing down this process, or ideally reversing it, you could go a long way towards making it practical,” Tuller says.

    The team showed that you can do both by controlling the acidity of the cathode surface. They also explained what is happening.

    To achieve their results, the team coated the fuel/electrolysis cell cathode with lithium oxide, a compound that changes the relative acidity of the surface from being acidic to being more basic. “After adding a small amount of lithium, we were able to recover the initial performance of a poisoned cell,” Tuller says. When the engineers added even more lithium, the performance improved far beyond the initial value. “We saw improvements of three to four orders of magnitude in the key oxygen reduction reaction rate and attribute the change to populating the surface of the electrode with electrons needed to drive the oxygen incorporation reaction.”

    The engineers went on to explain what is happening by observing the material at the nanoscale, or billionths of a meter, with state-of-the-art transmission electron microscopy and electron energy loss spectroscopy at MIT.nano. “We were interested in understanding the distribution of the different chemical additives [chromium and lithium oxide] on the surface,” says LeBeau.

    They found that the lithium oxide effectively dissolves the chromium to form a glassy material that no longer serves to degrade the cathode performance.

    Applications for sensors, catalysts, and more

    Many technologies like fuel cells are based on the ability of the oxide solids to rapidly breathe oxygen in and out of their crystalline structures, Tuller says. The MIT work essentially shows how to recover — and speed up — that ability by changing the surface acidity. As a result, the engineers are optimistic that the work could be applied to other technologies including, for example, sensors, catalysts, and oxygen permeation-based reactors.

    The team is also exploring the effect of acidity on systems poisoned by different elements, like silica.

    Concludes Tuller: “As is often the case in science, you stumble across something and notice an important trend that was not appreciated previously. Then you test that concept further, and you discover that it is really very fundamental.”

    In addition to the DOE, this work was also funded by the National Research Foundation of Korea, the MIT Department of Materials Science and Engineering via Tuller’s appointment as the R.P. Simmons Professor of Ceramics and Electronic Materials, and the U.S. Air Force Office of Scientific Research.

  • Assay determines the percentage of Omicron, other variants in Covid wastewater

    Wastewater monitoring emerged amid the Covid-19 pandemic as an effective and noninvasive way to track a viral outbreak, and advances in the technology have enabled researchers to not only identify but also quantify the presence of particular variants of concern (VOCs) in wastewater samples.

    Last year, researchers with the Singapore-MIT Alliance for Research and Technology (SMART) made the news for developing a quantitative assay for the Alpha variant of SARS-CoV-2 in wastewater, while also working on a similar assay for the Delta variant. Previously, conventional wastewater detection methods could only detect the presence of SARS-CoV-2 viral material in a sample, without identifying the variant of the virus.

    Now, a team at SMART has developed a quantitative RT-qPCR assay that can detect the Omicron variant of SARS-CoV-2. This type of assay enables wastewater surveillance to accurately trace variant dynamics in a given community or population, and to support and inform public health measures tailored to the specific traits of a particular viral pathogen.

    The capacity to count and assess particular VOCs is unique to SMART’s open-source assay, and allows researchers to accurately determine displacement trends in a community. Hence, the new assay can reveal what proportion of SARS-CoV-2 virus circulating in a community belongs to a particular variant. This is particularly significant, as different SARS-CoV-2 VOCs — Alpha, Delta, Omicron, and their offshoots — have emerged at various points throughout the pandemic, each causing a new wave of infections to which the population was more susceptible.
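
    As a simplified sketch of how an allele-specific qPCR signal translates into a variant proportion, the example below converts cycle-threshold (Ct) values into copy numbers via a log-linear standard curve and computes an Omicron fraction. The curve parameters and Ct values are placeholders; the SMART team's actual assay design and calibration are detailed in the paper.

    ```python
    # Minimal sketch: estimating the Omicron fraction from allele-specific qPCR signals.
    # Standard-curve parameters and Ct values are placeholders, not values from the SMART assay.

    def copies_from_ct(ct, slope=-3.32, intercept=38.0):
        """Convert a qPCR cycle-threshold value to copies using a log-linear
        standard curve of the form ct = slope * log10(copies) + intercept."""
        return 10 ** ((ct - intercept) / slope)

    ct_omicron_allele = 27.5   # signal from an Omicron-specific primer/probe set (placeholder)
    ct_delta_allele = 31.0     # signal from a Delta-specific primer/probe set (placeholder)

    omicron_copies = copies_from_ct(ct_omicron_allele)
    delta_copies = copies_from_ct(ct_delta_allele)

    omicron_fraction = omicron_copies / (omicron_copies + delta_copies)
    print(f"Estimated Omicron share of circulating virus: {omicron_fraction:.0%}")
    ```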

    The team’s new allele-specific RT-qPCR assay is described in a paper, “Rapid displacement of SARS-CoV-2 variant Delta by Omicron revealed by allele-specific PCR in wastewater,” published this month in Water Research. Senior author on the work is Eric Alm, professor of biological engineering at MIT and a principal investigator in the Antimicrobial Resistance (AMR) interdisciplinary research group within SMART, MIT’s research enterprise in Singapore. Co-authors include researchers from Nanyang Technological University (NTU), the National University of Singapore (NUS), MIT, the Singapore Centre for Environmental Life Sciences Engineering (SCELSE), and the Istituto Zooprofilattico Sperimentale della Lombardia e dell’Emilia Romagna (IZSLER) in Italy.

    Omicron overtakes Delta within three weeks in Italy study

    In their study, SMART researchers found that the increase in booster vaccine coverage in Italy coincided with the complete displacement of the Delta variant by the Omicron variant in wastewater samples obtained from the Torbole Casaglia wastewater treatment plant, which serves a catchment of 62,722 people. The displacement took less than three weeks; this rapid pace can be attributed to Omicron’s infection advantage over the previously dominant Delta in vaccinated individuals, which may stem from Omicron’s more efficient evasion of vaccination-induced immunity.

    “In a world where Covid-19 is endemic, the monitoring of VOCs through wastewater surveillance will be an effective tool for the tracking of variants circulating in the community and will play an increasingly important role in guiding public health response,” says paper co-author Federica Armas, a senior postdoc at SMART AMR. “This work has demonstrated that wastewater surveillance can be used to quickly and quantitatively trace VOCs present in a community.”

    Wastewater surveillance vital for future pandemic responses

    As the global population becomes increasingly vaccinated and exposed to prior infections, nations have begun transitioning toward the classification of SARS-CoV-2 as an endemic disease, rolling back active clinical surveillance toward decentralized antigen rapid tests, and consequently reducing sequencing of patient samples. However, SARS-CoV-2 has been shown to produce novel VOCs that can swiftly emerge and spread rapidly across populations, displacing previously dominant variants of the virus. This was observed when Delta displaced Alpha across the globe after the former’s emergence in India in December 2020, and again when Omicron displaced Delta at an even faster rate following its discovery in South Africa in November 2021. The continuing emergence of novel VOCs therefore necessitates continued vigilance on the monitoring of circulating SARS-CoV-2 variants in communities.

    In a separate review paper on wastewater surveillance titled “Making Waves: Wastewater Surveillance of SARS-CoV-2 in an Endemic Future,” published in the journal Water Research, SMART researchers and collaborators found that the utility of wastewater surveillance in the near future could include 1) monitoring the trend of viral loads in wastewater for quantified viral estimates circulating in a community; 2) sampling of wastewater at the source — e.g., taking samples from particular neighborhoods or buildings — for pinpointing infections in neighborhoods and at the building level; 3) integrating wastewater and clinical surveillance for cost-efficient population surveillance; and 4) genome sequencing wastewater samples to track circulating and emerging variants in the population.

    “Our experience with SARS-CoV-2 has shown that clinical testing can often only paint a limited picture of the true extent of an outbreak or pandemic. With Covid-19 becoming prevalent and with the anticipated emergence of further variants of concern, qualitative and quantitative data from wastewater surveillance will be an integral component of a cost- and resource-efficient public health surveillance program, empowering authorities to make more informed policy decisions,” adds corresponding author Janelle Thompson, associate professor at SCELSE and NTU. “Our review provides a roadmap for the wider deployment of wastewater surveillance, with opportunities and challenges that, if addressed, will enable us to not only better manage Covid-19, but also future-proof societies for other viral pathogens and future pandemics.”

    In addition, the review suggests that future wastewater research should comply with a set of standardized wastewater processing methods to reduce inconsistencies in wastewater data toward improving epidemiological inference. Methods developed in the context of SARS-CoV-2 and its analyses could be of invaluable benefit for future wastewater monitoring work on discovering emerging zoonotic pathogens — pathogens that can be transmitted from animals to humans — and for early detection of future pandemics.

    Furthermore, far from being confined to SARS-CoV-2, wastewater surveillance has already been adapted for use in combating other viral pathogens. Another paper from September 2021 described an advance in the development of effective wastewater surveillance for dengue, Zika, and yellow fever viruses, with SMART researchers successfully measuring decay rates of these medically significant arboviruses in wastewater. This was followed by another review paper by SMART published in July 2022 that explored current progress and future challenges and opportunities in wastewater surveillance for arboviruses. These developments represent an important first step toward establishing arbovirus wastewater surveillance, which would help policymakers in Singapore and beyond make better informed and more targeted public health measures in controlling arbovirus outbreaks such as dengue, which is a significant public health concern in Singapore.

    “Our learnings from using wastewater surveillance as a key tool over the course of Covid-19 will be crucial in helping researchers develop similar methods to monitor and tackle other viral pathogens and future pandemics,” says Lee Wei Lin, first author of the latest SMART paper and research scientist at SMART AMR. “Wastewater surveillance has already shown promising utility in helping to fight other viral pathogens, including some of the world’s most prevalent mosquito-borne diseases, and there is significant potential for the technology to be adapted for use against other infectious viral diseases.”

    The research is carried out by SMART and its collaborators at SCELSE, NTU, and NUS, co-led by Professor Eric Alm (SMART and MIT) and Associate Professor Janelle Thompson (SCELSE and NTU), and is supported by Singapore’s National Research Foundation (NRF) under its Campus for Research Excellence And Technological Enterprise (CREATE) program. The research is part of an initiative funded by the NRF to develop sewage-based surveillance for rapid outbreak detection and intervention in Singapore.

    SMART was established by MIT in partnership with the NRF in 2007. SMART is the first entity in CREATE developed by NRF and serves as an intellectual and innovation hub for research interactions between MIT and Singapore, undertaking cutting-edge research projects in areas of interest to both Singapore and MIT. SMART currently comprises an Innovation Centre and five interdisciplinary research groups: AMR, Critical Analytics for Manufacturing Personalized-Medicine, Disruptive & Sustainable Technologies for Agricultural Precision, Future Urban Mobility, and Low Energy Electronic Systems.

    The AMR IRG is a translational research and entrepreneurship program that tackles the growing threat of antimicrobial resistance. By leveraging talent and convergent technologies across Singapore and MIT, the group tackles AMR head-on, developing multiple innovative and disruptive approaches to identify, respond to, and treat drug-resistant microbial infections. Through strong scientific and clinical collaborations, its goal is to provide transformative, holistic solutions for Singapore and the world.

  • These neurons have food on the brain

    A gooey slice of pizza. A pile of crispy French fries. Ice cream dripping down a cone on a hot summer day. When you look at any of these foods, a specialized part of your visual cortex lights up, according to a new study from MIT neuroscientists.

    This newly discovered population of food-responsive neurons is located in the ventral visual stream, alongside populations that respond specifically to faces, bodies, places, and words. The unexpected finding may reflect the special significance of food in human culture, the researchers say. 

    “Food is central to human social interactions and cultural practices. It’s not just sustenance,” says Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience and a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds, and Machines. “Food is core to so many elements of our cultural identity, religious practice, and social interactions, and many other things that humans do.”

    The findings, based on an analysis of a large public database of human brain responses to a set of 10,000 images, raise many additional questions about how and why this neural population develops. In future studies, the researchers hope to explore how people’s responses to certain foods might differ depending on their likes and dislikes, or their familiarity with certain types of food.

    MIT postdoc Meenakshi Khosla is the lead author of the paper, along with MIT research scientist N. Apurva Ratan Murty. The study appears today in the journal Current Biology.

    Visual categories

    More than 20 years ago, while studying the ventral visual stream, the part of the brain that recognizes objects, Kanwisher discovered cortical regions that respond selectively to faces. Later, she and other scientists discovered other regions that respond selectively to places, bodies, or words. Most of those areas were discovered when researchers specifically set out to look for them. However, that hypothesis-driven approach can limit what you end up finding, Kanwisher says.

    “There could be other things that we might not think to look for,” she says. “And even when we find something, how do we know that that’s actually part of the basic dominant structure of that pathway, and not something we found just because we were looking for it?”

    To try to uncover the fundamental structure of the ventral visual stream, Kanwisher and Khosla decided to analyze a large, publicly available dataset of full-brain functional magnetic resonance imaging (fMRI) responses from eight human subjects as they viewed thousands of images.

    “We wanted to see when we apply a data-driven, hypothesis-free strategy, what kinds of selectivities pop up, and whether those are consistent with what had been discovered before. A second goal was to see if we could discover novel selectivities that either haven’t been hypothesized before, or that have remained hidden due to the lower spatial resolution of fMRI data,” Khosla says.

    To do that, the researchers applied a mathematical method that allows them to discover neural populations that can’t be identified from traditional fMRI data. An fMRI image is made up of many voxels — three-dimensional units that represent a cube of brain tissue. Each voxel contains hundreds of thousands of neurons, and if some of those neurons belong to smaller populations that respond to one type of visual input, their responses may be drowned out by other populations within the same voxel.

    The new analytical method, which Kanwisher’s lab has previously used on fMRI data from the auditory cortex, can tease out responses of neural populations within each voxel of fMRI data.
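
    The idea of unmixing responses that are blended within voxels can be illustrated with a generic matrix factorization. The sketch below runs off-the-shelf non-negative matrix factorization on simulated data; the study's actual decomposition method differs in its details, so this is only a conceptual stand-in.

    ```python
    # Rough illustration of voxel decomposition using generic non-negative matrix
    # factorization (NMF). The study's actual algorithm differs in detail; this only
    # sketches the idea of unmixing component responses blended within voxels.
    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(0)
    n_voxels, n_images, n_components = 500, 1000, 5

    # Simulated ground truth: each component has its own response profile across images,
    # and each voxel blends the components with non-negative weights.
    true_responses = rng.gamma(2.0, 1.0, size=(n_components, n_images))
    voxel_weights = rng.gamma(2.0, 1.0, size=(n_voxels, n_components))
    data = voxel_weights @ true_responses + 0.1 * rng.random((n_voxels, n_images))

    # Factor the voxels-by-images matrix into voxel weights and component response profiles.
    model = NMF(n_components=n_components, init="nndsvda", max_iter=500)
    estimated_weights = model.fit_transform(data)    # shape: (n_voxels, n_components)
    estimated_responses = model.components_          # shape: (n_components, n_images)

    print(estimated_weights.shape, estimated_responses.shape)
    ```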

    Using this approach, the researchers found four populations that corresponded to previously identified clusters that respond to faces, places, bodies, and words. “That tells us that this method works, and it tells us that the things that we found before are not just obscure properties of that pathway, but major, dominant properties,” Kanwisher says.

    Intriguingly, a fifth population also emerged, and this one appeared to be selective for images of food.

    “We were first quite puzzled by this because food is not a visually homogenous category,” Khosla says. “Things like apples and corn and pasta all look so unlike each other, yet we found a single population that responds similarly to all these diverse food items.”

    The food-specific population, which the researchers call the ventral food component (VFC), appears to be spread across two clusters of neurons, located on either side of the fusiform face area (FFA). The fact that the food-specific populations are spread out between other category-specific populations may help explain why they have not been seen before, the researchers say.

    “We think that food selectivity had been harder to characterize before because the populations that are selective for food are intermingled with other nearby populations that have distinct responses to other stimulus attributes. The low spatial resolution of fMRI prevents us from seeing this selectivity because the responses of different neural populations get mixed in a voxel,” Khosla says.

    “The technique which the researchers used to identify category-sensitive cells or areas is impressive, and it recovered known category-sensitive systems, making the food category findings most impressive,” says Paul Rozin, a professor of psychology at the University of Pennsylvania, who was not involved in the study. “I can’t imagine a way for the brain to reliably identify the diversity of foods based on sensory features. That makes this all the more fascinating, and likely to clue us in about something really new.”

    Food vs. non-food

    The researchers also used the data to train a computational model of the VFC, based on previous models Murty had developed for the brain’s face and place recognition areas. This allowed the researchers to run additional experiments and predict the responses of the VFC. In one experiment, they fed the model matched images of food and non-food items that looked very similar — for example, a banana and a yellow crescent moon.

    “Those matched stimuli have very similar visual properties, but the main attribute in which they differ is edible versus inedible,” Khosla says. “We could feed those arbitrary stimuli through the predictive model and see whether it would still respond more to food than non-food, without having to collect the fMRI data.”

    They could also use the computational model to analyze much larger datasets, consisting of millions of images. Those simulations helped to confirm that the VFC is highly selective for images of food.

    From their analysis of the human fMRI data, the researchers found that in some subjects, the VFC responded slightly more to processed foods such as pizza than to unprocessed foods like apples. In the future they hope to explore how factors such as familiarity and like or dislike of a particular food might affect individuals’ responses to that food.

    They also hope to study when and how this region becomes specialized during early childhood, and what other parts of the brain it communicates with. Another question is whether this food-selective population will be seen in other animals such as monkeys, who do not attach the cultural significance to food that humans do.

    The research was funded by the National Institutes of Health, the National Eye Institute, and the National Science Foundation through the MIT Center for Brains, Minds, and Machines.

  • Designing zeolites, porous materials made to trap molecules

    Zeolites are a class of minerals used in everything from industrial catalysts and chemical filters to laundry detergents and cat litter. They are mostly composed of silicon and aluminum — two abundant, inexpensive elements — plus oxygen; they have a crystalline structure; and most significantly, they are porous. Among the regularly repeating atomic patterns in them are tiny interconnected openings, or pores, that can trap molecules that just fit inside them, allow smaller ones to pass through, or block larger ones from entering. A zeolite can remove unwanted molecules from gases and liquids, or trap them temporarily and then release them, or hold them while they undergo rapid chemical reactions.

    Some zeolites occur naturally, but they take unpredictable forms and have variable-sized pores. “People synthesize artificial versions to ensure absolute purity and consistency,” says Rafael Gómez-Bombarelli, the Jeffrey Cheah Career Development Chair in Engineering in the Department of Materials Science and Engineering (DMSE). And they work hard to influence the size of the internal pores in hopes of matching the molecule or other particle they’re looking to capture.

    The basic recipe for making zeolites sounds simple. Mix together the raw ingredients — basically, silicon dioxide and aluminum oxide — and put them in a reactor for a few days at a high temperature and pressure. Depending on the ratio between the ingredients and the temperature, pressure, and timing, as the initial gel slowly solidifies into crystalline form, different zeolites emerge.

    But there’s one special ingredient to add “to help the system go where you want it to go,” says Gómez-Bombarelli. “It’s a molecule that serves as a template so that the zeolite you want will crystallize around it and create pores of the desired size and shape.”

    The so-called templating molecule binds to the material before it solidifies. As crystallization progresses, the molecule directs the structure, or “framework,” that forms around it. After crystallization, the temperature is raised and the templating molecule burns off, leaving behind a solid aluminosilicate material filled with open pores that are — given the correct templating molecule and synthesis conditions — just the right size and shape to recognize the targeted molecule.

    The zeolite conundrum

    Theoretical studies suggest that there should be hundreds of thousands of possible zeolites. But despite some 60 years of intensive research, only about 250 zeolites have been made. This is sometimes called the “zeolite conundrum.” Why haven’t more been made — especially now, when they could help ongoing efforts to decarbonize energy and the chemical industry?

    One challenge is figuring out the best recipe for making them: Factors such as the best ratio between the silicon and aluminum, what cooking temperature to use, and whether to stir the ingredients all influence the outcome. But the real key, the researchers say, lies in choosing a templating molecule that’s best for producing the intended zeolite framework. Making that match is difficult: There are hundreds of known templating molecules and potentially a million zeolites, and researchers are continually designing new molecules because millions more could be made and might work better.

    For decades, the exploration of how to synthesize a particular zeolite has been done largely by trial and error — a time-consuming, expensive, inefficient way to go about it. There has also been considerable effort to use “atomistic” (atom-by-atom) simulation to figure out what known or novel templating molecule to use to produce a given zeolite. But the experimental and modeling results haven’t generated reliable guidance. In many cases, researchers have carefully selected or designed a molecule to make a particular zeolite, but when they tried their molecule in the lab, the zeolite that formed wasn’t what they expected or desired. So they needed to start over.

    Those experiences illustrate what Gómez-Bombarelli and his colleagues believe is the problem that’s been plaguing zeolite design for decades. All the efforts — both experimental and theoretical — have focused on finding the templating molecule that’s best for forming a specific zeolite. But what if that templating molecule is also really good — or even better — at forming some other zeolite?

    To determine the “best” molecule for making a certain zeolite framework, and the “best” zeolite framework to act as host to a particular molecule, the researchers decided to look at both sides of the pairing. Daniel Schwalbe-Koda PhD ’22, a former member of Gómez-Bombarelli’s group and now a postdoc at Lawrence Livermore National Laboratory, describes the process as a sort of dance with molecules and zeolites in a room looking for partners. “Each molecule wants to find a partner zeolite, and each zeolite wants to find a partner molecule,” he says. “But it’s not enough to find a good dance partner from the perspective of only one dancer. The potential partner could prefer to dance with someone else, after all. So it needs to be a particularly good pairing.” The upshot: “You need to look from the perspective of each of them.”

    To find the best match from both perspectives, the researchers needed to try every molecule with every zeolite and quantify how well the pairings worked.

    A broader metric for evaluating pairs

    Before performing that analysis, the researchers defined a new “evaluating metric” that they could use to rank each templating molecule-zeolite pair. The standard metric for measuring the affinity between a molecule and a zeolite is “binding energy,” that is, how strongly the molecule clings to the zeolite or, conversely, how much energy is required to separate the two. While recognizing the value of that metric, the MIT-led team wanted to take more parameters into account.

    Their new evaluating metric therefore includes not only binding energy but also the size, shape, and volume of the molecule and the opening in the zeolite framework. And their approach calls for turning the molecule to different orientations to find the best possible fit.

    Affinity scores for all molecule-zeolite pairs based on that evaluating metric would enable zeolite researchers to answer two key questions: What templating molecule will form the zeolite that I want? And if I use that templating molecule, what other zeolites might it form instead? Using the molecule-zeolite affinity scores, researchers could first identify molecules that look good for making a desired zeolite. They could then rule out the ones that also look good for forming other zeolites, leaving a set of molecules deemed to be “highly selective” for making the desired zeolite.  
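
    A toy version of that filtering logic is sketched below: score each candidate molecule by how much its affinity for the target framework beats its best affinity for any competing framework. The molecule names and scores are invented placeholders; in the team's workflow the scores come from atomistic simulations of the full composite metric.

    ```python
    # Toy sketch of the selectivity filter described above. Molecule names and scores
    # are invented placeholders; real scores come from the team's atomistic simulations.
    # Convention here: more negative means stronger affinity (binding-energy-like).

    affinity = {
        "molecule_A": {"CHA": -8.5, "AEI": -8.3, "MFI": -4.0},
        "molecule_B": {"CHA": -7.9, "AEI": -3.1, "MFI": -2.8},
        "molecule_C": {"CHA": -5.2, "AEI": -9.0, "MFI": -4.5},
    }

    def selectivity_margin(scores, target):
        """How much better (more negative) the target affinity is than the best competitor;
        a positive margin means the molecule is selective for the target framework."""
        best_other = min(score for framework, score in scores.items() if framework != target)
        return best_other - scores[target]

    target = "CHA"  # chabazite
    ranked = sorted(affinity, key=lambda m: selectivity_margin(affinity[m], target), reverse=True)
    for name in ranked:
        print(name, round(selectivity_margin(affinity[name], target), 2))
    # molecule_B ranks first: only moderately strong for CHA, but far weaker for everything else.
    ```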

    Validating the approach: A rich literature

    But does their new metric work better than the standard one? To find out, the team needed to perform atomistic simulations using their new evaluating metric and then benchmark their results against experimental evidence reported in the literature. There are many thousands of journal articles reporting on experiments involving zeolites — in many cases, detailing not only the molecule-zeolite pairs and outcomes but also synthesis conditions and other details. Ferreting out articles with the information the researchers needed was a job for machine learning — in particular, for natural language processing.

    For that task, Gómez-Bombarelli and Schwalbe-Koda turned to their DMSE colleague Elsa Olivetti PhD ’07, the Esther and Harold E. Edgerton Associate Professor in Materials Science and Engineering. Using a literature-mining technique that she and a group of collaborators had developed, she and her DMSE team processed more than 2 million materials science papers, found some 90,000 relating to zeolites, and extracted 1,338 of them for further analysis. The yield was 549 templating molecules tested, 209 zeolite frameworks produced, and 5,663 synthesis routes followed.

    Based on those findings, the researchers used their new evaluating metric and a novel atomistic simulation technique to examine more than half-a-million templating molecule-zeolite pairs. Their results reproduced experimental outcomes reported in more than a thousand journal articles. Indeed, the new metric outperformed the traditional binding energy metric, and their simulations were orders of magnitude faster than traditional approaches.

    Ready for experimental investigations

    Now the researchers were ready to put their approach to the test: They would use it to design new templating molecules and try them out in experiments performed by a team led by Yuriy Román-Leshkov, the Robert T. Haslam (1911) Professor of Chemical Engineering, and a team from the Instituto de Tecnologia Química in Valencia, Spain, led by Manuel Moliner and Avelino Corma.

    One set of experiments focused on a zeolite called chabazite, which is used in catalytic converters for vehicles. Using their techniques, the researchers designed a new templating molecule for synthesizing chabazite, and the experimental results confirmed their approach. Their analyses had shown that the new templating molecule would be good for forming chabazite and not for forming anything else. “Its binding strength isn’t as high as other molecules for chabazite, so people hadn’t used it,” says Gómez-Bombarelli. “But it’s pretty good, and it’s not good for anything else, so it’s selective — and it’s way cheaper than the usual ones.”

    In addition, in their new molecule, the electrical charge is distributed differently than in the traditional ones, which led to new possibilities. The researchers found that by adjusting both the shape and charge of the molecule, they could control where the negative charge occurs on the pore that’s created in the final zeolite. “The charge placement that results can make the chabazite a much better catalyst than it was before,” says Gómez-Bombarelli. “So our same rules for molecule design also determine where the negative charge is going to end up, which can lead to whole different classes of catalysts.”

    Schwalbe-Koda describes another experiment that demonstrates the importance of molecular shape as well as the types of new materials made possible using the team’s approach. In one striking example, the team designed a templating molecule with a height and width that’s halfway between those of two molecules that are now commonly used — one for making chabazite and the other for making a zeolite called AEI. (Every new zeolite structure is examined by the International Zeolite Association and — once approved — receives a three-letter designation.)

    Experiments using that in-between templating molecule resulted in the formation of not one zeolite or the other, but a combination of the two in a single solid. “The result blends two different structures together in a way that the final result is better than the sum of its parts,” says Schwalbe-Koda. “The catalyst is like the one used in catalytic converters in today’s trucks — only better.” It’s more efficient in converting nitrogen oxides to harmless nitrogen gases and water, and — because of the two different pore sizes and the aluminosilicate composition — it works well on exhaust that’s fairly hot, as during normal operation, and also on exhaust that’s fairly cool, as during startup.

    Putting the work into practice

    As with all materials, the commercial viability of a zeolite will depend in part on the cost of making it. The researchers’ technique can identify promising templating molecules, but some of them may be difficult to synthesize in the lab. As a result, the overall cost of that molecule-zeolite combination may be too high to be competitive.

    Gómez-Bombarelli and his team therefore include in their assessment process a calculation of cost for synthesizing each templating molecule they identified — generally the most expensive part of making a given zeolite. They use a publicly available model devised in 2018 by Connor Coley PhD ’19, now the Henri Slezynger (1957) Career Development Assistant Professor of Chemical Engineering at MIT. The model takes into account all the starting materials and the step-by-step chemical reactions needed to produce the targeted templating molecule.
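
    The cost estimate can then be folded directly into the ranking of candidates. The sketch below is a hypothetical illustration of that idea: a stub stands in for the published synthesis-cost model, and the weighting between performance and cost is an arbitrary choice. With these illustrative numbers, the cheaper but slightly less active molecule ranks first, which is exactly the kind of trade-off discussed next.

```python
# Hypothetical sketch of cost-aware ranking of templating-molecule candidates.
# `estimate_synthesis_cost` is a stub standing in for the published model, which
# accounts for starting materials and the reaction steps needed to make a molecule.

def estimate_synthesis_cost(molecule: str) -> float:
    """Stub: return a relative synthesis cost in arbitrary units (illustration only)."""
    return {"OSDA-1": 1.0, "OSDA-2": 50.0}.get(molecule, 10.0)

def rank_by_value(candidates, cost_weight=0.01):
    """Rank (molecule, zeolite, score) tuples by score minus a cost penalty."""
    def value(item):
        molecule, _zeolite, score = item
        return score - cost_weight * estimate_synthesis_cost(molecule)
    return sorted(candidates, key=value, reverse=True)

if __name__ == "__main__":
    candidates = [("OSDA-1", "CHA", 0.72), ("OSDA-2", "CHA", 0.80)]
    print(rank_by_value(candidates))
```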

    However, commercialization decisions aren’t based solely on cost. Sometimes there’s a trade-off between cost and performance. “For instance, given our chabazite findings, would customers or the community trade a little bit of activity for a 100-fold decrease in the cost of the templating molecule?” says Gómez-Bombarelli. “The answer is likely yes. So we’ve made a tool that can help them navigate that trade-off.” And there are other factors to consider. For example, is this templating molecule truly novel, or have others already studied it — or perhaps even hold a patent on it?

    “While an algorithm can guide development of templating molecules and quantify specific molecule-zeolite matches, other types of assessments are best left to expert judgment,” notes Schwalbe-Koda. “We need a partnership between computational analysis and human intuition and experience.”

    To that end, the MIT researchers and their colleagues decided to share their techniques and findings with other zeolite researchers. Led by Schwalbe-Koda, they created an online database that they made publicly accessible and easy to use — an unusual step, given the competitive industries that rely on zeolites. The interactive website — zeodb.mit.edu — contains the researchers’ final metrics for templating molecule-zeolite pairs resulting from hundreds of thousands of simulations; all the identified journal articles, along with which molecules and zeolites were examined and what synthesis conditions were used; and many more details. Users are free to search and organize the data in any way that suits them.

    Gómez-Bombarelli, Schwalbe-Koda, and their colleagues hope that their techniques and the interactive website will help other researchers explore and discover promising new templating molecules and zeolites, some of which could have profound impacts on efforts to decarbonize energy and tackle climate change.

    This research involved a team of collaborators at MIT, the Instituto de Tecnologia Química (UPV-CSIC), and Stockholm University. The work was supported in part by the MIT Energy Initiative Seed Fund Program and by seed funds from the MIT International Science and Technology Initiative. Daniel Schwalbe-Koda was supported by an ExxonMobil-MIT Energy Fellowship in 2020–21.

    This article appears in the Spring 2022 issue of Energy Futures, the magazine of the MIT Energy Initiative.


    Taking a magnifying glass to data center operations

    When the MIT Lincoln Laboratory Supercomputing Center (LLSC) unveiled its TX-GAIA supercomputer in 2019, it provided the MIT community with a powerful new resource for applying artificial intelligence to their research. Anyone at MIT can submit a job to the system, which churns through trillions of operations per second to train models for diverse applications, such as spotting tumors in medical images, discovering new drugs, or modeling climate effects. But with this great power comes the great responsibility of managing and operating it in a sustainable manner — and the team is looking for ways to improve.

    “We have these powerful computational tools that let researchers build intricate models to solve problems, but they can essentially be used as black boxes. What gets lost in there is whether we are actually using the hardware as effectively as we can,” says Siddharth Samsi, a research scientist in the LLSC. 

    To gain insight into this challenge, the LLSC has been collecting detailed data on TX-GAIA usage over the past year. More than a million user jobs later, the team has released the dataset as open source to the computing community.

    Their goal is to empower computer scientists and data center operators to better understand avenues for data center optimization — an important task as processing needs continue to grow. They also see potential for leveraging AI in the data center itself, by using the data to develop models for predicting failure points, optimizing job scheduling, and improving energy efficiency. While cloud providers are actively working on optimizing their data centers, they do not often make their data or models available for the broader high-performance computing (HPC) community to leverage. The release of this dataset and associated code seeks to fill this gap.

    “Data centers are changing. We have an explosion of hardware platforms, the types of workloads are evolving, and the types of people who are using data centers are changing,” says Vijay Gadepally, a senior researcher at the LLSC. “Until now, there hasn’t been a great way to analyze the impact on data centers. We see this research and dataset as a big step toward coming up with a principled approach to understanding how these variables interact with each other and then applying AI for insights and improvements.”

    Papers describing the dataset and potential applications have been accepted to a number of venues, including the IEEE International Symposium on High-Performance Computer Architecture, the IEEE International Parallel and Distributed Processing Symposium, the Annual Conference of the North American Chapter of the Association for Computational Linguistics, the IEEE High-Performance and Embedded Computing Conference, and the International Conference for High Performance Computing, Networking, Storage and Analysis.

    Workload classification

    TX-GAIA, which ranks among the world’s TOP500 supercomputers, combines traditional computing hardware (central processing units, or CPUs) with nearly 900 graphics processing unit (GPU) accelerators. These NVIDIA GPUs are specialized for deep learning, the class of AI that has given rise to speech recognition and computer vision.

    The dataset covers CPU, GPU, and memory usage by job; scheduling logs; and physical monitoring data. Compared to similar datasets, such as those from Google and Microsoft, the LLSC dataset offers “labeled data, a variety of known AI workloads, and more detailed time series data compared with prior datasets. To our knowledge, it’s one of the most comprehensive and fine-grained datasets available,” Gadepally says. 

    Notably, the team collected time-series data at an unprecedented level of detail: 100-millisecond intervals on every GPU and 10-second intervals on every CPU, as the machines processed more than 3,000 known deep-learning jobs. One of the first goals is to use this labeled dataset to characterize the workloads that different types of deep-learning jobs place on the system. This process would extract features that reveal differences in how the hardware processes natural language models versus image classification or materials design models, for example.   

    The team has now launched the MIT Datacenter Challenge to mobilize this research. The challenge invites researchers to use AI techniques to identify with 95 percent accuracy the type of job that was run, using their labeled time-series data as ground truth.
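
    As a rough sketch of what a challenge entry might look like, the snippet below trains a simple classifier on per-job summary features. The feature names and the synthetic data are hypothetical placeholders for the released dataset's actual schema, so the accuracy printed here is meaningless; the code only illustrates the shape of the task.

```python
# Sketch of a possible Datacenter Challenge baseline: classify the workload type
# of a job from aggregated GPU/CPU utilization features. Feature names and data
# are synthetic placeholders, so the printed accuracy is not meaningful here.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_jobs = 1000
jobs = pd.DataFrame({
    "gpu_util_mean": rng.uniform(0, 100, n_jobs),     # hypothetical per-job summaries
    "gpu_mem_mean_gb": rng.uniform(0, 32, n_jobs),
    "cpu_util_mean": rng.uniform(0, 100, n_jobs),
    "workload_type": rng.choice(["nlp", "vision", "materials"], n_jobs),
})

X = jobs[["gpu_util_mean", "gpu_mem_mean_gb", "cpu_util_mean"]]
y = jobs["workload_type"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```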

    Such insights could enable data centers to better match a user’s job request with the hardware best suited for it, potentially conserving energy and improving system performance. Classifying workloads could also allow operators to quickly notice discrepancies resulting from hardware failures, inefficient data access patterns, or unauthorized usage.

    Too many choices

    Today, the LLSC offers tools that let users submit their job and select the processors they want to use, “but it’s a lot of guesswork on the part of users,” Samsi says. “Somebody might want to use the latest GPU, but maybe their computation doesn’t actually need it and they could get just as impressive results on CPUs, or lower-powered machines.”

    Professor Devesh Tiwari at Northeastern University is working with the LLSC team to develop techniques that can help users match their workloads to appropriate hardware. Tiwari explains that the emergence of different types of AI accelerators, GPUs, and CPUs has left users suffering from too many choices. Without the right tools to take advantage of this heterogeneity, they are missing out on the benefits: better performance, lower costs, and greater productivity.

    “We are fixing this very capability gap — making users more productive and helping users do science better and faster without worrying about managing heterogeneous hardware,” says Tiwari. “My PhD student, Baolin Li, is building new capabilities and tools to help HPC users leverage heterogeneity near-optimally without user intervention, using techniques grounded in Bayesian optimization and other learning-based optimization methods. But, this is just the beginning. We are looking into ways to introduce heterogeneity in our data centers in a principled approach to help our users achieve the maximum advantage of heterogeneity autonomously and cost-effectively.”
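
    The quote describes a general approach rather than a published recipe, but the idea can be sketched: treat the hardware configuration as a search space and let a Bayesian optimizer propose configurations that trade off runtime against energy. In the sketch below, the benchmark function, hardware labels, and weighting are all hypothetical, and scikit-optimize is used only as one convenient off-the-shelf optimizer.

```python
# Hypothetical sketch of Bayesian optimization over hardware configurations.
# run_benchmark is a stub; in practice it would run (or estimate) the workload
# on the chosen hardware. scikit-optimize is one possible off-the-shelf optimizer.
from skopt import gp_minimize
from skopt.space import Categorical, Integer
from skopt.utils import use_named_args

space = [
    Categorical(["cpu_only", "older_gpu", "volta_gpu"], name="device"),  # hypothetical labels
    Integer(1, 8, name="num_workers"),
]

def run_benchmark(device, num_workers):
    """Stub returning (runtime in seconds, energy in kilojoules) for one trial run."""
    base_runtime = {"cpu_only": 900.0, "older_gpu": 300.0, "volta_gpu": 120.0}[device]
    runtime_s = base_runtime / num_workers ** 0.5
    energy_kj = runtime_s * (0.2 if device == "cpu_only" else 0.8)
    return runtime_s, energy_kj

@use_named_args(space)
def objective(device, num_workers):
    """Scalar cost combining runtime and energy; the 0.1 weight is arbitrary."""
    runtime_s, energy_kj = run_benchmark(device, num_workers)
    return runtime_s + 0.1 * energy_kj

result = gp_minimize(objective, space, n_calls=20, random_state=0)
print("best configuration:", result.x, "cost:", round(result.fun, 1))
```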

    Workload classification is the first of many problems to be posed through the Datacenter Challenge. Others include developing AI techniques to predict job failures, conserve energy, or create job scheduling approaches that improve data center cooling efficiencies.

    Energy conservation 

    To mobilize research into greener computing, the team is also planning to release an environmental dataset of TX-GAIA operations, containing rack temperature, power consumption, and other relevant data.

    According to the researchers, huge opportunities exist to improve the power efficiency of HPC systems being used for AI processing. As one example, recent work in the LLSC determined that simple hardware tuning, such as limiting the amount of power an individual GPU can draw, could reduce the energy cost of training an AI model by 20 percent, with only modest increases in computing time. “This reduction translates to approximately an entire week’s worth of household energy for a mere three-hour time increase,” Gadepally says.
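
    Power capping of this kind can be applied with NVIDIA's standard tooling before a job is launched. The snippet below is a minimal sketch of setting a per-GPU power limit; the 250-watt cap is an illustrative value, not the setting used in the LLSC study, and the command typically requires administrative privileges.

```python
# Sketch of capping GPU power draw with nvidia-smi before launching a training job.
# The 250 W value is illustrative only; the command usually requires admin rights.
import subprocess

def cap_gpu_power(gpu_index: int, watts: int) -> None:
    """Set a per-GPU power limit via nvidia-smi."""
    subprocess.run(
        ["nvidia-smi", "-i", str(gpu_index), "--power-limit", str(watts)],
        check=True,
    )

if __name__ == "__main__":
    cap_gpu_power(gpu_index=0, watts=250)  # then launch the training job as usual
```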

    They have also been developing techniques to predict model accuracy, so that users can quickly terminate experiments that are unlikely to yield meaningful results, saving energy. The Datacenter Challenge will share relevant data to enable researchers to explore other opportunities to conserve energy.
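
    One simple way to approximate that idea is to fit a saturating curve to the validation accuracy observed so far and stop the run when the extrapolated plateau falls short of a target, as in the sketch below. This is an illustration of the concept under those assumptions, not the LLSC team's actual predictor.

```python
# Illustrative early-termination check: fit a saturating curve to validation
# accuracy observed so far and stop if the extrapolated plateau misses a target.
# This sketches the concept only; it is not the LLSC method.
import numpy as np
from scipy.optimize import curve_fit

def saturating(epoch, plateau, drop, rate):
    """Accuracy model that rises toward a plateau: plateau - drop * exp(-rate * epoch)."""
    return plateau - drop * np.exp(-rate * epoch)

def should_stop(epochs, accuracies, target=0.90):
    """Return True if the extrapolated plateau accuracy is below the target."""
    params, _ = curve_fit(saturating, epochs, accuracies, p0=[0.9, 0.5, 0.1], maxfev=5000)
    return params[0] < target

if __name__ == "__main__":
    epochs = np.arange(1, 11)
    accuracies = 0.80 - 0.40 * np.exp(-0.3 * epochs)  # synthetic run plateauing near 0.80
    print("terminate early:", should_stop(epochs, accuracies, target=0.90))
```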

    The team expects that lessons learned from this research can be applied to the thousands of data centers operated by the U.S. Department of Defense. The U.S. Air Force is a sponsor of this work, which is being conducted under the USAF-MIT AI Accelerator.

    Other collaborators include researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). Professor Charles Leiserson’s Supertech Research Group is investigating performance-enhancing techniques for parallel computing, and research scientist Neil Thompson is designing studies on ways to nudge data center users toward climate-friendly behavior.

    Samsi presented this work at the inaugural AI for Datacenter Optimization (ADOPT’22) workshop last spring as part of the IEEE International Parallel and Distributed Processing Symposium. The workshop officially introduced their Datacenter Challenge to the HPC community.

    “We hope this research will allow us and others who run supercomputing centers to be more responsive to user needs while also reducing the energy consumption at the center level,” says Samsi.