More stories

  • New tool makes generative AI models more likely to create breakthrough materials

    The artificial intelligence models that turn text into images are also useful for generating new materials. Over the last few years, generative materials models from companies like Google, Microsoft, and Meta have drawn on their training data to help researchers design tens of millions of new materials.

    But when it comes to designing materials with exotic quantum properties like superconductivity or unique magnetic states, those models struggle. That’s too bad, because humans could use the help. For example, after a decade of research into a class of materials that could revolutionize quantum computing, called quantum spin liquids, only a dozen material candidates have been identified. The bottleneck means there are fewer materials to serve as the basis for technological breakthroughs.

    Now, MIT researchers have developed a technique that lets popular generative materials models create promising quantum materials by following specific design rules. The rules, or constraints, steer models to create materials with unique structures that give rise to quantum properties.

    “The models from these large companies generate materials optimized for stability,” says Mingda Li, MIT’s Class of 1947 Career Development Professor. “Our perspective is that’s not usually how materials science advances. We don’t need 10 million new materials to change the world. We just need one really good material.”

    The approach is described today in a paper published in Nature Materials. The researchers applied their technique to generate millions of candidate materials consisting of geometric lattice structures associated with quantum properties. From that pool, they synthesized two actual materials with exotic magnetic traits.

    “People in the quantum community really care about these geometric constraints, like the Kagome lattices that are two overlapping, upside-down triangles.
    We created materials with Kagome lattices because those materials can mimic the behavior of rare earth elements, so they are of high technical importance,” Li says.

    Li is the senior author of the paper. His MIT co-authors include PhD students Ryotaro Okabe, Mouyang Cheng, Abhijatmedhi Chotrattanapituk, and Denisse Cordova Carrizales; postdoc Manasi Mandal; undergraduate researchers Kiran Mak and Bowen Yu; visiting scholar Nguyen Tuan Hung; Xiang Fu ’22, PhD ’24; and professor of electrical engineering and computer science Tommi Jaakkola, who is an affiliate of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and Institute for Data, Systems, and Society. Additional co-authors include Yao Wang of Emory University, Weiwei Xie of Michigan State University, YQ Cheng of Oak Ridge National Laboratory, and Robert Cava of Princeton University.

    Steering models toward impact

    A material’s properties are determined by its structure, and quantum materials are no different. Certain atomic structures are more likely to give rise to exotic quantum properties than others. For instance, square lattices can serve as a platform for high-temperature superconductors, while other shapes known as Kagome and Lieb lattices can support the creation of materials that could be useful for quantum computing.

    To help a popular class of generative models known as diffusion models produce materials that conform to particular geometric patterns, the researchers created SCIGEN (short for Structural Constraint Integration in GENerative model). SCIGEN is a computer code that ensures diffusion models adhere to user-defined constraints at each iterative generation step. With SCIGEN, users can give any generative AI diffusion model geometric structural rules to follow as it generates materials.

    AI diffusion models work by sampling from their training dataset to generate structures that reflect the distribution of structures found in the dataset.
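    The constraint idea is easy to picture in code. Below is a minimal, hypothetical sketch, not the actual SCIGEN or DiffCSP implementation, of how a geometric constraint can be re-imposed at every step of an iterative diffusion-style sampler so that the constrained lattice sites stay fixed while the rest of the structure is generated around them. All names, shapes, and the toy "denoiser" are invented for illustration.

```python
import numpy as np

# Toy constraint-guided sampling in the spirit of SCIGEN: at every denoising
# step, coordinates belonging to a user-defined geometric motif are overwritten
# with the constrained values, so the free atoms relax around a fixed lattice.
# The update rule and all names here are hypothetical simplifications.

rng = np.random.default_rng(0)

# Fixed motif: four sites of a square lattice (a stand-in for, e.g., a Kagome motif).
motif = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
mask = np.array([True, True, True, True, False, False])  # which atoms are constrained

def denoise_step(x, t):
    """Hypothetical denoiser: nudge positions toward the cell center, plus noise."""
    return x + 0.1 * (0.5 - x) + 0.01 * t * rng.standard_normal(x.shape)

def sample_with_constraint(steps=50):
    x = rng.standard_normal((6, 2))   # 6 atoms, 2D fractional coordinates
    for t in range(steps, 0, -1):
        x = denoise_step(x, t / steps)
        x[mask] = motif               # SCIGEN-style clamp at each iteration
    return x

structure = sample_with_constraint()
# The constrained sites sit exactly on the motif; the free sites were generated.
assert np.allclose(structure[:4], motif)
```

The key point is only the clamp inside the loop: because it is applied at every iteration, the model's free coordinates are always denoised in the presence of the fixed motif, rather than the motif being pasted in afterward.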
    SCIGEN blocks generations that don’t align with the structural rules.

    To test SCIGEN, the researchers applied it to a popular AI materials generation model known as DiffCSP. They had the SCIGEN-equipped model generate materials with unique geometric patterns known as Archimedean lattices, which are collections of 2D lattice tilings of different polygons. Archimedean lattices can lead to a range of quantum phenomena and have been the focus of much research.

    “Archimedean lattices give rise to quantum spin liquids and so-called flat bands, which can mimic the properties of rare earths without rare earth elements, so they are extremely important,” says Cheng, a co-corresponding author of the work. “Other Archimedean lattice materials have large pores that could be used for carbon capture and other applications, so it’s a collection of special materials. In some cases, there are no known materials with that lattice, so I think it will be really interesting to find the first material that fits in that lattice.”

    The model generated over 10 million material candidates with Archimedean lattices. One million of those materials survived a screening for stability. Using supercomputers at Oak Ridge National Laboratory, the researchers then took a smaller sample of 26,000 materials and ran detailed simulations to understand how the materials’ underlying atoms behaved. The researchers found magnetism in 41 percent of those structures.

    From that subset, the researchers synthesized two previously undiscovered compounds, TiPdBi and TiPbSb, in Xie and Cava’s labs. Subsequent experiments showed the AI model’s predictions largely aligned with the actual materials’ properties.

    “We wanted to discover new materials that could have a huge potential impact by incorporating these structures that have been known to give rise to quantum properties,” says Okabe, the paper’s first author.
    “We already know that these materials with specific geometric patterns are interesting, so it’s natural to start with them.”

    Accelerating material breakthroughs

    Quantum spin liquids could unlock quantum computing by enabling stable, error-resistant qubits that serve as the basis of quantum operations. But no quantum spin liquid materials have been confirmed. Xie and Cava believe SCIGEN could accelerate the search for these materials.

    “There’s a big search for quantum computer materials and topological superconductors, and these are all related to the geometric patterns of materials,” Xie says.

    “But experimental progress has been very, very slow,” Cava adds. “Many of these quantum spin liquid materials are subject to constraints: They have to be in a triangular lattice or a Kagome lattice. If the materials satisfy those constraints, the quantum researchers get excited; it’s a necessary but not sufficient condition. So, by generating many, many materials like that, it immediately gives experimentalists hundreds or thousands more candidates to play with to accelerate quantum computer materials research.”

    “This work presents a new tool, leveraging machine learning, that can predict which materials will have specific elements in a desired geometric pattern,” says Drexel University Professor Steve May, who was not involved in the research. “This should speed up the development of previously unexplored materials for applications in next-generation electronic, magnetic, or optical technologies.”

    The researchers stress that experimentation is still critical to assess whether AI-generated materials can be synthesized and how their actual properties compare with model predictions. Future work on SCIGEN could incorporate additional design rules into generative models, including chemical and functional constraints.

    “People who want to change the world care about material properties more than the stability and structure of materials,” Okabe says.
    “With our approach, the ratio of stable materials goes down, but it opens the door to generate a whole bunch of promising materials.”

    The work was supported, in part, by the U.S. Department of Energy, the National Energy Research Scientific Computing Center, the National Science Foundation, and Oak Ridge National Laboratory.

  • MIT geologists discover where energy goes during an earthquake

    The ground-shaking that an earthquake generates is only a fraction of the total energy that a quake releases. A quake can also generate a flash of heat, along with a domino-like fracturing of underground rocks. But exactly how much energy goes into each of these three processes is exceedingly difficult, if not impossible, to measure in the field.

    Now MIT geologists have traced the energy that is released by “lab quakes” — miniature analogs of natural earthquakes that are carefully triggered in a controlled laboratory setting. For the first time, they have quantified the complete energy budget of such quakes, in terms of the fraction of energy that goes into heat, shaking, and fracturing.

    They found that only about 10 percent of a lab quake’s energy causes physical shaking. An even smaller fraction — less than 1 percent — goes into breaking up rock and creating new surfaces. The overwhelming portion of a quake’s energy — on average 80 percent — goes into heating up the immediate region around a quake’s epicenter. In fact, the researchers observed that a lab quake can produce a temperature spike hot enough to melt surrounding material and turn it briefly into liquid melt.

    The geologists also found that a quake’s energy budget depends on a region’s deformation history — the degree to which rocks have been shifted and disturbed by previous tectonic motions. The fractions of quake energy that produce heat, shaking, and rock fracturing can shift depending on what the region has experienced in the past.

    “The deformation history — essentially what the rock remembers — really influences how destructive an earthquake could be,” says Daniel Ortega-Arroyo, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “That history affects a lot of the material properties in the rock, and it dictates to some degree how it is going to slip.”

    The team’s lab quakes are a simplified analog of what occurs during a natural earthquake.
    Down the road, their results could help seismologists predict the likelihood of earthquakes in regions that are prone to seismic events. For instance, if scientists have an idea of how much shaking a quake generated in the past, they might be able to estimate the degree to which the quake’s energy also affected rocks deep underground by melting or breaking them apart. This in turn could reveal how much more or less vulnerable the region is to future quakes.

    “We could never reproduce the complexity of the Earth, so we have to isolate the physics of what is happening in these lab quakes,” says Matěj Peč, associate professor of geophysics at MIT. “We hope to understand these processes and try to extrapolate them to nature.”

    Peč (pronounced “Peck”) and Ortega-Arroyo reported their results on Aug. 28 in the journal AGU Advances. Their MIT co-authors are Hoagy O’Ghaffari and Camilla Cattania, along with Zheng Gong and Roger Fu at Harvard University and Markus Ohl and Oliver Plümper at Utrecht University in the Netherlands.

    Under the surface

    Earthquakes are driven by energy that is stored up in rocks over millions of years. As tectonic plates slowly grind against each other, stress accumulates through the crust. When rocks are pushed past their material strength, they can suddenly slip along a narrow zone, creating a geologic fault. As rocks slip on either side of the fault, they produce seismic waves that ripple outward and upward.

    We perceive an earthquake’s energy mainly in the form of ground shaking, which can be measured using seismometers and other ground-based instruments. But the other two major forms of a quake’s energy — heat and underground fracturing — are largely inaccessible with current technologies.

    “Unlike the weather, where we can see daily patterns and measure a number of pertinent variables, it’s very hard to do that very deep in the Earth,” Ortega-Arroyo says.
    “We don’t know what’s happening to the rocks themselves, and the timescales over which earthquakes repeat within a fault zone are on the century-to-millennia timescales, making any sort of actionable forecast challenging.”

    To get an idea of how an earthquake’s energy is partitioned, and how that energy budget might affect a region’s seismic risk, he and Peč went into the lab. Over the last seven years, Peč’s group at MIT has developed methods and instrumentation to simulate seismic events, at the microscale, in an effort to understand how earthquakes at the macroscale may play out.

    “We are focusing on what’s happening on a really small scale, where we can control many aspects of failure and try to understand it before we can do any scaling to nature,” Ortega-Arroyo says.

    Microshakes

    For their new study, the team generated miniature lab quakes that simulate a seismic slipping of rocks along a fault zone. They worked with small samples of granite, which are representative of rocks in the seismogenic layer — the geologic region in the continental crust where earthquakes typically originate. They ground up the granite into a fine powder and mixed the crushed granite with a much finer powder of magnetic particles, which they used as a sort of internal temperature gauge. (A particle’s magnetic field strength will change in response to a fluctuation in temperature.)

    The researchers placed samples of the powdered granite — each about 10 square millimeters in area and 1 millimeter thick — between two small pistons and wrapped the ensemble in a gold jacket. They then applied a strong magnetic field to orient the powder’s magnetic particles in the same initial direction and to the same field strength.
    They reasoned that any change in the particles’ orientation and field strength afterward should be a sign of how much heat that region experienced as a result of any seismic event.

    Once samples were prepared, the team placed them one at a time into a custom-built apparatus that the researchers tuned to apply steadily increasing pressure, similar to the pressures that rocks experience in the Earth’s seismogenic layer, about 10 to 20 kilometers below the surface. They used custom-made piezoelectric sensors, developed by co-author O’Ghaffari, which they attached to either end of a sample to measure any shaking that occurred as they increased the stress on the sample.

    They observed that at certain stresses, some samples slipped, producing a microscale seismic event similar to an earthquake. By analyzing the magnetic particles in the samples after the fact, they obtained an estimate of how much each sample was temporarily heated — a method developed in collaboration with Roger Fu’s lab at Harvard University. They also estimated the amount of shaking each sample experienced, using measurements from the piezoelectric sensor and numerical models. The researchers also examined each sample under the microscope, at different magnifications, to assess how the size of the granite grains changed — whether and how many grains broke into smaller pieces, for instance.

    From all these measurements, the team was able to estimate each lab quake’s energy budget. On average, they found that about 80 percent of a quake’s energy goes into heat, while 10 percent generates shaking, and less than 1 percent goes into rock fracturing, or creating new, smaller particle surfaces.

    “In some instances we saw that, close to the fault, the sample went from room temperature to 1,200 degrees Celsius in a matter of microseconds, and then immediately cooled down once the motion stopped,” Ortega-Arroyo says.
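    As a back-of-the-envelope illustration, the energy budget the team reports amounts to simple bookkeeping over the separately estimated components. The numbers below are invented to mirror the article's rough fractions (about 80 percent heat, 10 percent shaking, under 1 percent fracture, with the remainder unresolved); they are not the authors' measurements.

```python
# Hypothetical lab-quake energy components, in joules; values are made up to
# reproduce the fractions reported in the article, not real data.
measured = {
    "heat": 8.0e-3,        # from magnetic-particle thermometry
    "shaking": 1.0e-3,     # from piezoelectric sensors and numerical models
    "fracture": 9.0e-5,    # from grain-size change seen under the microscope
    "unresolved": 9.1e-4,  # remainder not captured by the three channels
}

total = sum(measured.values())
budget = {name: energy / total for name, energy in measured.items()}

for name, frac in budget.items():
    print(f"{name}: {frac:.1%}")
```

The hard part in the experiments is of course obtaining each component independently; once the heat, shaking, and fracture energies are estimated, the partition itself is just this ratio.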
    “And in one sample, we saw the fault move by about 100 microns, which implies slip velocities essentially about 10 meters per second. It moves very fast, though it doesn’t last very long.”

    The researchers suspect that similar processes play out in actual, kilometer-scale quakes.

    “Our experiments offer an integrated approach that provides one of the most complete views of the physics of earthquake-like ruptures in rocks to date,” Peč says. “This will provide clues on how to improve our current earthquake models and natural hazard mitigation.”

    This research was supported, in part, by the National Science Foundation.

  • New self-assembling material could be the key to recyclable EV batteries

    Today’s electric vehicle boom is tomorrow’s mountain of electronic waste. And while myriad efforts are underway to improve battery recycling, many EV batteries still end up in landfills.

    A research team from MIT wants to help change that with a new kind of self-assembling battery material that quickly breaks apart when submerged in a simple organic liquid. In a new paper published in Nature Chemistry, the researchers showed the material can work as the electrolyte in a functioning, solid-state battery cell and then revert back to its original molecular components in minutes.

    The approach offers an alternative to shredding the battery into a mixed, hard-to-recycle mass. Instead, because the electrolyte serves as the battery’s connecting layer, when the new material returns to its original molecular form, the entire battery disassembles to accelerate the recycling process.

    “So far in the battery industry, we’ve focused on high-performing materials and designs, and only later tried to figure out how to recycle batteries made with complex structures and hard-to-recycle materials,” says the paper’s first author Yukio Cho PhD ’23. “Our approach is to start with easily recyclable materials and figure out how to make them battery-compatible. Designing batteries for recyclability from the beginning is a new approach.”

    Joining Cho on the paper are PhD candidate Cole Fincher, Ty Christoff-Tempesta PhD ’22, Kyocera Professor of Ceramics Yet-Ming Chiang, Visiting Associate Professor Julia Ortony, Xiaobing Zuo, and Guillaume Lamour.

    Better batteries

    There’s a scene in one of the “Harry Potter” films where Professor Dumbledore cleans a dilapidated home with the flick of the wrist and a spell. Cho says that image stuck with him as a kid. (What better way to clean your room?)
    When he saw a talk by Ortony on engineering molecules so that they could assemble into complex structures and then revert back to their original form, he wondered if it could be used to make battery recycling work like magic.

    That would be a paradigm shift for the battery industry. Today, batteries require harsh chemicals, high heat, and complex processing to recycle. There are three main parts of a battery: the positively charged cathode, the negatively charged anode, and the electrolyte that shuttles lithium ions between them. The electrolytes in most lithium-ion batteries are highly flammable and degrade over time into toxic byproducts that require specialized handling.

    To simplify the recycling process, the researchers decided to make a more sustainable electrolyte. For that, they turned to a class of molecules that self-assemble in water, named aramid amphiphiles (AAs), whose chemical structures and stability mimic that of Kevlar. The researchers further designed the AAs to contain polyethylene glycol (PEG), which can conduct lithium ions, on one end of each molecule. When the molecules are exposed to water, they spontaneously form nanoribbons with ion-conducting PEG surfaces and bases that imitate the robustness of Kevlar through tight hydrogen bonding. The result is a mechanically stable nanoribbon structure that conducts ions across its surface.

    “The material is composed of two parts,” Cho explains. “The first part is this flexible chain that gives us a nest, or host, for lithium ions to jump around. The second part is this strong organic material component that is used in the Kevlar, which is a bulletproof material.
    Those make the whole structure stable.”

    When added to water, the molecules self-assemble into millions of nanoribbons that can be hot-pressed into a solid-state material.

    “Within five minutes of being added to water, the solution becomes gel-like, indicating there are so many nanofibers formed in the liquid that they start to entangle each other,” Cho says. “What’s exciting is we can make this material at scale because of the self-assembly behavior.”

    The team tested the material’s strength and toughness, finding it could endure the stresses associated with making and running the battery. They also constructed a solid-state battery cell that used lithium iron phosphate for the cathode and lithium titanium oxide as the anode, both common materials in today’s batteries. The nanoribbons moved lithium ions successfully between the electrodes, but a side effect known as polarization limited the movement of lithium ions into the battery’s electrodes during fast bouts of charging and discharging, hampering its performance compared to today’s gold-standard commercial batteries.

    “The lithium ions moved along the nanofiber all right, but getting the lithium ion from the nanofibers to the metal oxide seems to be the most sluggish point of the process,” Cho says.

    When they immersed the battery cell in organic solvents, the material immediately dissolved, with each part of the battery falling away for easier recycling. Cho compared the material’s reaction to cotton candy being submerged in water.

    “The electrolyte holds the two battery electrodes together and provides the lithium-ion pathways,” Cho says. “So, when you want to recycle the battery, the entire electrolyte layer can fall off naturally and you can recycle the electrodes separately.”

    Validating a new approach

    Cho says the material is a proof of concept that demonstrates the recycle-first approach.

    “We don’t want to say we solved all the problems with this material,” Cho says.
    “Our battery performance was not fantastic because we used only this material as the entire electrolyte for the paper, but what we’re picturing is using this material as one layer in the battery electrolyte. It doesn’t have to be the entire electrolyte to kick off the recycling process.”

    Cho also sees a lot of room for optimizing the material’s performance with further experiments.

    Now, the researchers are exploring ways to integrate these kinds of materials into existing battery designs as well as implementing the ideas into new battery chemistries.

    “It’s very challenging to convince existing vendors to do something very differently,” Cho says. “But with new battery materials that may come out in five or 10 years, it could be easier to integrate this into new designs in the beginning.”

    Cho also believes the approach could help reshore lithium supplies by reusing materials from batteries that are already in the U.S.

    “People are starting to realize how important this is,” Cho says. “If we can start to recycle lithium-ion batteries from battery waste at scale, it’ll have the same effect as opening lithium mines in the U.S. Also, each battery requires a certain amount of lithium, so extrapolating out the growth of electric vehicles, we need to reuse this material to avoid massive lithium price spikes.”

    The work was supported, in part, by the National Science Foundation and the U.S. Department of Energy.

  • Theory-guided strategy expands the scope of measurable quantum interactions

    A new theory-guided framework could help scientists probe the properties of new semiconductors for next-generation microelectronic devices, or discover materials that boost the performance of quantum computers.

    Research to develop new or better materials typically involves investigating properties that can be reliably measured with existing lab equipment, but this represents just a fraction of the properties that scientists could potentially probe in principle. Some properties remain effectively “invisible” because they are too difficult to capture directly with existing methods.

    Take the electron-phonon interaction: this property plays a critical role in a material’s electrical, thermal, optical, and superconducting properties, but directly capturing it using existing techniques is notoriously challenging.

    Now, MIT researchers have proposed a theoretically justified approach that could turn this challenge into an opportunity. Their method reinterprets an often-overlooked interference effect in neutron scattering as a potential direct probe of electron-phonon coupling strength. The procedure creates two interaction effects in the material.
    The researchers show that, by deliberately designing their experiment to leverage the interference between the two interactions, they can capture the strength of a material’s electron-phonon interaction.

    The researchers’ theory-informed methodology could be used to shape the design of future experiments, opening the door to measuring new quantities that were previously out of reach.

    “Rather than discovering new spectroscopy techniques by pure accident, we can use theory to justify and inform the design of our experiments and our physical equipment,” says Mingda Li, the Class of 1947 Career Development Professor and an associate professor of nuclear science and engineering, and senior author of a paper on this experimental method.

    Li is joined on the paper by co-lead authors Chuliang Fu, an MIT postdoc; Phum Siriviboon and Artittaya Boonkird, both MIT graduate students; as well as others at MIT, the National Institute of Standards and Technology, the University of California at Riverside, Michigan State University, and Oak Ridge National Laboratory. The research appears this week in Materials Today Physics.

    Investigating interference

    Neutron scattering is a powerful measurement technique that involves aiming a beam of neutrons at a material and studying how the neutrons are scattered after they strike it. The method is ideal for measuring a material’s atomic structure and magnetic properties.

    When neutrons collide with the material sample, they interact with it through two different mechanisms, creating a nuclear interaction and a magnetic interaction. These interactions can interfere with each other.

    “The scientific community has known about this interference effect for a long time, but researchers tend to view it as a complication that can obscure measurement signals.
    So it hasn’t received much focused attention,” Fu says.

    The team and their collaborators took a conceptual “leap of faith” and decided to explore this oft-overlooked interference effect more deeply. They flipped the traditional materials research approach on its head by starting with a multifaceted theoretical analysis. They explored what happens inside a material when the nuclear interaction and magnetic interaction interfere with each other.

    Their analysis revealed that this interference pattern is directly proportional to the strength of the material’s electron-phonon interaction.

    “This makes the interference effect a probe we can use to detect this interaction,” explains Siriviboon.

    Electron-phonon interactions play a role in a wide range of material properties. They affect how heat flows through a material, impact a material’s ability to absorb and emit light, and can even lead to superconductivity. But the complexity of these interactions makes them hard to directly measure using existing experimental techniques.
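    Schematically, interference between two coherent scattering channels shows up as a cross term in the measured intensity. In generic textbook notation (not the paper's full derivation), with $f_N$ and $f_M$ the nuclear and magnetic scattering amplitudes:

```latex
I \;\propto\; \left| f_N + f_M \right|^2
  \;=\; |f_N|^2 \;+\; |f_M|^2 \;+\;
  \underbrace{2\,\mathrm{Re}\!\left( f_N^{*} f_M \right)}_{\text{interference term}}
```

    The first two terms are the familiar nuclear and magnetic signals; the article's claim is that the cross term carries the electron-phonon information, which is why isolating it allows a direct measurement rather than an indirect inference.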
    Instead, researchers often rely on less precise, indirect methods to capture electron-phonon interactions. However, leveraging this interference effect enables direct measurement of the electron-phonon interaction, a major advantage over other approaches.

    “Being able to directly measure the electron-phonon interaction opens the door to many new possibilities,” says Boonkird.

    Rethinking materials research

    Based on their theoretical insights, the researchers designed an experimental setup to demonstrate their approach. Since the available equipment wasn’t powerful enough for this type of neutron scattering experiment, they were only able to capture a weak electron-phonon interaction signal — but the results were clear enough to support their theory.

    “These results justify the need for a new facility where the equipment might be 100 to 1,000 times more powerful, enabling scientists to clearly resolve the signal and measure the interaction,” adds Landry.

    With improved neutron scattering facilities, like those proposed for the upcoming Second Target Station at Oak Ridge National Laboratory, this experimental method could be an effective technique for measuring many crucial material properties. For instance, by helping scientists identify and harness better semiconductors, this approach could enable more energy-efficient appliances, faster wireless communication devices, and more reliable medical equipment like pacemakers and MRI scanners.
    Ultimately, the team sees this work as a broader message about the need to rethink the materials research process.

    “Using theoretical insights to design experimental setups in advance can help us redefine the properties we can measure,” Fu says.

    To that end, the team and their collaborators are currently exploring other types of interactions they could leverage to investigate additional material properties.

    “This is a very interesting paper,” says Jon Taylor, director of the neutron scattering division at Oak Ridge National Laboratory, who was not involved with this research. “It would be interesting to have a neutron scattering method that is directly sensitive to charge lattice interactions or more generally electronic effects that were not just magnetic moments. It seems that such an effect is expectedly rather small, so facilities like STS could really help develop that fundamental understanding of the interaction and also leverage such effects routinely for research.”

    This work is funded, in part, by the U.S. Department of Energy and the National Science Foundation.

  • MIT chemists boost the efficiency of a key enzyme in photosynthesis

    During photosynthesis, an enzyme called rubisco catalyzes a key reaction — the incorporation of carbon dioxide into organic compounds to create sugars. However, rubisco, which is believed to be the most abundant enzyme on Earth, is very inefficient compared to the other enzymes involved in photosynthesis.

    MIT chemists have now shown that they can greatly enhance a version of rubisco found in bacteria from a low-oxygen environment. Using a process known as directed evolution, they identified mutations that could boost rubisco’s catalytic efficiency by up to 25 percent.

    The researchers now plan to apply their technique to forms of rubisco that could be used in plants to help boost their rates of photosynthesis, which could potentially improve crop yields.

    “This is, I think, a compelling demonstration of successful improvement of a rubisco’s enzymatic properties, holding out a lot of hope for engineering other forms of rubisco,” says Matthew Shoulders, the Class of 1942 Professor of Chemistry at MIT.

    Shoulders and Robert Wilson, a research scientist in the Department of Chemistry, are the senior authors of the new study, which appears this week in the Proceedings of the National Academy of Sciences. MIT graduate student Julie McDonald is the paper’s lead author.

    Evolution of efficiency

    When plants or photosynthetic bacteria absorb energy from the sun, they first convert it into energy-storing molecules such as ATP. In the next phase of photosynthesis, cells use that energy to transform a molecule known as ribulose bisphosphate into glucose, which requires several additional reactions. Rubisco catalyzes the first of those reactions, known as carboxylation. During that reaction, carbon from CO2 is added to ribulose bisphosphate.

    Compared to the other enzymes involved in photosynthesis, rubisco is very slow, catalyzing only one to 10 reactions per second.
    Additionally, rubisco can also interact with oxygen, leading to a competing reaction that incorporates oxygen instead of carbon — a process that wastes some of the energy absorbed from sunlight.

    “For protein engineers, that’s a really attractive set of problems because those traits seem like things that you could hopefully make better by making changes to the enzyme’s amino acid sequence,” McDonald says.

    Previous research has led to improvement in rubisco’s stability and solubility, which resulted in small gains in enzyme efficiency. Most of those studies used directed evolution — a technique in which a naturally occurring protein is randomly mutated and then screened for the emergence of new, desirable features.

    This process is usually done using error-prone PCR, a technique that first generates mutations in vitro (outside of the cell), typically introducing only one or two mutations in the target gene. In past studies on rubisco, this library of mutations was then introduced into bacteria that grow at a rate relative to rubisco activity. Limitations in error-prone PCR and in the efficiency of introducing new genes restrict the total number of mutations that can be generated and screened using this approach. Manual mutagenesis and selection steps also add more time to the process over multiple rounds of evolution.

    The MIT team instead used a newer mutagenesis technique that the Shoulders Lab previously developed, called MutaT7. This technique allows the researchers to perform both mutagenesis and screening in living cells, which dramatically speeds up the process.
Their technique also enables them to mutate the target gene at a higher rate.

“Our continuous directed evolution technique allows you to look at a lot more mutations in the enzyme than has been done in the past,” McDonald says.

Better rubisco

For this study, the researchers began with a version of rubisco, isolated from a family of semi-anaerobic bacteria known as Gallionellaceae, that is one of the fastest forms of rubisco found in nature. During the directed evolution experiments, which were conducted in E. coli, the researchers kept the microbes in an environment with atmospheric levels of oxygen, creating evolutionary pressure to adapt to oxygen.

After six rounds of directed evolution, the researchers identified three different mutations that improved the rubisco’s resistance to oxygen. Each of these mutations is located near the enzyme’s active site (where it performs carboxylation or oxygenation). The researchers believe that these mutations improve the enzyme’s ability to preferentially interact with carbon dioxide over oxygen, which leads to an overall increase in carboxylation efficiency.

“The underlying question here is: Can you alter and improve the kinetic properties of rubisco to operate better in environments where you want it to operate better?” Shoulders says. “What changed through the directed evolution process was that rubisco began to like to react with oxygen less. That allows this rubisco to function well in an oxygen-rich environment, where normally it would constantly get distracted and react with oxygen, which you don’t want it to do.”

In ongoing work, the researchers are applying this approach to other forms of rubisco, including rubisco from plants.
Plants are believed to lose about 30 percent of the energy from the sunlight they absorb through a process called photorespiration, which occurs when rubisco acts on oxygen instead of carbon dioxide.

“This really opens the door to a lot of exciting new research, and it’s a step beyond the types of engineering that have dominated rubisco engineering in the past,” Wilson says. “There are definite benefits to agricultural productivity that could be leveraged through a better rubisco.”

The research was funded, in part, by the National Science Foundation, the National Institutes of Health, an Abdul Latif Jameel Water and Food Systems Lab Grand Challenge grant, and a Martin Family Society Fellowship for Sustainability.

  • in

    New fuel cell could enable electric aviation

Batteries are nearing their limits in terms of how much energy they can store for a given weight. That’s a serious obstacle for energy innovation and the search for new ways to power airplanes, trains, and ships. Now, researchers at MIT and elsewhere have come up with a solution that could help electrify these transportation systems.

Instead of a battery, the new concept is a kind of fuel cell — which is similar to a battery but can be quickly refueled rather than recharged. In this case, the fuel is liquid sodium metal, an inexpensive and widely available commodity. The other side of the cell is just ordinary air, which serves as a source of oxygen atoms. In between, a layer of solid ceramic material serves as the electrolyte, allowing sodium ions to pass freely through, and a porous air-facing electrode helps the sodium to chemically react with oxygen and produce electricity.

In a series of experiments with a prototype device, the researchers demonstrated that this cell could carry more than three times as much energy per unit of weight as the lithium-ion batteries used in virtually all electric vehicles today. Their findings are being published today in the journal Joule, in a paper by MIT doctoral students Karen Sugano, Sunil Mair, and Saahir Ganti-Agrawal; professor of materials science and engineering Yet-Ming Chiang; and five others.

“We expect people to think that this is a totally crazy idea,” says Chiang, who is the Kyocera Professor of Ceramics. “If they didn’t, I’d be a bit disappointed because if people don’t think something is totally crazy at first, it probably isn’t going to be that revolutionary.”

And this technology does appear to have the potential to be quite revolutionary, he suggests.
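A rough Faraday’s-law estimate shows why sodium metal is attractive here. The 2.3-volt average cell voltage below is an assumed, representative value for sodium-oxygen chemistry (it is not from the article), and the estimate counts only the sodium mass, so it is a theoretical upper bound rather than a system-level figure:

```python
# Theoretical specific energy of the sodium side alone, via Faraday's law:
# E [Wh/kg] = n * F * V / (3.6 * M). The 2.3 V average cell voltage is an
# assumed, representative value for sodium-oxygen chemistry; real systems
# (electrolyte, electrodes, packaging) deliver far less.
F = 96485.0   # Faraday constant, C/mol
n = 1         # electrons transferred per sodium atom
V = 2.3       # assumed average cell voltage, volts
M_NA = 22.99  # molar mass of sodium, g/mol

wh_per_kg = n * F * V / (3.6 * M_NA)  # J/g divided by 3.6 gives Wh/kg
print(round(wh_per_kg))  # → 2681
```

At roughly 2,700 watt-hours per kilogram for the metal alone, there is ample headroom above today’s lithium-ion packs even after the large losses that come with building a full system.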
In particular, for aviation, where weight is especially crucial, such an improvement in energy density could be the breakthrough that finally makes electrically powered flight practical at significant scale.

“The threshold that you really need for realistic electric aviation is about 1,000 watt-hours per kilogram,” Chiang says. Today’s electric vehicle lithium-ion batteries top out at about 300 watt-hours per kilogram — nowhere near what’s needed. Even 1,000 watt-hours per kilogram would not be enough for transcontinental or trans-Atlantic flights, which remain beyond reach for any known battery chemistry. But Chiang says that getting to 1,000 watt-hours per kilogram would be an enabling technology for regional electric aviation, which accounts for about 80 percent of domestic flights and 30 percent of the emissions from aviation.

The technology could be an enabler for other sectors as well, including marine and rail transportation. “They all require very high energy density, and they all require low cost,” he says. “And that’s what attracted us to sodium metal.”

A great deal of research has gone into developing lithium-air or sodium-air batteries over the last three decades, but it has been hard to make them fully rechargeable. “People have been aware of the energy density you could get with metal-air batteries for a very long time, and it’s been hugely attractive, but it’s just never been realized in practice,” Chiang says.

By using the same basic electrochemical concept, only making it a fuel cell instead of a battery, the researchers were able to get the advantages of the high energy density in a practical form. Unlike a battery, whose materials are assembled once and sealed in a container, with a fuel cell the energy-carrying materials go in and out.

The team produced two different versions of a lab-scale prototype of the system.
In one, called an H cell, two vertical glass tubes are connected by a tube across the middle, which contains a solid ceramic electrolyte material and a porous air electrode. Liquid sodium metal fills the tube on one side, and air flows through the other, providing the oxygen for the electrochemical reaction at the center, which ends up gradually consuming the sodium fuel. The other prototype uses a horizontal design, with a tray of the electrolyte material holding the liquid sodium fuel. The porous air electrode, which facilitates the reaction, is affixed to the bottom of the tray. Tests using an air stream with a carefully controlled humidity level produced more than 1,500 watt-hours per kilogram at the level of an individual “stack,” which would translate to over 1,000 watt-hours per kilogram at the full system level, Chiang says.

The researchers envision that to use this system in an aircraft, fuel packs containing stacks of cells, like racks of food trays in a cafeteria, would be inserted into the fuel-cell system; the sodium metal inside these packs gets chemically transformed as it provides the power. A stream of its chemical byproduct is given off, and in the case of aircraft this would be emitted out the back, not unlike the exhaust from a jet engine.

But there’s a very big difference: There would be no carbon dioxide emissions. Instead the emissions, consisting of sodium oxide, would actually soak up carbon dioxide from the atmosphere. This compound would quickly combine with moisture in the air to make sodium hydroxide — a material commonly used as a drain cleaner — which readily combines with carbon dioxide to form a solid material, sodium carbonate, which in turn forms sodium bicarbonate, otherwise known as baking soda.

“There’s this natural cascade of reactions that happens when you start with sodium metal,” Chiang says. “It’s all spontaneous. We don’t have to do anything to make it happen, we just have to fly the airplane.”

As an added benefit, if the final product, the sodium bicarbonate, ends up in the ocean, it could help to de-acidify the water, countering another of the damaging effects of greenhouse gases.

Using sodium hydroxide to capture carbon dioxide has been proposed as a way of mitigating carbon emissions, but on its own, it’s not an economic solution because the compound is too expensive. “But here, it’s a byproduct,” Chiang explains, so it’s essentially free, producing environmental benefits at no cost.

Importantly, the new fuel cell is inherently safer than many other batteries, he says. Sodium metal is extremely reactive and must be well-protected. As with lithium batteries, sodium can spontaneously ignite if exposed to moisture. “Whenever you have a very high energy density battery, safety is always a concern, because if there’s a rupture of the membrane that separates the two reactants, you can have a runaway reaction,” Chiang says. But in this fuel cell, one side is just air, “which is dilute and limited. So you don’t have two concentrated reactants right next to each other. If you’re pushing for really, really high energy density, you’d rather have a fuel cell than a battery for safety reasons.”

While the device so far exists only as a small, single-cell prototype, Chiang says the system should be quite straightforward to scale up to practical sizes for commercialization. Members of the research team have already formed a company, Propel Aero, to develop the technology. The company is currently housed in MIT’s startup incubator, The Engine.

Producing enough sodium metal to enable widespread, full-scale global implementation of this technology should be practical, since the material has been produced at large scale before. When leaded gasoline was the norm, before it was phased out, sodium metal was used to make the tetraethyl lead used as an additive, and it was being produced in the U.S. at a capacity of 200,000 tons a year. “It reminds us that sodium metal was once produced at large scale and safely handled and distributed around the U.S.,” Chiang says.

What’s more, sodium primarily originates from sodium chloride, or salt, so it is abundant, widely distributed around the world, and easily extracted, unlike lithium and other materials used in today’s EV batteries.

The system they envisage would use a refillable cartridge, which would be filled with liquid sodium metal and sealed. When it’s depleted, it would be returned to a refilling station and loaded with fresh sodium. Sodium melts at 98 degrees Celsius, just below the boiling point of water, so it is easy to heat to the melting point to refuel the cartridges.

Initially, the plan is to produce a brick-sized fuel cell that can deliver about 1,000 watt-hours of energy, enough to power a large drone, in order to prove the concept in a practical form that could be used for agriculture, for example. The team hopes to have such a demonstration ready within the next year.

Sugano, who conducted much of the experimental work as part of her doctoral thesis and will now work at the startup, says that a key insight was the importance of moisture in the process. As she tested the device with pure oxygen, and then with air, she found that the amount of humidity in the air was crucial to making the electrochemical reaction efficient. The humid air resulted in the sodium producing its discharge products in liquid rather than solid form, making it much easier for these to be removed by the flow of air through the system. “The key was that we can form this liquid discharge product and remove it easily, as opposed to the solid discharge that would form in dry conditions,” she says.

Ganti-Agrawal notes that the team drew from a variety of different engineering subfields. For example, there has been much research on high-temperature sodium, but none with a system with controlled humidity.
“We’re pulling from fuel cell research in terms of designing our electrode, we’re pulling from older high-temperature battery research as well as some nascent sodium-air battery research, and kind of mushing it together,” which led to “the big bump in performance” the team has achieved, he says.

The research team also included Alden Friesen, an MIT summer intern who attends Desert Mountain High School in Scottsdale, Arizona; Kailash Raman and William Woodford of Form Energy in Somerville, Massachusetts; Shashank Sripad of And Battery Aero in California; and Venkatasubramanian Viswanathan of the University of Michigan.

The work was supported by ARPA-E, Breakthrough Energy Ventures, and the National Science Foundation, and used facilities at MIT.nano.
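The stack-level and system-level figures reported for this fuel cell imply a packaging overhead that simple arithmetic makes explicit. The 1,500 and 1,000 watt-hour-per-kilogram numbers are from the article; the overhead fraction is derived from them:

```python
# Relating stack-level to system-level specific energy. The 1,500 and
# 1,000 Wh/kg figures are from the article; the implied overhead fraction
# is derived from them, assuming the stack's energy content is unchanged.
stack_wh_per_kg = 1500.0   # measured for an individual stack
system_wh_per_kg = 1000.0  # projected for the full system

# Fraction of total system mass that is not stack (enclosure, plumbing,
# controls), implied by the drop in specific energy:
overhead_mass_fraction = 1.0 - system_wh_per_kg / stack_wh_per_kg
print(round(overhead_mass_fraction, 3))  # → 0.333

# At system-level density, the planned ~1,000 Wh brick-sized demonstrator
# would weigh on the order of:
pack_mass_kg = 1000.0 / system_wh_per_kg
print(pack_mass_kg)  # → 1.0
```

In other words, the projection allows roughly a third of the system’s mass for everything that is not an energy-producing stack.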

  • in

    How J-WAFS Solutions grants bring research to market

For the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS), 2025 marks a decade of translating groundbreaking research into tangible solutions for global challenges. Few examples illustrate that mission better than NONA Technologies. With support from a J-WAFS Solutions grant, MIT electrical engineering and biological engineering Professor Jongyoon Han and his team developed a portable desalination device that transforms seawater into clean drinking water without filters or high-pressure pumps.

The device stands apart from conventional desalination technologies, like reverse osmosis, which are energy-intensive, prone to fouling, and typically deployed at large, centralized plants. In contrast, the device developed in Han’s lab employs ion concentration polarization technology to remove salts and particles from seawater, producing potable water that exceeds World Health Organization standards. It is compact, solar-powered, and operable at the push of a button — making it an ideal solution for off-grid and disaster-stricken areas.

This research laid the foundation for spinning out NONA Technologies, along with co-founders Junghyo Yoon PhD ’21 from Han’s lab and Bruce Crawford MBA ’22, to commercialize the technology and address pressing water-scarcity issues worldwide. “This is really the culmination of a 10-year journey that I and my group have been on,” said Han in an earlier MIT News article. “We worked for years on the physics behind individual desalination processes, but pushing all those advances into a box, building a system, and demonstrating it in the ocean … that was a really meaningful and rewarding experience for me.”

Moving breakthrough research out of the lab and into the world is a well-known challenge.
While traditional “seed” grants typically support early-stage research at Technology Readiness Levels (TRL) 1-2, few funding sources exist to help academic teams navigate to the next phase of technology development. The J-WAFS Solutions Program is strategically designed to address this critical gap by supporting technologies in the high-risk, early-commercialization phase that is often neglected by traditional research, corporate, and venture funding. By supporting technologies at TRLs 3-5, the program increases the likelihood that promising innovations will survive beyond the university setting, advancing sufficiently to attract follow-on funding.

Equally important, the program gives academic researchers the time, resources, and flexibility to de-risk their technology, explore customer need and potential real-world applications, and determine whether and how they want to pursue commercialization. For faculty-led teams like Han’s, the J-WAFS Solutions Program provided the critical financial runway and entrepreneurial guidance needed to refine the technology, test assumptions about market fit, and lay the foundation for a startup team. While still in the MIT innovation ecosystem, NONA secured over $200,000 in non-dilutive funding through competitions and accelerators, including the prestigious MIT delta v Educational Accelerator. These early wins laid the groundwork for further investment and technical advancement.

Since spinning out of MIT, NONA has made major strides in both technology development and business viability. What started as a device capable of producing just over half a liter of clean drinking water per hour has evolved into a system that now delivers 10 times that capacity, at 5 liters per hour. The company successfully raised a $3.5 million seed round to advance its portable desalination device, and entered into a collaboration with the U.S. Army Natick Soldier Systems Center, where it co-developed early prototypes and began generating revenue while validating the technology. Most recently, NONA was awarded two SBIR Phase I grants totaling $575,000, one from the National Science Foundation and another from the National Institute of Environmental Health Sciences.

Now operating out of Greentown Labs in Somerville, Massachusetts, NONA has grown to a dedicated team of five and is preparing to launch its nona5 product later this year, with a wait list of over 1,000 customers. It is also kicking off its first industrial pilot, marking a key step toward commercial scale-up.

“Starting a business as a postdoc was challenging, especially with limited funding and industry knowledge,” says Yoon, who currently serves as CTO of NONA. “J-WAFS gave me the financial freedom to pursue my venture, and the mentorship pushed me to hit key milestones. Thanks to J-WAFS, I successfully transitioned from an academic researcher to an entrepreneur in the water industry.”

NONA is one of several J-WAFS-funded technologies that have moved from the lab to market, part of a growing portfolio of water and food solutions advancing through MIT’s innovation pipeline. As J-WAFS marks a decade of catalyzing innovation in water and food, NONA exemplifies what is possible when mission-driven research is paired with targeted early-stage support and mentorship.

To learn more or get involved in supporting startups through the J-WAFS Solutions Program, please contact jwafs@mit.edu.
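For a sense of scale, the device’s throughput translates into service capacity with simple arithmetic. The 5-liters-per-hour rate is from the article; round-the-clock operation and the roughly 3 liters of drinking water per person per day are assumed, illustrative figures:

```python
# Rough service-capacity estimate for a 5 L/h desalination device. The
# throughput is from the article; continuous 24-hour operation and the
# ~3 L/person/day drinking-water figure are assumed simplifications.
liters_per_hour = 5.0
hours_per_day = 24.0
liters_per_person_per_day = 3.0  # assumed drinking-water need

daily_output_liters = liters_per_hour * hours_per_day
people_served = daily_output_liters / liters_per_person_per_day
print(daily_output_liters, int(people_served))  # → 120.0 40
```

Under those assumptions, one unit could cover the drinking-water needs of a few dozen people, which is consistent with the device’s positioning for off-grid and disaster-relief use.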

  • in

    Surprise discovery could lead to improved catalysts for industrial reactions

The process of catalysis — in which a material speeds up a chemical reaction — is crucial to the production of many of the chemicals used in our everyday lives. But even though these catalytic processes are widespread, researchers often lack a clear understanding of exactly how they work.

A new analysis by researchers at MIT has shown that an important industrial synthesis process, the production of vinyl acetate, requires a catalyst to take two different forms, which cycle back and forth from one to the other as the chemical process unfolds.

Previously, it had been thought that only one of the two forms was needed. The new findings are published today in the journal Science, in a paper by MIT graduate students Deiaa Harraz and Kunal Lodaya, Bryan Tang PhD ’23, and MIT professor of chemistry and chemical engineering Yogesh Surendranath.

There are two broad classes of catalysts: homogeneous catalysts, which consist of dissolved molecules, and heterogeneous catalysts, which are solid materials whose surface provides the site for the chemical reaction. “For the longest time,” Surendranath says, “there’s been a general view that you either have catalysis happening on these surfaces, or you have them happening on these soluble molecules.” But the new research shows that in the case of vinyl acetate — an important material that goes into many polymer products such as the rubber in the soles of your shoes — there is an interplay between both classes of catalysis.

“What we discovered,” Surendranath explains, “is that you actually have these solid metal materials converting into molecules, and then converting back into materials, in a cyclic dance.”

He adds: “This work calls into question this paradigm where there’s either one flavor of catalysis or another. Really, there could be an interplay between both of them in certain cases, and that could be really advantageous for having a process that’s selective and efficient.”

The synthesis of vinyl acetate has been a large-scale industrial reaction since the 1960s, and it has been well-researched and refined over the years to improve efficiency. This has happened largely through a trial-and-error approach, without a precise understanding of the underlying mechanisms, the researchers say.

While chemists are often more familiar with homogeneous catalysis mechanisms, and chemical engineers are often more familiar with surface catalysis mechanisms, fewer researchers study both. This is perhaps part of the reason that the full complexity of this reaction was not previously captured. But Harraz says he and his colleagues are working at the interface between disciplines. “We’ve been able to appreciate both sides of this reaction and find that both types of catalysis are critical,” he says.

The reaction that produces vinyl acetate requires something to activate the oxygen molecules that are one of the constituents of the reaction, and something else to activate the other ingredients, acetic acid and ethylene. The researchers found that the form of the catalyst that worked best for one part of the process was not the best for the other. It turns out that the molecular form of the catalyst does the key chemistry with the ethylene and the acetic acid, while it’s the surface that ends up doing the activation of the oxygen.

They found that the underlying process involved in interconverting the two forms of the catalyst is actually corrosion, similar to the process of rusting. “It turns out that in rusting, you actually go through a soluble molecular species somewhere in the sequence,” Surendranath says.

The team borrowed techniques traditionally used in corrosion research to study the process.
They used electrochemical tools to study the reaction, even though the overall reaction does not require a supply of electricity. By making potential measurements, the researchers determined that the corrosion of the palladium catalyst material to soluble palladium ions is driven by an electrochemical reaction with the oxygen, converting it to water. Corrosion is “one of the oldest topics in electrochemistry,” says Lodaya, “but applying the science of corrosion to understand catalysis is much newer, and was essential to our findings.”

By correlating measurements of catalyst corrosion with other measurements of the chemical reaction taking place, the researchers proposed that it was the corrosion rate that was limiting the overall reaction. “That’s the choke point that’s controlling the rate of the overall process,” Surendranath says.

The interplay between the two types of catalysis works efficiently and selectively “because it actually uses the synergy of a material surface doing what it’s good at and a molecule doing what it’s good at,” Surendranath says. The finding suggests that, when designing new catalysts, rather than focusing on either solid materials or soluble molecules alone, researchers should think about how the interplay of both may open up new approaches.

“Now, with an improved understanding of what makes this catalyst so effective, you can try to design specific materials or specific interfaces that promote the desired chemistry,” Harraz says.
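The “choke point” idea generalizes: in a cyclic process whose steps happen in sequence, the steady-state turnover rate is 1/sum(1/r_i) over the step rates, so the slowest step sets the pace of the whole cycle. A toy sketch, with all rate values invented for illustration (none come from the paper):

```python
# Toy model of a cyclic process with sequential steps: the steady-state
# turnover rate is 1 / sum(1/r_i), so the slowest step sets the pace.
# All rate values are invented for illustration; none come from the paper.
def cycle_rate(step_rates):
    return 1.0 / sum(1.0 / r for r in step_rates)

base = cycle_rate([
    1.0,   # corrosion: solid catalyst -> soluble ions (slow; the choke point)
    50.0,  # molecular chemistry on the dissolved species (fast)
    20.0,  # redeposition: ions -> solid catalyst (fast)
])

# Doubling the slow corrosion step nearly doubles the overall rate...
faster_corrosion = cycle_rate([2.0, 50.0, 20.0])
# ...while doubling an already-fast step barely changes anything.
faster_chemistry = cycle_rate([1.0, 100.0, 20.0])

print(round(faster_corrosion / base, 2))  # → 1.88
print(round(faster_chemistry / base, 2))  # → 1.01
```

This is why identifying corrosion as the rate-limiting step matters for design: speeding up any other step in the cycle would yield almost no improvement.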
Since this process has been worked on for so long, these findings may not necessarily lead to improvements in this specific process of making vinyl acetate, but they do provide a better understanding of why the materials work as they do, and could lead to improvements in other catalytic processes.

Understanding that “catalysts can transit between molecule and material and back, and the role that electrochemistry plays in those transformations, is a concept that we are really excited to expand on,” Lodaya says.

Harraz adds: “With this new understanding that both types of catalysis could play a role, what other catalytic processes are out there that actually involve both? Maybe those have a lot of room for improvement that could benefit from this understanding.”

This work is “illuminating, something that will be worth teaching at the undergraduate level,” says Christophe Coperet, a professor of inorganic chemistry at ETH Zurich, who was not associated with the research. “The work highlights new ways of thinking. … [It] is notable in the sense that it not only reconciles homogeneous and heterogeneous catalysis, but it describes these complex processes as half reactions, where electron transfers can cycle between distinct entities.”

The research was supported, in part, by the National Science Foundation as a Phase I Center for Chemical Innovation, the Center for Interfacial Ionics; and the Gordon and Betty Moore Foundation.