More stories

  •

    MIT announces 2024 Bose Grants

    MIT Provost Cynthia Barnhart announced four Professor Amar G. Bose Research Grants to support bold research projects across diverse areas of study: generating clean hydrogen from deep in the Earth, building an environmentally friendly house of basalt, designing maternity clothing that monitors fetal health, and recruiting sharks as ocean oxygen monitors.

    This year’s recipients are Iwnetim Abate, assistant professor of materials science and engineering; Andrew Babbin, the Cecil and Ida Green Associate Professor in Earth, Atmospheric and Planetary Sciences; Yoel Fink, professor of materials science and engineering and of electrical engineering and computer science; and Skylar Tibbits, associate professor of design research in the Department of Architecture.

    The program was named for the visionary founder of the Bose Corporation and MIT alumnus Amar G. Bose ’51, SM ’52, ScD ’56. After gaining admission to MIT, Bose became a top math student and a Fulbright Scholarship recipient. He spent 46 years as a professor at MIT, led innovations in sound design, and founded the Bose Corp. in 1964. MIT launched the Bose grant program 11 years ago to provide funding over a three-year period to MIT faculty who propose original, cross-disciplinary, and often risky research projects that would likely not be funded by conventional sources.

    “The promise of the Bose Fellowship is to help bold, daring ideas become realities, an approach that honors Amar Bose’s legacy,” says Barnhart. “Thanks to support from this program, these talented faculty members have the freedom to explore their bold and innovative ideas.”

    Deep and clean hydrogen futures

    A green energy future will depend on harnessing hydrogen as a clean energy source, sequestering polluting carbon dioxide, and mining the minerals essential to building clean energy technologies such as advanced batteries. Iwnetim Abate thinks he has a solution for all three challenges: an innovative hydrogen reactor.

    He plans to build a reactor that will create natural hydrogen from ultramafic mineral rocks in the crust. “The Earth is literally a giant hydrogen factory waiting to be tapped,” Abate explains. “A back-of-the-envelope calculation for the first seven kilometers of the Earth’s crust estimates that there is enough ultramafic rock to produce hydrogen for 250,000 years.”
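    The shape of that back-of-the-envelope estimate can be sketched numerically. The numbers below are illustrative placeholders, not figures from Abate's work; the point is only the structure of the calculation (accessible rock mass times hydrogen yield per kilogram, compared against annual demand):

```python
# Hedged back-of-envelope sketch of a rock-to-hydrogen supply estimate.
# All inputs are illustrative assumptions, not values from the study.

ROCK_MASS_KG = 1e19           # assumed reactive ultramafic rock in the upper crust (hypothetical)
H2_YIELD_KG_PER_KG = 1e-3     # assumed H2 yield per kg of rock via water-rock reaction (hypothetical)
ANNUAL_H2_DEMAND_KG = 9.4e10  # ~94 Mt/yr, roughly current global hydrogen consumption

total_h2_kg = ROCK_MASS_KG * H2_YIELD_KG_PER_KG
years_of_supply = total_h2_kg / ANNUAL_H2_DEMAND_KG
print(f"Potential H2: {total_h2_kg:.2e} kg -> ~{years_of_supply:.0f} years of supply")
```

    With these placeholder inputs the estimate lands in the hundred-thousand-year range; the 250,000-year figure quoted above would follow from a similar calculation with different assumed inputs.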

    The reactor envisioned by Abate injects water to create a reaction that releases hydrogen, while also supporting the injection of climate-altering carbon dioxide into the rock, providing a global carbon capacity of 100 trillion tons. At the same time, the reactor process could provide essential elements such as lithium, nickel, and cobalt — some of the most important raw materials used in advanced batteries and electronics.

    “Ultimately, our goal is to design and develop a scalable reactor for simultaneously tapping into the trifecta from the Earth’s subsurface,” Abate says.

    Sharks as oceanographers

    If we want to understand more about how oxygen levels in the world’s seas are disturbed by human activities and climate change, we should turn to a sensing platform “that has been honed by 400 million years of evolution to perfectly sample the ocean: sharks,” says Andrew Babbin.

    As the planet warms, oceans are projected to contain less dissolved oxygen, with impacts on the productivity of global fisheries, natural carbon sequestration, and the flux of climate-altering greenhouse gases from the ocean to the air. While scientists know dissolved oxygen is important, it has proved difficult to track over seasons, decades, and underexplored regions both shallow and deep.

    Babbin’s goal is to develop a low-cost sensor for dissolved oxygen that can be integrated with preexisting electronic shark tags used by marine biologists. “This fleet of sharks … will finally enable us to measure the extent of the low-oxygen zones of the ocean, how they change seasonally and with El Niño/La Niña oscillation, and how they expand or contract into the future.”
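    Once such tags report depth-tagged oxygen readings, locating a low-oxygen zone is essentially a binning problem. Here is a minimal sketch of that analysis step on synthetic data (the profile shape, noise level, and bin sizes are all assumptions, not the team's actual pipeline):

```python
import numpy as np

# Sketch: given simulated (depth, dissolved-O2) samples from tag downloads,
# bin by depth to locate the oxygen-minimum layer. Field data would replace
# the synthetic profile generated below.
rng = np.random.default_rng(0)
depth = rng.uniform(0, 1000, 5000)                    # sample depths, meters
o2 = 250 - 180 * np.exp(-((depth - 500) / 150) ** 2)  # synthetic O2 (umol/kg), minimum near 500 m
o2 += rng.normal(0, 5, depth.size)                    # sensor noise

bins = np.arange(0, 1001, 50)                         # 50 m depth bins
idx = np.digitize(depth, bins) - 1
mean_o2 = np.array([o2[idx == i].mean() for i in range(len(bins) - 1)])
omz_depth = bins[mean_o2.argmin()] + 25               # center of lowest-O2 bin
print(f"Oxygen minimum near {omz_depth:.0f} m")
```

    Repeating this over seasons and regions is what would let the tagged "fleet" track how low-oxygen zones expand or contract over time.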

    The partnership with sharks will also spotlight the importance of these often-maligned animals for global marine and fisheries health, Babbin says. “We hope in pursuing this work marrying microscopic and macroscopic life we will inspire future oceanographers and conservationists, and lead to a better appreciation for the chemistry that underlies global habitability.”

    Maternity wear that monitors fetal health

    There are 2 million stillbirths around the world each year, and in the United States alone, 21,000 families suffer this terrible loss. In many cases, mothers and their doctors had no warning of any abnormalities or changes in fetal health leading up to these deaths. Yoel Fink and colleagues are looking for a better way to monitor fetal health and provide proactive treatment.

    Fink is building on years of research on acoustic fabrics to design an affordable shirt for mothers that would monitor and communicate important details of fetal health. His team’s original research drew inspiration from the function of the eardrum, designing a fiber that could be woven into other fabrics to create a kind of fabric microphone.

    “Given the sensitivity of the acoustic fabrics in sensing these nanometer-scale vibrations, could a mother’s clothing transcend its conventional role and become a health monitor, picking up on the acoustic signals and subsequent vibrations that arise from her unborn baby’s heartbeat and motion?” Fink says. “Could a simple and affordable worn fabric allow an expecting mom to sleep better, knowing that her fetus is being listened to continuously?”

    The proposed maternity shirt could measure fetal heart and breathing rate, and might be able to give an indication of the fetal body position, he says. In the final stages of development, he and his colleagues hope to develop machine learning approaches that would identify abnormal fetal heart rate and motion and deliver real-time alerts.
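    The alerting idea can be illustrated with a toy baseline-deviation detector. This is a hypothetical sketch, not Fink's actual approach, which would use learned models on the acoustic signals; the heart-rate values, window, and threshold here are all assumptions:

```python
import numpy as np

# Minimal anomaly-detection sketch: flag fetal-heart-rate samples that
# deviate sharply from a rolling baseline (synthetic data throughout).
rng = np.random.default_rng(1)
fhr = rng.normal(140, 3, 300)   # synthetic fetal heart rate, bpm (normal ~110-160)
fhr[200:210] = 90               # injected bradycardia episode

window = 30                     # rolling-baseline length (assumed)
alerts = []
for t in range(window, len(fhr)):
    baseline = fhr[t - window:t]
    z = (fhr[t] - baseline.mean()) / (baseline.std() + 1e-9)
    if abs(z) > 4:              # threshold chosen for illustration
        alerts.append(t)

print(f"First alerts at samples: {alerts[:5]}")
```

    A deployed system would need to handle maternal movement, sensor dropout, and far subtler patterns, which is where the machine learning the team envisions comes in.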

    A basalt house in Iceland

    In the land of volcanoes, Skylar Tibbits wants to build a case-study home almost entirely from the basalt rock that makes up the Icelandic landscape.

    Architects are increasingly interested in building using one natural material — creating a monomaterial structure — that can be easily recycled. At the moment, the building industry accounts for 40 percent of carbon emissions worldwide and relies on many materials and structures, from metal to plastics to concrete, that can’t be easily disassembled or reused.

    The proposed basalt house in Iceland, a project co-led by J. Jih, associate professor of the practice in the Department of Architecture, is “an architecture that would be fully composed of the surrounding earth, that melts back into that surrounding earth at the end of its lifespan, and that can be recycled infinitely,” Tibbits explains.

    Basalt, the most common rock form in the Earth’s crust, can be spun into fibers for insulation and rebar. Basalt fiber performs as well as glass and carbon fibers at a lower cost in some applications, although it is not widely used in architecture. In cast form, it can make corrosion- and heat-resistant plumbing, cladding and flooring.

    “A monomaterial architecture is both a simple and radical proposal that unfortunately falls outside of traditional funding avenues,” says Tibbits. “The Bose grant is the perfect and perhaps the only option for our research, which we see as a uniquely achievable moonshot with transformative potential for the entire built environment.”

  •

    How light can vaporize water without the need for heat

    It’s the most fundamental of processes — the evaporation of water from the surfaces of oceans and lakes, the burning off of fog in the morning sun, and the drying of briny ponds that leaves solid salt behind. Evaporation is all around us, and humans have been observing it and making use of it for as long as we have existed.

    And yet, it turns out, we’ve been missing a major part of the picture all along.

    In a series of painstakingly precise experiments, a team of researchers at MIT has demonstrated that heat isn’t alone in causing water to evaporate. Light, striking the water’s surface where air and water meet, can break water molecules away and float them into the air, causing evaporation in the absence of any source of heat.

    The astonishing new discovery could have a wide range of significant implications. It could help explain mysterious measurements over the years of how sunlight affects clouds, and therefore affect calculations of the effects of climate change on cloud cover and precipitation. It could also lead to new ways of designing industrial processes such as solar-powered desalination or drying of materials.

    The findings, and the many different lines of evidence that demonstrate the reality of the phenomenon and the details of how it works, are described today in the journal PNAS, in a paper by Carl Richard Soderberg Professor of Power Engineering Gang Chen, postdocs Guangxin Lv and Yaodong Tu, and graduate student James Zhang.

    The authors say their study suggests that the effect should happen widely in nature — everywhere from clouds to fogs to the surfaces of oceans, soils, and plants — and that it could also lead to new practical applications, including in energy and clean water production. “I think this has a lot of applications,” Chen says. “We’re exploring all these different directions. And of course, it also affects the basic science, like the effects of clouds on climate, because clouds are the most uncertain aspect of climate models.”

    A newfound phenomenon

    The new work builds on research reported last year, which described this new “photomolecular effect” but only under very specialized conditions: on the surface of specially prepared hydrogels soaked with water. In the new study, the researchers demonstrate that the hydrogel is not necessary for the process; it occurs at any water surface exposed to light, whether it’s a flat surface like a body of water or a curved surface like a cloud droplet.

    Because the effect was so unexpected, the team worked to prove its existence with as many different lines of evidence as possible. In this study, they report 14 different kinds of tests and measurements they carried out to establish that water was indeed evaporating — that is, molecules of water were being knocked loose from the water’s surface and wafted into the air — because of the light alone, not because of heat, which was long assumed to be the only mechanism involved.

    One key indicator, which showed up consistently in four different kinds of experiments under different conditions, was that as the water began to evaporate from a test container under visible light, the air temperature measured above the water’s surface cooled down and then leveled off, showing that thermal energy was not the driving force behind the effect.

    Other key indicators included the way the evaporation effect varied with the angle of the light, its exact color, and its polarization. None of these variations should occur, because water hardly absorbs light at all at these wavelengths — and yet the researchers observed them.

    The effect is strongest when light hits the water surface at an angle of 45 degrees. It is also strongest with a certain type of polarization, called transverse magnetic polarization. And it peaks in green light — which, oddly, is the color for which water is most transparent and thus interacts the least.

    Chen and his co-researchers have proposed a physical mechanism that can explain the angle and polarization dependence of the effect, showing that the photons of light can impart a net force on water molecules at the water surface that is sufficient to knock them loose from the body of water. But they cannot yet account for the color dependence, which they say will require further study.
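    A quick order-of-magnitude check shows why this mechanism is surprising. The calculation below is illustrative only (standard physical constants, not the paper's model): the momentum of a single green photon, and the velocity kick it could give one water molecule if fully transferred:

```python
# Order-of-magnitude sketch of single-photon momentum transfer to water.
h = 6.626e-34        # Planck constant, J*s
wavelength = 520e-9  # green light, m

p_photon = h / wavelength        # photon momentum, kg*m/s
m_h2o = 18 * 1.66e-27            # mass of one water molecule, kg
dv = p_photon / m_h2o            # velocity kick if momentum fully transferred
print(f"Photon momentum: {p_photon:.2e} kg*m/s; velocity kick: {dv:.2e} m/s")
```

    The resulting kick, a few centimeters per second, is far below the hundreds of meters per second of thermal molecular motion — which is why a net force at the surface, rather than simple single-photon recoil, is needed to explain molecules breaking free.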

    They have named this the photomolecular effect, by analogy with the photoelectric effect that was discovered by Heinrich Hertz in 1887 and finally explained by Albert Einstein in 1905. That effect was one of the first demonstrations that light also has particle characteristics, which had major implications in physics and led to a wide variety of applications, including LEDs. Just as the photoelectric effect liberates electrons from atoms in a material in response to being hit by a photon of light, the photomolecular effect shows that photons can liberate entire molecules from a liquid surface, the researchers say.

    “The finding of evaporation caused by light instead of heat provides new disruptive knowledge of light-water interaction,” says Xiulin Ruan, professor of mechanical engineering at Purdue University, who was not involved in the study. “It could help us gain new understanding of how sunlight interacts with cloud, fog, oceans, and other natural water bodies to affect weather and climate. It has significant potential practical applications such as high-performance water desalination driven by solar energy. This research is among the rare group of truly revolutionary discoveries which are not widely accepted by the community right away but take time, sometimes a long time, to be confirmed.”

    Solving a cloud conundrum

    The finding may solve an 80-year-old mystery in climate science. Measurements of how clouds absorb sunlight have often shown that they are absorbing more sunlight than conventional physics dictates possible. The additional evaporation caused by this effect could account for the longstanding discrepancy, which has been a subject of dispute since such measurements are difficult to make.

    “Those experiments are based on satellite data and flight data,” Chen explains. “They fly an airplane on top of and below the clouds, and there are also data based on the ocean temperature and radiation balance. And they all conclude that there is more absorption by clouds than theory could calculate. However, due to the complexity of clouds and the difficulties of making such measurements, researchers have been debating whether such discrepancies are real or not. And what we discovered suggests that hey, there’s another mechanism for cloud absorption, which was not accounted for, and this mechanism might explain the discrepancies.”

    Chen says he recently spoke about the phenomenon at an American Physical Society conference, and one physicist there who studies clouds and climate said they had never thought about this possibility, which could affect calculations of the complex effects of clouds on climate. The team conducted experiments using LEDs shining on an artificial cloud chamber, and they observed heating of the fog, which was not supposed to happen since water does not absorb in the visible spectrum. “Such heating can be explained based on the photomolecular effect more easily,” he says.

    Lv says that of the many lines of evidence, “the flat region in the air-side temperature distribution above hot water will be the easiest for people to reproduce.” That temperature profile “is a signature” that demonstrates the effect clearly, he says.

    Zhang adds: “It is quite hard to explain how this kind of flat temperature profile comes about without invoking some other mechanism” beyond the accepted theories of thermal evaporation. “It ties together what a whole lot of people are reporting in their solar desalination devices,” which again show evaporation rates that cannot be explained by the thermal input.

    The effect can be substantial. Under the optimum conditions of color, angle, and polarization, Lv says, “the evaporation rate is four times the thermal limit.”
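    The "thermal limit" being exceeded here is straightforward to compute: it is the evaporation rate you get if every watt of absorbed power goes into latent heat. The input flux below is an assumed value (roughly one sun), not a number from the paper:

```python
# Illustrative thermal-limit comparison with assumed inputs.
L_VAP = 2.45e6        # latent heat of vaporization of water, J/kg (~25 C)
input_flux = 1000.0   # assumed absorbed flux, W/m^2 (roughly one sun)

thermal_limit = input_flux / L_VAP     # max evaporation from heat alone, kg/(m^2*s)
photomolecular = 4 * thermal_limit     # reported factor under optimal conditions
print(f"Thermal limit: {thermal_limit*3600:.2f} kg/m^2/hr; "
      f"with photomolecular effect: {photomolecular*3600:.2f} kg/m^2/hr")
```

    Any measured rate above that first number cannot be explained by the energy budget of heating alone, which is what makes the fourfold figure striking.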

    Already, since publication of the first paper, the team has been approached by companies that hope to harness the effect, Chen says, including for evaporating syrup and drying paper in a paper mill. The likeliest first applications will come in the areas of solar desalinization systems or other industrial drying processes, he says. “Drying consumes 20 percent of all industrial energy usage,” he points out.

    Because the effect is so new and unexpected, Chen says, “This phenomenon should be very general, and our experiment is really just the beginning.” The experiments needed to demonstrate and quantify the effect are very time-consuming. “There are many variables, from understanding water itself, to extending to other materials, other liquids and even solids,” he says.

    “The observations in the manuscript point to a new physical mechanism that foundationally alters our thinking on the kinetics of evaporation,” says Shannon Yee, an associate professor of mechanical engineering at Georgia Tech, who was not associated with this work. He adds, “Who would have thought that we are still learning about something as quotidian as water evaporating?”

    “I think this work is very significant scientifically because it presents a new mechanism,” says University of Alberta Distinguished Professor Janet A.W. Elliott, who also was not associated with this work. “It may also turn out to be practically important for technology and our understanding of nature, because evaporation of water is ubiquitous and the effect appears to deliver significantly higher evaporation rates than the known thermal mechanism. …  My overall impression is this work is outstanding. It appears to be carefully done with many precise experiments lending support for one another.”

    The work was partly supported by an MIT Bose Award.

  •

    Using deep learning to image the Earth’s planetary boundary layer

    The troposphere is the layer of the atmosphere closest to the Earth’s surface, but it is the troposphere’s lowest portion — the planetary boundary layer (PBL) — that most significantly influences weather near the surface. In the 2018 planetary science decadal survey, the PBL was raised as an important scientific issue with the potential to enhance storm forecasting and improve climate projections.

    “The PBL is where the surface interacts with the atmosphere, including exchanges of moisture and heat that help lead to severe weather and a changing climate,” says Adam Milstein, a technical staff member in Lincoln Laboratory’s Applied Space Systems Group. “The PBL is also where humans live, and the turbulent movement of aerosols throughout the PBL is important for air quality that influences human health.” 

    Although vital for studying weather and climate, important features of the PBL, such as its height, are difficult to resolve with current technology. In the past four years, Lincoln Laboratory staff have been studying the PBL, focusing on two different tasks: using machine learning to make 3D-scanned profiles of the atmosphere, and resolving the vertical structure of the atmosphere more clearly in order to better predict droughts.  

    This PBL-focused research effort builds on more than a decade of related work on fast, operational neural network algorithms developed by Lincoln Laboratory for NASA missions. These missions include the Time-Resolved Observations of Precipitation structure and storm Intensity with a Constellation of Smallsats (TROPICS) mission as well as Aqua, a satellite that collects data about Earth’s water cycle and observes variables such as ocean temperature, precipitation, and water vapor in the atmosphere. These algorithms retrieve temperature and humidity from the satellite instrument data and have been shown to significantly improve the accuracy and usable global coverage of the observations over previous approaches. For TROPICS, the algorithms help retrieve data that are used to characterize a storm’s rapidly evolving structures in near-real time, and for Aqua, they have helped improve forecasting models, drought monitoring, and fire prediction.

    These operational algorithms for TROPICS and Aqua are based on classic “shallow” neural networks to maximize speed and simplicity, creating a one-dimensional vertical profile for each spectral measurement collected by the instrument over each location. While this approach has improved observations of the atmosphere down to the surface overall, including the PBL, laboratory staff determined that newer “deep” learning techniques that treat the atmosphere over a region of interest as a three-dimensional image are needed to improve PBL details further.
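    The contrast between the two approaches can be shown schematically. The sketch below (shapes and sizes are assumptions, not the laboratory's actual networks) illustrates the "shallow" style: each location's spectral measurement is mapped to a vertical profile independently, so the retrieved 3D cube never benefits from neighboring columns the way a deep 3D-image model would:

```python
import numpy as np

# Schematic "shallow" retrieval: one small network applied per location,
# producing a vertical profile per spectral measurement (shapes illustrative).
rng = np.random.default_rng(0)
n_channels, n_levels = 12, 40        # assumed instrument channels / profile levels
grid = (8, 8)                        # assumed horizontal scene size

W1 = rng.normal(size=(n_channels, 32))   # random weights stand in for trained ones
W2 = rng.normal(size=(32, n_levels))

def shallow_retrieve(spectra):           # spectra: (n_channels,)
    return np.tanh(spectra @ W1) @ W2    # -> (n_levels,) vertical profile

scene = rng.normal(size=(*grid, n_channels))
profiles = np.stack([[shallow_retrieve(scene[i, j]) for j in range(grid[1])]
                     for i in range(grid[0])])
print(profiles.shape)  # a 3D retrieved cube, but each column solved independently
```

    A deep 3D approach would instead consume the whole `scene` at once (for example with 3D convolutions), letting horizontal structure inform each vertical profile — the statistical advantage Milstein describes below.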

    “We hypothesized that deep learning and artificial intelligence (AI) techniques could improve on current approaches by incorporating a better statistical representation of 3D temperature and humidity imagery of the atmosphere into the solutions,” Milstein says. “But it took a while to figure out how to create the best dataset — a mix of real and simulated data; we needed to prepare to train these techniques.”

    The team collaborated with Joseph Santanello of the NASA Goddard Space Flight Center and William Blackwell, also of the Applied Space Systems Group, in a recent NASA-funded effort showing that these retrieval algorithms can improve PBL detail, including more accurate determination of the PBL height than the previous state of the art. 

    While improved knowledge of the PBL is broadly useful for increasing understanding of climate and weather, one key application is prediction of droughts. According to a Global Drought Snapshot report released last year, droughts are a pressing planetary issue that the global community needs to address. Lack of humidity near the surface, specifically at the level of the PBL, is the leading indicator of drought. While previous studies using remote-sensing techniques have examined the humidity of soil to determine drought risk, studying the atmosphere can help predict when droughts will happen.  

    In an effort funded by Lincoln Laboratory’s Climate Change Initiative, Milstein, along with laboratory staff member Michael Pieper, is working with scientists at NASA’s Jet Propulsion Laboratory (JPL) to use neural network techniques to improve drought prediction over the continental United States. While the work builds on existing operational work JPL has done incorporating (in part) the laboratory’s operational “shallow” neural network approach for Aqua, the team believes that this work and the PBL-focused deep learning research can be combined to further improve the accuracy of drought prediction.

    “Lincoln Laboratory has been working with NASA for more than a decade on neural network algorithms for estimating temperature and humidity in the atmosphere from space-borne infrared and microwave instruments, including those on the Aqua spacecraft,” Milstein says. “Over that time, we have learned a lot about this problem by working with the science community, including learning about what scientific challenges remain. Our long experience working on this type of remote sensing with NASA scientists, as well as our experience with using neural network techniques, gave us a unique perspective.”

    According to Milstein, the next step for this project is to compare the deep learning results to datasets from the National Oceanic and Atmospheric Administration, NASA, and the Department of Energy collected directly in the PBL using radiosondes, a type of instrument flown on a weather balloon. “These direct measurements can be considered a kind of ‘ground truth’ to quantify the accuracy of the techniques we have developed,” Milstein says.
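    The validation step Milstein describes is a straightforward comparison of retrieved profiles against the radiosonde "ground truth." A minimal sketch with synthetic stand-in data (the error statistics below are invented for illustration):

```python
import numpy as np

# Score retrieved humidity profiles against radiosonde truth with per-level
# bias and RMSE. Arrays here are synthetic placeholders for real soundings.
rng = np.random.default_rng(2)
radiosonde = rng.uniform(0.2, 0.9, (100, 40))   # truth: 100 sondes x 40 levels
retrieved = radiosonde + rng.normal(0.01, 0.05, radiosonde.shape)  # retrieval + error

err = retrieved - radiosonde
bias = err.mean(axis=0)                          # per-level mean error
rmse = np.sqrt((err ** 2).mean(axis=0))          # per-level root-mean-square error
print(f"mean bias: {bias.mean():.3f}, mean RMSE: {rmse.mean():.3f}")
```

    Plotted against altitude, these per-level scores show exactly where in the PBL a retrieval technique is gaining or losing accuracy.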

    This improved neural network approach holds promise to demonstrate drought prediction that can exceed the capabilities of existing indicators, Milstein says, and to be a tool that scientists can rely on for decades to come.

  •

    Propelling atomically layered magnets toward green computers

    Globally, computation is booming at an unprecedented rate, fueled by the rise of artificial intelligence. With this, the staggering energy demand of the world’s computing infrastructure has become a major concern, and the development of computing devices that are far more energy-efficient is a leading challenge for the scientific community.

    Use of magnetic materials to build computing devices like memories and processors has emerged as a promising avenue for creating “beyond-CMOS” computers, which would use far less energy than traditional computers. Magnetization switching in magnets can be used in computation the same way a transistor switches between open and closed to represent the 0s and 1s of binary code.

    While much of the research along this direction has focused on using bulk magnetic materials, a new class of magnetic materials — called two-dimensional van der Waals magnets — provides superior properties that can improve the scalability and energy efficiency of magnetic devices to make them commercially viable. 

    Although the benefits of shifting to 2D magnetic materials are evident, their practical introduction into computers has been hindered by some fundamental challenges. Until recently, 2D magnetic materials could operate only at very low temperatures, much like superconductors. So bringing their operating temperatures above room temperature has remained a primary goal. Additionally, for use in computers, it is important that they can be controlled electrically, without the need for magnetic fields. Bridging this fundamental gap, where 2D magnetic materials can be electrically switched above room temperature without any magnetic fields, could potentially catapult the translation of 2D magnets into the next generation of “green” computers.

    A team of MIT researchers has now achieved this critical milestone by designing a “van der Waals atomically layered heterostructure” device where a 2D van der Waals magnet, iron gallium telluride, is interfaced with another 2D material, tungsten ditelluride. In an open-access paper published March 15 in Science Advances, the team shows that the magnet can be toggled between the 0 and 1 states simply by applying pulses of electrical current across their two-layer device. 


    The Future of Spintronics: Manipulating Spins in Atomic Layers without External Magnetic Fields. Video: Deblina Sarkar

    “Our device enables robust magnetization switching without the need for an external magnetic field, opening up unprecedented opportunities for ultra-low power and environmentally sustainable computing technology for big data and AI,” says lead author Deblina Sarkar, the AT&T Career Development Assistant Professor at the MIT Media Lab and Center for Neurobiological Engineering, and head of the Nano-Cybernetic Biotrek research group. “Moreover, the atomically layered structure of our device provides unique capabilities including improved interface and possibilities of gate voltage tunability, as well as flexible and transparent spintronic technologies.”

    Sarkar is joined on the paper by first author Shivam Kajale, a graduate student in Sarkar’s research group at the Media Lab; Thanh Nguyen, a graduate student in the Department of Nuclear Science and Engineering (NSE); Nguyen Tuan Hung, an MIT visiting scholar in NSE and an assistant professor at Tohoku University in Japan; and Mingda Li, associate professor of NSE.

    Breaking the mirror symmetries 

    When electric current flows through heavy metals like platinum or tantalum, the electrons get segregated in the materials based on their spin component, a phenomenon called the spin Hall effect, says Kajale. The way this segregation happens depends on the material, and particularly its symmetries.

    “The conversion of electric current to spin currents in heavy metals lies at the heart of controlling magnets electrically,” Kajale notes. “The microscopic structure of conventionally used materials, like platinum, has a kind of mirror symmetry, which restricts the spin currents only to in-plane spin polarization.”

    Kajale explains that two mirror symmetries must be broken to produce an “out-of-plane” spin component that can be transferred to a magnetic layer to induce field-free switching. “Electrical current can ‘break’ the mirror symmetry along one plane in platinum, but its crystal structure prevents the mirror symmetry from being broken in a second plane.”

    In their earlier experiments, the researchers used a small magnetic field to break the second mirror plane. To get rid of the need for a magnetic nudge, Kajale, Sarkar, and colleagues looked instead for a material with a structure that could break the second mirror plane without outside help. This led them to another 2D material, tungsten ditelluride. The tungsten ditelluride that the researchers used has an orthorhombic crystal structure. The material itself has one broken mirror plane. Thus, by applying current along its low-symmetry axis (parallel to the broken mirror plane), the resulting spin current has an out-of-plane spin component that can directly induce switching in the ultra-thin magnet interfaced with the tungsten ditelluride.

    “Because it’s also a 2D van der Waals material, it can also ensure that when we stack the two materials together, we get pristine interfaces and a good flow of electron spins between the materials,” says Kajale. 

    Becoming more energy-efficient 

    Computer memory and processors built from magnetic materials use less energy than traditional silicon-based devices. And the van der Waals magnets can offer higher energy efficiency and better scalability compared to bulk magnetic materials, the researchers note.

    The electrical current density used for switching the magnet determines how much energy is dissipated during switching: a lower density means a more energy-efficient device. “The new design has one of the lowest current densities in van der Waals magnetic materials,” Kajale says. “The switching current required is an order of magnitude lower than in bulk materials, which translates to something like two orders of magnitude improvement in energy efficiency.”
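    The scaling behind that claim is simple: Joule dissipation grows with the square of the current density, so a tenfold reduction in density yields roughly a hundredfold energy saving. The device parameters below are placeholders chosen for illustration, not values from the paper:

```python
# Why lower switching current density means outsized energy savings:
# E = rho * J^2 * V * t, so energy scales as J^2. Numbers are placeholders.
rho = 1e-6          # assumed resistivity, ohm*m
volume = 1e-21      # assumed active device volume, m^3 (nanoscale)
pulse = 1e-9        # assumed current-pulse duration, s

def switching_energy(J):                      # J: current density, A/m^2
    return rho * (J ** 2) * volume * pulse

E_bulk = switching_energy(1e12)   # representative bulk-scale density (placeholder)
E_vdw = switching_energy(1e11)    # 10x lower, as reported for the 2D device
print(f"Energy ratio (bulk / van der Waals): {E_bulk / E_vdw:.0f}x")
```

    The quadratic dependence is why shaving one order of magnitude off the current density is worth two orders of magnitude in dissipated energy.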

    The research team is now looking at similar low-symmetry van der Waals materials to see if they can reduce current density even further. They are also hoping to collaborate with other researchers to find ways to manufacture the 2D magnetic switch devices at commercial scale. 

    This work was carried out, in part, using the facilities at MIT.nano. It was funded by the Media Lab, the U.S. National Science Foundation, and the U.S. Department of Energy.

  •

    Atmospheric observations in China show rise in emissions of a potent greenhouse gas

    To achieve the aspirational goal of the Paris Agreement on climate change — limiting the increase in global average surface temperature to 1.5 degrees Celsius above preindustrial levels — will require its 196 signatories to dramatically reduce their greenhouse gas (GHG) emissions. Those greenhouse gases differ widely in their global warming potential (GWP), or ability to absorb radiative energy and thereby warm the Earth’s surface. For example, measured over a 100-year period, the GWP of methane is about 28 times that of carbon dioxide (CO2), and the GWP of sulfur hexafluoride (SF6) is 24,300 times that of CO2, according to the Intergovernmental Panel on Climate Change (IPCC) Sixth Assessment Report. 

    Used primarily in high-voltage electrical switchgear in electric power grids, SF6 is one of the most potent greenhouse gases on Earth. In the 21st century, atmospheric concentrations of SF6 have risen sharply along with global electric power demand, threatening the world’s efforts to stabilize the climate. This heightened demand for electric power is particularly pronounced in China, which has dominated the expansion of the global power industry in the past decade. Quantifying China’s contribution to global SF6 emissions — and pinpointing its sources in the country — could lead that nation to implement new measures to reduce them, and thereby reduce, if not eliminate, an impediment to the Paris Agreement’s aspirational goal. 

    To that end, a new study by researchers at the MIT Joint Program on the Science and Policy of Global Change, Fudan University, Peking University, University of Bristol, and Meteorological Observation Center of China Meteorological Administration determined total SF6 emissions in China over 2011-21 from atmospheric observations collected from nine stations within a Chinese network, including one station from the Advanced Global Atmospheric Gases Experiment (AGAGE) network. For comparison, global total emissions were determined from five globally distributed, relatively unpolluted “background” AGAGE stations, involving additional researchers from the Scripps Institution of Oceanography and CSIRO, Australia’s National Science Agency.

    The researchers found that SF6 emissions in China almost doubled from 2.6 gigagrams (Gg) per year in 2011, when they accounted for 34 percent of global SF6 emissions, to 5.1 Gg per year in 2021, when they accounted for 57 percent of global total SF6 emissions. This increase from China over the 10-year period — some of it emerging from the country’s less-populated western regions — was larger than the global total SF6 emissions rise, highlighting the importance of lowering SF6 emissions from China in the future.
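    To put those figures in CO2-equivalent terms, the emissions reported above can be multiplied by the AR6 100-year GWP for SF6 cited earlier. A minimal sketch of that arithmetic (the conversion function and its name are illustrative, not from the study):

    ```python
    # Convert China's SF6 emissions (from the study) into CO2-equivalents,
    # using the IPCC AR6 100-year GWP for SF6 cited above.
    GWP_SF6 = 24_300  # 100-year global warming potential of SF6

    def co2_equivalent_mt(sf6_gg_per_year: float) -> float:
        """Convert SF6 emissions in gigagrams/year to megatonnes CO2-eq/year.

        1 Gg = 1,000 tonnes; 1 Mt = 1,000,000 tonnes.
        """
        tonnes_sf6 = sf6_gg_per_year * 1_000
        return tonnes_sf6 * GWP_SF6 / 1_000_000

    print(co2_equivalent_mt(2.6))  # 2011 emissions -> ~63 Mt CO2-eq per year
    print(co2_equivalent_mt(5.1))  # 2021 emissions -> ~124 Mt CO2-eq per year
    ```

    By this measure, China’s 2021 SF6 emissions alone carry roughly the century-scale warming impact of over a hundred megatonnes of CO2 per year.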

    The open-access study, which appears in the journal Nature Communications, explores prospects for future SF6 emissions reduction in China.

    “Adopting maintenance practices that minimize SF6 leakage rates or using SF6-free equipment or SF6 substitutes in the electric power grid will benefit greenhouse-gas mitigation in China,” says Minde An, a postdoc at the MIT Center for Global Change Science (CGCS) and the study’s lead author. “We see our findings as a first step in quantifying the problem and identifying how it can be addressed.”

    Once emitted, SF6 is expected to persist in the atmosphere for more than 1,000 years, raising the stakes for policymakers in China and around the world.

    “Any increase in SF6 emissions this century will effectively alter our planet’s radiative budget — the balance between incoming energy from the sun and outgoing energy from the Earth — far beyond the multi-decadal time frame of current climate policies,” says MIT Joint Program and CGCS Director Ronald Prinn, a coauthor of the study. “So it’s imperative that China and all other nations take immediate action to reduce, and ultimately eliminate, their SF6 emissions.”

    The study was supported by the National Key Research and Development Program of China and Shanghai B&R Joint Laboratory Project, the U.S. National Aeronautics and Space Administration, and other funding agencies.

  • in

    Engineers find a new way to convert carbon dioxide into useful products

    MIT chemical engineers have devised an efficient way to convert carbon dioxide to carbon monoxide, a chemical precursor that can be used to generate useful compounds such as ethanol and other fuels.

    If scaled up for industrial use, this process could help to remove carbon dioxide from power plants and other sources, reducing the amount of greenhouse gases that are released into the atmosphere.

    “This would allow you to take carbon dioxide from emissions or dissolved in the ocean, and convert it into profitable chemicals. It’s really a path forward for decarbonization because we can take CO2, which is a greenhouse gas, and turn it into things that are useful for chemical manufacture,” says Ariel Furst, the Paul M. Cook Career Development Assistant Professor of Chemical Engineering and the senior author of the study.

    The new approach uses electricity to perform the chemical conversion, with help from a catalyst that is tethered to the electrode surface by strands of DNA. This DNA acts like Velcro to keep all the reaction components in close proximity, making the reaction much more efficient than if all the components were floating in solution.

    Furst has started a company called Helix Carbon to further develop the technology. Former MIT postdoc Gang Fan is the lead author of the paper, which appears in the Journal of the American Chemical Society Au. Other authors include Nathan Corbin PhD ’21, Minju Chung PhD ’23, former MIT postdocs Thomas Gill and Amruta Karbelkar, and Evan Moore ’23.

    Breaking down CO2

    Converting carbon dioxide into useful products requires first turning it into carbon monoxide. One way to do this is with electricity, but the amount of energy required for that type of electrocatalysis is prohibitively expensive.

    To try to bring down those costs, researchers have tried using electrocatalysts, which can speed up the reaction and reduce the amount of energy that needs to be added to the system. One type of catalyst used for this reaction is a class of molecules known as porphyrins, which contain metals such as iron or cobalt and are similar in structure to the heme molecules that carry oxygen in blood. 

    During this type of electrochemical reaction, carbon dioxide is dissolved in water within an electrochemical device, which contains an electrode that drives the reaction. The catalysts are also suspended in the solution. However, this setup isn’t very efficient because the carbon dioxide and the catalysts need to encounter each other at the electrode surface, which doesn’t happen very often.

    To make the reaction occur more frequently, which would boost the efficiency of the electrochemical conversion, Furst began working on ways to attach the catalysts to the surface of the electrode. DNA seemed to be the ideal choice for this application.

    “DNA is relatively inexpensive, you can modify it chemically, and you can control the interaction between two strands by changing the sequences,” she says. “It’s like a sequence-specific Velcro that has very strong but reversible interactions that you can control.”

    To attach single strands of DNA to a carbon electrode, the researchers used two “chemical handles,” one on the DNA and one on the electrode. These handles can be snapped together, forming a permanent bond. A complementary DNA sequence is then attached to the porphyrin catalyst, so that when the catalyst is added to the solution, it will bind reversibly to the DNA that’s already attached to the electrode — just like Velcro.

    Once this system is set up, the researchers apply a potential (or bias) to the electrode, and the catalyst uses this energy to convert carbon dioxide in the solution into carbon monoxide. The reaction also generates a small amount of hydrogen gas from the water. After the catalysts wear out, they can be released from the surface by heating the system to break the reversible bonds between the two DNA strands, and replaced with new ones.

    An efficient reaction

    Using this approach, the researchers were able to boost the Faradaic efficiency of the reaction to 100 percent, meaning that all of the electrons flowing into the system go into the desired chemical reaction, with none lost to side reactions. When the catalysts are not tethered by DNA, the Faradaic efficiency is only about 40 percent.
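    Faradaic efficiency compares the charge that ended up in the target product against the total charge passed. A minimal sketch of that bookkeeping for CO2-to-CO conversion (the numbers in the example are illustrative, not measurements from the paper):

    ```python
    # Faradaic efficiency: fraction of the charge passed through the cell
    # that ends up in the desired product (here, CO). Reducing one CO2
    # molecule to CO takes two electrons.
    F = 96_485.0      # Faraday constant, coulombs per mole of electrons
    N_ELECTRONS = 2   # electrons per CO2 -> CO conversion

    def faradaic_efficiency(moles_co: float, charge_coulombs: float) -> float:
        """Fraction of total charge that went into making CO."""
        charge_to_co = moles_co * N_ELECTRONS * F
        return charge_to_co / charge_coulombs

    # Example: 100 C passed, and every electron makes CO -> efficiency 1.0
    moles = 100.0 / (N_ELECTRONS * F)
    print(faradaic_efficiency(moles, 100.0))  # -> 1.0
    ```

    A Faradaic efficiency of 0.4, as with the untethered catalysts, means 60 percent of the charge was consumed by side reactions such as hydrogen evolution.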

    This technology could be scaled up for industrial use fairly easily, Furst says, because the carbon electrodes the researchers used are much less expensive than conventional metal electrodes. The catalysts are also inexpensive, as they don’t contain any precious metals, and only a small concentration of the catalyst is needed on the electrode surface.

    By swapping in different catalysts, the researchers plan to try making other products such as methanol and ethanol using this approach. Helix Carbon, the company started by Furst, is also working on further developing the technology for potential commercial use.

    The research was funded by the U.S. Army Research Office, the CIFAR Azrieli Global Scholars Program, the MIT Energy Initiative, and the MIT Deshpande Center.

  • in

    MIT-derived algorithm helps forecast the frequency of extreme weather

    To assess a community’s risk of extreme weather, policymakers rely first on global climate models that can be run decades, and even centuries, forward in time, but only at a coarse resolution. These models might be used to gauge, for instance, future climate conditions for the northeastern U.S., but not specifically for Boston.

    To estimate Boston’s future risk of extreme weather such as flooding, policymakers can combine a coarse model’s large-scale predictions with a finer-resolution model, tuned to estimate how often Boston is likely to experience damaging floods as the climate warms. But this risk analysis is only as accurate as the predictions from that first, coarser climate model.

    “If you get those wrong for large-scale environments, then you miss everything in terms of what extreme events will look like at smaller scales, such as over individual cities,” says Themistoklis Sapsis, the William I. Koch Professor and director of the Center for Ocean Engineering in MIT’s Department of Mechanical Engineering.

    Sapsis and his colleagues have now developed a method to “correct” the predictions from coarse climate models. By combining machine learning with dynamical systems theory, the team’s approach “nudges” a climate model’s simulations into more realistic patterns over large scales. When paired with smaller-scale models to predict specific weather events such as tropical cyclones or floods, the team’s approach produced more accurate predictions for how often specific locations will experience those events over the next few decades, compared to predictions made without the correction scheme.

    This animation shows the evolution of storms around the northern hemisphere, as a result of a high-resolution storm model, combined with the MIT team’s corrected global climate model. The simulation improves the modeling of extreme values for wind, temperature, and humidity, which typically have significant errors in coarse scale models. Credit: Courtesy of Ruby Leung and Shixuan Zhang, PNNL

    Sapsis says the new correction scheme is general in form and can be applied to any global climate model. Once corrected, the models can help to determine where and how often extreme weather will strike as global temperatures rise over the coming years. 

    “Climate change will have an effect on every aspect of human life, and every type of life on the planet, from biodiversity to food security to the economy,” Sapsis says. “If we have capabilities to know accurately how extreme weather will change, especially over specific locations, it can make a lot of difference in terms of preparation and doing the right engineering to come up with solutions. This is the method that can open the way to do that.”

    The team’s results appear today in the Journal of Advances in Modeling Earth Systems. The study’s MIT co-authors include postdoc Benedikt Barthel Sorensen and Alexis-Tzianni Charalampopoulos SM ’19, PhD ’23, with Shixuan Zhang, Bryce Harrop, and Ruby Leung of the Pacific Northwest National Laboratory in Washington state.

    Over the hood

    Today’s large-scale climate models simulate weather features such as the average temperature, humidity, and precipitation around the world, on a grid-by-grid basis. Running simulations of these models takes enormous computing power, and in order to simulate how weather features will interact and evolve over periods of decades or longer, models average out features every 100 kilometers or so.

    “It’s a very heavy computation requiring supercomputers,” Sapsis notes. “But these models still do not resolve very important processes like clouds or storms, which occur over smaller scales of a kilometer or less.”
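    The ~100-kilometer averaging described above amounts to block-averaging a fine field onto a coarse grid. A minimal sketch of that coarse-graining (grid sizes are arbitrary and chosen only to show the idea, not taken from any particular model):

    ```python
    # Coarse-graining a fine 2D field by block-averaging, the kind of
    # resolution loss a ~100 km global model imposes on ~1 km features.
    import numpy as np

    def block_average(field: np.ndarray, factor: int) -> np.ndarray:
        """Average a 2D field over non-overlapping factor x factor blocks."""
        ny, nx = field.shape
        blocks = field.reshape(ny // factor, factor, nx // factor, factor)
        return blocks.mean(axis=(1, 3))

    fine = np.random.rand(200, 200)     # e.g., 200 x 200 one-km cells
    coarse = block_average(fine, 100)   # -> 2 x 2 cells, ~100 km each
    print(coarse.shape)                 # -> (2, 2)
    ```

    Anything smaller than a block, such as a single storm cell, simply disappears into the average, which is why coarse models miss kilometer-scale processes.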

    To improve the resolution of these coarse climate models, scientists typically have gone under the hood to try to fix a model’s underlying dynamical equations, which describe how phenomena in the atmosphere and oceans should physically interact.

    “People have tried to dissect into climate model codes that have been developed over the last 20 to 30 years, which is a nightmare, because you can lose a lot of stability in your simulation,” Sapsis explains. “What we’re doing is a completely different approach, in that we’re not trying to correct the equations but instead correct the model’s output.”

    The team’s new approach takes a model’s output, or simulation, and overlays an algorithm that nudges the simulation toward something that more closely represents real-world conditions. The algorithm is based on a machine-learning scheme that takes in data, such as past information for temperature and humidity around the world, and learns associations within the data that represent fundamental dynamics among weather features. The algorithm then uses these learned associations to correct a model’s predictions.
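    The output-correction idea above can be caricatured with a far simpler stand-in: fit a map from a model’s output to reference data, then apply it to new simulations. This is only a minimal linear sketch on synthetic data, not the team’s actual scheme, which combines machine learning with dynamical systems theory:

    ```python
    # A minimal caricature of output correction: learn a map from a biased
    # "coarse model" to reference data, then apply it to new simulations.
    # All data here is synthetic; the real method is far more sophisticated.
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic "truth" and a damped, biased "coarse model" view of it
    truth = rng.normal(size=(1000, 5))
    coarse = 0.7 * truth + 0.5 + 0.1 * rng.normal(size=truth.shape)

    # Fit a linear correction coarse -> truth via least squares
    X = np.hstack([coarse, np.ones((len(coarse), 1))])  # add bias column
    W, *_ = np.linalg.lstsq(X, truth, rcond=None)

    def correct(sim: np.ndarray) -> np.ndarray:
        """Nudge a new simulation toward the learned reference behavior."""
        return np.hstack([sim, np.ones((len(sim), 1))]) @ W

    # A fresh simulation with the same 0.5 bias; correction removes it
    new_sim = 0.7 * rng.normal(size=(100, 5)) + 0.5
    corrected = correct(new_sim)
    print(abs(corrected.mean()) < abs(new_sim.mean()))  # bias largely removed
    ```

    The appeal of correcting output rather than equations is visible even here: the model code (the `0.7 * truth + 0.5` stand-in) is never touched.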

    “What we’re doing is trying to correct dynamics, as in how an extreme weather feature, such as the windspeeds during a Hurricane Sandy event, will look like in the coarse model, versus in reality,” Sapsis says. “The method learns dynamics, and dynamics are universal. Having the correct dynamics eventually leads to correct statistics, for example, frequency of rare extreme events.”

    Climate correction

    As a first test of their new approach, the team used the machine-learning scheme to correct simulations produced by the Energy Exascale Earth System Model (E3SM), a climate model run by the U.S. Department of Energy that simulates climate patterns around the world at a resolution of 110 kilometers. The researchers used eight years of past data for temperature, humidity, and wind speed to train their new algorithm, which learned dynamical associations between the measured weather features and the E3SM model. They then ran the climate model forward in time for about 36 years and applied the trained algorithm to the model’s simulations. They found that the corrected version produced climate patterns that more closely matched real-world observations from the last 36 years, which were not used for training.

    “We’re not talking about huge differences in absolute terms,” Sapsis says. “An extreme event in the uncorrected simulation might be 105 degrees Fahrenheit, versus 115 degrees with our corrections. But for humans experiencing this, that is a big difference.”

    When the team then paired the corrected coarse model with a specific, finer-resolution model of tropical cyclones, they found the approach accurately reproduced the frequency of extreme storms in specific locations around the world.

    “We now have a coarse model that can get you the right frequency of events, for the present climate. It’s much more improved,” Sapsis says. “Once we correct the dynamics, this is a relevant correction, even when you have a different average global temperature, and it can be used for understanding how forest fires, flooding events, and heat waves will look in a future climate. Our ongoing work is focusing on analyzing future climate scenarios.”

    “The results are particularly impressive as the method shows promising results on E3SM, a state-of-the-art climate model,” says Pedram Hassanzadeh, an associate professor who leads the Climate Extremes Theory and Data group at the University of Chicago and was not involved with the study. “It would be interesting to see what climate change projections this framework yields once future greenhouse-gas emission scenarios are incorporated.”

    This work was supported, in part, by the U.S. Defense Advanced Research Projects Agency.

  • in

    Artificial reef designed by MIT engineers could protect marine life, reduce storm damage

    The beautiful, gnarled, nooked-and-crannied reefs that surround tropical islands serve as a marine refuge and natural buffer against stormy seas. But as the effects of climate change bleach and break down coral reefs around the world, and extreme weather events become more common, coastal communities are left increasingly vulnerable to frequent flooding and erosion.

    An MIT team is now hoping to fortify coastlines with “architected” reefs — sustainable, offshore structures engineered to mimic the wave-buffering effects of natural reefs while also providing pockets for fish and other marine life.

    The team’s reef design centers on a cylindrical structure surrounded by four rudder-like slats. The engineers found that when this structure stands up against a wave, it efficiently breaks the wave into turbulent jets that ultimately dissipate most of the wave’s total energy. The team has calculated that the new design could dissipate as much wave energy as existing artificial reefs while using 10 times less material.

    The researchers plan to fabricate each cylindrical structure from sustainable cement, which they would mold in a pattern of “voxels” that could be automatically assembled, and would provide pockets for fish to explore and other marine life to settle in. The cylinders could be connected to form a long, semipermeable wall, which the engineers could erect along a coastline, about half a mile from shore. Based on the team’s initial experiments with lab-scale prototypes, the architected reef could reduce the energy of incoming waves by more than 95 percent.

    “This would be like a long wave-breaker,” says Michael Triantafyllou, the Henry L. and Grace Doherty Professor in Ocean Science and Engineering in the Department of Mechanical Engineering. “If waves are 6 meters high coming toward this reef structure, they would be ultimately less than a meter high on the other side. So, this kills the impact of the waves, which could prevent erosion and flooding.”
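    The height figures in the quote above map directly onto energy, since in linear wave theory wave energy scales with the square of wave height. A minimal sketch of that relationship (the function is illustrative, not from the study):

    ```python
    # Linear wave theory: wave energy scales with the square of wave height,
    # so knocking a wave down from 6 m to 1 m removes most of its energy.
    def energy_dissipated_fraction(h_in: float, h_out: float) -> float:
        """Fraction of wave energy removed, from incoming/outgoing heights."""
        return 1.0 - (h_out / h_in) ** 2

    print(energy_dissipated_fraction(6.0, 1.0))  # -> ~0.972, over 97 percent
    ```

    This quadratic scaling is why even a modest reduction in wave height translates into a large drop in the energy reaching the shore.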

    Details of the architected reef design are reported today in a study appearing in the open-access journal PNAS Nexus. Triantafyllou’s MIT co-authors are Edvard Ronglan SM ’23; graduate students Alfonso Parra Rubio, Jose del Aguila Ferrandis, and Erik Strand; research scientists Patricia Maria Stathatou and Carolina Bastidas; and Professor Neil Gershenfeld, director of the Center for Bits and Atoms; along with Alexis Oliveira Da Silva at the Polytechnic Institute of Paris, Dixia Fan of Westlake University, and Jeffrey Gair Jr. of Scinetics, Inc.

    Leveraging turbulence

    Some regions have already erected artificial reefs to protect their coastlines from encroaching storms. These structures are typically sunken ships, retired oil and gas platforms, and even assembled configurations of concrete, metal, tires, and stones. However, there’s variability in the types of artificial reefs that are currently in place, and no standard for engineering such structures. What’s more, the designs that are deployed tend to have a low wave dissipation per unit volume of material used. That is, it takes a huge amount of material to break enough wave energy to adequately protect coastal communities.

    The MIT team instead looked for ways to engineer an artificial reef that would efficiently dissipate wave energy with less material, while also providing a refuge for fish living along any vulnerable coast.

    “Remember, natural coral reefs are only found in tropical waters,” says Triantafyllou, who is director of the MIT Sea Grant. “We cannot have these reefs, for instance, in Massachusetts. But architected reefs don’t depend on temperature, so they can be placed in any water, to protect more coastal areas.”

    MIT researchers test the wave-breaking performance of two artificial reef structures in the MIT Towing Tank. Credit: Courtesy of the researchers

    The new effort is the result of a collaboration between researchers in MIT Sea Grant, who developed the reef structure’s hydrodynamic design, and researchers at the Center for Bits and Atoms (CBA), who worked to make the structure modular and easy to fabricate on location. The team’s architected reef design grew out of two seemingly unrelated problems. CBA researchers were developing ultralight cellular structures for the aerospace industry, while Sea Grant researchers were assessing the performance of blowout preventers in offshore oil structures — cylindrical valves that are used to seal off oil and gas wells and prevent them from leaking.

    The team’s tests showed that the structure’s cylindrical arrangement generated a high amount of drag. In other words, the structure appeared to be especially efficient in dissipating high-force flows of oil and gas. They wondered: Could the same arrangement dissipate another type of flow, in ocean waves?

    The researchers began to play with the general structure in simulations of water flow, tweaking its dimensions and adding certain elements to see whether and how waves changed as they crashed against each simulated design. This iterative process ultimately landed on an optimized geometry: a vertical cylinder flanked by four long slats, each attached to the cylinder in a way that leaves space for water to flow through the resulting structure. They found this setup essentially breaks up any incoming wave energy, causing parts of the wave-induced flow to spiral to the sides rather than crashing ahead.

    “We’re leveraging this turbulence and these powerful jets to ultimately dissipate wave energy,” Ferrandis says.

    Standing up to storms

    Once the researchers identified an optimal wave-dissipating structure, they fabricated a laboratory-scale version of an architected reef made from a series of the cylindrical structures, which they 3D-printed from plastic. Each test cylinder measured about 1 foot wide and 4 feet tall. They assembled a number of cylinders, each spaced about a foot apart, to form a fence-like structure, which they then lowered into a wave tank at MIT. They then generated waves of various heights and measured them before and after passing through the architected reef.

    “We saw the waves reduce substantially, as the reef destroyed their energy,” Triantafyllou says.

    The team has also looked into making the structures more porous, and friendly to fish. They found that, rather than making each structure from a solid slab of plastic, they could use a more affordable and sustainable type of cement.

    “We’ve worked with biologists to test the cement we intend to use, and it’s benign to fish, and ready to go,” he adds.

    They identified an ideal pattern of “voxels,” or microstructures, that cement could be molded into, in order to fabricate the reefs while creating pockets in which fish could live. This voxel geometry resembles individual egg cartons, stacked end to end, and appears to not affect the structure’s overall wave-dissipating power.

    “These voxels still maintain a big drag while allowing fish to move inside,” Ferrandis says.

    The team is currently fabricating cement voxel structures and assembling them into a lab-scale architected reef, which they will test under various wave conditions. They envision that the voxel design could be modular, and scalable to any desired size, and easy to transport and install in various offshore locations. “Now we’re simulating actual sea patterns, and testing how these models will perform when we eventually have to deploy them,” says Anjali Sinha, a graduate student at MIT who recently joined the group.

    Going forward, the team hopes to work with beach towns in Massachusetts to test the structures on a pilot scale.

    “These test structures would not be small,” Triantafyllou emphasizes. “They would be about a mile long, and about 5 meters tall, and would cost something like 6 million dollars per mile. So it’s not cheap. But it could prevent billions of dollars in storm damage. And with climate change, protecting the coasts will become a big issue.”

    This work was funded, in part, by the U.S. Defense Advanced Research Projects Agency.