More stories

  • Satellite-based method measures carbon in peat bogs

    Peat bogs in the tropics store vast amounts of carbon, but logging, plantations, road building, and other activities have destroyed large swaths of these ecosystems in places like Indonesia and Malaysia. Peat formations are essentially permanently flooded forestland, where dead leaves and branches accumulate because the water table prevents their decomposition.

    The pileup of organic material gives these formations a distinctive domed shape, somewhat raised in the center and tapering toward the edges. Determining how much carbon is contained in each formation has required laborious on-the-ground sampling, and so has been limited in its coverage.

    Now, researchers from MIT and Singapore have developed a mathematical analysis of how peat formations build and develop that makes it possible to evaluate their carbon content and dynamics mostly from simple elevation measurements, which can be carried out by satellites without ground-based sampling. This analysis, the team says, should enable more precise and accurate assessments of the amount of carbon that would be released by any proposed draining of peatlands — and, conversely, of how much carbon emissions could be avoided by protecting them.

    The research is being reported today in the journal Nature, in a paper by Alexander Cobb, a postdoc with the Singapore-MIT Alliance for Research and Technology (SMART); Charles Harvey, an MIT professor of civil and environmental engineering; and six others.

    Although it is the tropical peatlands that are at greatest risk — because they are the ones most often drained for timber harvesting or the creation of plantations for palm oil, acacia, and other crops — the new formulas the team derived apply to peatlands all over the globe, from Siberia to New Zealand. The formula requires just two inputs. The first is elevation data from a single transect of a given peat dome — that is, a series of elevation measurements along an arbitrary straight line cutting across from one edge of the formation to the other. The second input is a site-specific factor the team devised that relates to the type of peat bog involved and the internal structure of the formation, which together determine how much of the carbon within remains safely submerged in water, where it can’t be oxidized.
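
    The paper’s actual formulation isn’t reproduced here, but the kind of accounting it enables can be sketched in a few lines of Python. In the illustrative snippet below, the dome profile, the internal “safe” surface level, and the peat properties are all made-up assumptions standing in for the team’s site-specific factor; the point is only how a single transect, plus an assumption of radial symmetry, yields a carbon estimate.

    ```python
    import numpy as np

    # Illustrative only: a hypothetical transect across a radially symmetric peat
    # dome. The real method (Cobb et al.) uses a site-specific structural factor;
    # here we simply integrate elevation above an assumed internal "safe" surface.

    r = np.linspace(0.0, 3000.0, 301)          # distance from dome center (m)
    surface = 4.0 * (1.0 - (r / 3000.0) ** 2)  # made-up domed elevation profile (m)
    safe_level = 1.5                           # assumed drainage-limited level (m)

    vulnerable_thickness = np.clip(surface - safe_level, 0.0, None)

    # Volume of the vulnerable zone, assuming radial symmetry: V = integral of 2*pi*r*h(r) dr
    volume_m3 = np.trapz(2.0 * np.pi * r * vulnerable_thickness, r)

    bulk_density = 100.0    # assumed kg of dry peat per cubic meter
    carbon_fraction = 0.5   # assumed carbon mass fraction of dry peat

    carbon_tonnes = volume_m3 * bulk_density * carbon_fraction / 1000.0
    print(f"Vulnerable carbon: ~{carbon_tonnes:,.0f} tonnes")
    ```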

    “The saturation by water prevents oxygen from getting in, and if oxygen gets in, microbes breathe it and eat the peat and turn it into carbon dioxide,” Harvey explains.

    “There is an internal surface inside the peat dome below which the carbon is safe because it can’t be drained, because the bounding rivers and water bodies are such that it will keep saturated up to that level even if you cut canals and try to drain it,” he adds. In between the visible surface of the bog and this internal layer is the “vulnerable zone” of peat that can rapidly decompose and release its carbon compounds or become dry enough to promote fires that also release the carbon and pollute the air.

    Through years of on-the-ground sampling and testing, and detailed analysis comparing the ground data with satellite lidar data on surface elevations, the team was able to figure out a kind of universal mathematical formula that describes the structure of peat domes of all kinds and in all locations. They tested it by comparing their predicted results with field measurements from several widely distributed locations, including Alaska, Maine, Quebec, Estonia, Finland, Brunei, and New Zealand.

    These bogs contain carbon that has in many cases accumulated over thousands of years but can be released in just a few years when the bogs are drained. “If we could have policies to preserve these, it is a tremendous opportunity to reduce carbon fluxes to the atmosphere. This framework or model gives us the understanding, the intellectual framework, to figure out how to do that,” Harvey says.

    Many people assume that the biggest greenhouse gas emissions from cutting down these forested lands come from the decomposition of the trees themselves. “The misconception is that that’s the carbon that goes to the atmosphere,” Harvey says. “It’s actually a small amount, because the real fluxes to the atmosphere come from draining” the peat bogs. “Then, the much larger pool of carbon, which is underground beneath the forest, oxidizes and goes to the air, or catches fire and burns.”

    But there is hope, he says, that much of this drained peatland can still be restored before the stored carbon all gets released. First of all, he says, “you’ve got to stop draining it.” That can be accomplished by damming up the drainage canals. “That’s what’s good about this mathematical framework: You need to figure out how to do that, where to put your dams. There’s all sorts of interesting complexities. If you just dam up the canal, the water may flow around it. So, it’s a neat geometric and engineering project to figure out how to do this.”

    While much of the peatland in Southeast Asia has already been drained, the new analysis should enable much more accurate assessments of less-well-studied peatlands in places like the Amazon basin, New Guinea, and the Congo basin, which are also threatened by development.

    The new formulation should also help to make some carbon offset programs more reliable, because it is now possible to calculate accurately the carbon content of a given peatland. “It’s quantifiable, because the peat is 100 percent organic carbon. So, if you just measure the change in the surface going up or down, you can say with pretty good certainty how much carbon has been accumulated or lost, whereas if you go to a rainforest, it’s virtually impossible to calculate the amount of underground carbon, and it’s pretty hard to calculate what’s above ground too,” Harvey says. “But this is relatively easy to calculate with satellite measurements of elevation.”
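
    To put illustrative numbers on that (assumed round values, not figures from the study): a 1-centimeter drop in surface elevation across a 100-square-kilometer peatland represents about a million cubic meters of lost peat; at an assumed bulk density of roughly 100 kilograms of dry peat per cubic meter and about 50 percent carbon by mass, that is on the order of 50,000 tonnes of carbon released.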

    “We can turn the knob,” he says, “because we have this mathematical framework for how the hydrology, the water table position, affects the growth and loss of peat. We can design a scheme that will change emissions by X amount, for Y dollars.”

    The research team included Rene Dommain, Kimberly Yeap, and Cao Hannan at Nanyang Technological University in Singapore; Nathan Dadap at Stanford University; Bodo Bookhagen at the University of Potsdam, Germany; and Paul Glaser at the University of Minnesota. The work was supported by the National Research Foundation Singapore through the SMART program, by the U.S. National Science Foundation, and by Singapore’s Office for Space Technology and Industry.

  • A mineral produced by plate tectonics has a global cooling effect, study finds

    MIT geologists have found that a clay mineral on the seafloor, called smectite, has a surprisingly powerful ability to sequester carbon over millions of years.

    Under a microscope, a single grain of the clay resembles the folds of an accordion. These folds are known to be effective traps for organic carbon.

    Now, the MIT team has shown that the carbon-trapping clays are a product of plate tectonics: When oceanic crust grinds up against a continental plate, it can bring rocks to the surface that, over time, can weather into minerals including smectite. Eventually, the clay sediment settles back in the ocean, where the minerals trap bits of dead organisms in their microscopic folds. This keeps the organic carbon from being consumed by microbes and expelled back into the atmosphere as carbon dioxide.

    Over millions of years, smectite can have a global effect, helping to cool the entire planet. Through a series of analyses, the researchers showed that smectite was likely produced after several major tectonic events over the last 500 million years. During each tectonic event, the clays trapped enough carbon to cool the Earth and induce the subsequent ice age.

    The findings are the first to show that plate tectonics can trigger ice ages through the production of carbon-trapping smectite.

    These clays can be found in certain tectonically active regions today, and the scientists believe that smectite continues to sequester carbon, providing a natural, albeit slow-acting, buffer against humans’ climate-warming activities.

    “The influence of these unassuming clay minerals has wide-ranging implications for the habitability of planets,” says Joshua Murray, a graduate student in MIT’s Department of Earth, Atmospheric, and Planetary Sciences. “There may even be a modern application for these clays in offsetting some of the carbon that humanity has placed into the atmosphere.”

    Murray and Oliver Jagoutz, professor of geology at MIT, have published their findings today in Nature Geoscience.

    A clear and present clay

    The new study follows up on the team’s previous work, which showed that each of the Earth’s major ice ages was likely triggered by a tectonic event in the tropics. The researchers found that each of these tectonic events exposed ocean rocks called ophiolites to the atmosphere. They put forth the idea that, when a tectonic collision occurs in a tropical region, ophiolites can undergo certain weathering effects, such as exposure to wind, rain, and chemical interactions, that transform the rocks into various minerals, including clays.

    “Those clay minerals, depending on the kinds you create, influence the climate in different ways,” Murray explains.

    At the time, it was unclear which minerals could come out of this weathering effect, and whether and how these minerals could directly contribute to cooling the planet. So, while it appeared there was a link between plate tectonics and ice ages, the exact mechanism by which one could trigger the other was still in question.

    With the new study, the team looked to see whether their proposed tectonic tropical weathering process would produce carbon-trapping minerals, and in quantities that would be sufficient to trigger a global ice age.

    The team first looked through the geologic literature and compiled data on the ways in which major magmatic minerals weather over time, and on the types of clay minerals this weathering can produce. They then worked these measurements into a weathering simulation of different rock types that are known to be exposed in tectonic collisions.

    “Then we look at what happens to these rock types when they break down due to weathering and the influence of a tropical environment, and what minerals form as a result,” Jagoutz says.

    Next, they plugged each weathered, “end-product” mineral into a simulation of the Earth’s carbon cycle to see what effect a given mineral might have in interacting with either organic carbon, such as bits of dead organisms, or inorganic carbon, in the form of carbon dioxide in the atmosphere.

    From these analyses, one mineral had a clear presence and effect: smectite. Not only was the clay a naturally weathered product of tropical tectonics, it was also highly effective at trapping organic carbon. In theory, smectite seemed like a solid connection between tectonics and ice ages.

    But were enough of the clays actually present to trigger the previous four ice ages? Ideally, researchers should confirm this by finding smectite in ancient rock layers dating back to each global cooling period.

    “Unfortunately, as clays are buried by other sediments, they get cooked a bit, so we can’t measure them directly,” Murray says. “But we can look for their fingerprints.”

    A slow build

    The team reasoned that, because smectites are a product of ophiolites, clays weathered from these ocean rocks should also bear the rocks’ characteristic elements, such as nickel and chromium, which would be preserved in ancient sediments. If smectites were present in the past, nickel and chromium should be as well.

    To test this idea, the team looked through a database containing thousands of oceanic sedimentary rocks that were deposited over the last 500 million years. Over this time period, the Earth experienced four separate ice ages. Looking at rocks around each of these periods, the researchers observed large spikes of nickel and chromium, and inferred from this that smectite must also have been present.

    By their estimates, the clay mineral could have increased the preservation of organic carbon by less than one-tenth of a percent. In absolute terms, this is a minuscule amount. But over millions of years, they calculated that the clay’s accumulated, sequestered carbon was enough to trigger each of the four major ice ages.
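
    A back-of-the-envelope Python illustration of how such a tiny fraction compounds over geologic time (all values here are assumed round numbers, not the study’s figures):

    ```python
    # Illustrative only: assumed round numbers, not the study's values.
    burial_rate_gt_per_yr = 0.15      # assumed global organic-carbon burial, Gt C/yr
    extra_fraction = 0.001            # "less than one-tenth of a percent" boost
    duration_yr = 2_000_000           # a couple of million years of clay deposition

    extra_carbon_gt = burial_rate_gt_per_yr * extra_fraction * duration_yr
    atmosphere_gt = 600               # rough preindustrial atmospheric carbon, Gt C
    print(f"Extra buried carbon: ~{extra_carbon_gt:.0f} Gt "
          f"(~{extra_carbon_gt / atmosphere_gt:.0%} of the preindustrial atmosphere)")
    ```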

    “We found that you really don’t need much of this material to have a huge effect on the climate,” Jagoutz says.

    “These clays also have probably contributed some of the Earth’s cooling in the last 3 to 5 million years, before humans got involved,” Murray adds. “In the absence of humans, these clays are probably making a difference to the climate. It’s just such a slow process.”

    “Jagoutz and Murray’s work is a nice demonstration of how important it is to consider all biotic and physical components of the global carbon cycle,” says Lee Kump, a professor of geosciences at Penn State University, who was not involved with the study. “Feedbacks among all these components control atmospheric greenhouse gas concentrations on all time scales, from the annual rise and fall of atmospheric carbon dioxide levels to the swings from icehouse to greenhouse over millions of years.”

    Could smectites be harnessed intentionally to further bring down the world’s carbon emissions? Murray sees some potential, for instance to shore up carbon reservoirs such as regions of permafrost. Warming temperatures are predicted to melt permafrost and expose long-buried organic carbon. If smectites could be applied to these regions, the clays could prevent this exposed carbon from escaping into and further warming the atmosphere.

    “If you want to understand how nature works, you have to understand it on the mineral and grain scale,” Jagoutz says. “And this is also the way forward for us to find solutions for this climatic catastrophe. If you study these natural processes, there’s a good chance you will stumble on something that will be actually useful.”

    This research was funded, in part, by the National Science Foundation.

  • Ancient Amazonians intentionally created fertile “dark earth”

    The Amazon river basin is known for its immense and lush tropical forests, so one might assume that the Amazon’s land is equally rich. In fact, the soils underlying the forested vegetation, particularly in the hilly uplands, are surprisingly infertile. Much of the Amazon’s soil is acidic and low in nutrients, making it notoriously difficult to farm.

    But over the years, archaeologists have dug up mysteriously black and fertile patches of ancient soils in hundreds of sites across the Amazon. This “dark earth” has been found in and around human settlements dating back hundreds to thousands of years. And it has been a matter of some debate as to whether the super-rich soil was purposefully created or a coincidental byproduct of these ancient cultures.

    Now, a study led by researchers at MIT, the University of Florida, and institutions in Brazil aims to settle the debate over dark earth’s origins. The team has pieced together results from soil analyses, ethnographic observations, and interviews with modern Indigenous communities to show that dark earth was intentionally produced by ancient Amazonians as a way to improve the soil and sustain large and complex societies.

    “If you want to have large settlements, you need a nutritional base. But the soil in the Amazon is extensively leached of nutrients, and naturally poor for growing most crops,” says Taylor Perron, the Cecil and Ida Green Professor of Earth, Atmospheric and Planetary Sciences at MIT. “We argue here that people played a role in creating dark earth, and intentionally modified the ancient environment to make it a better place for human populations.”

    And as it turns out, dark earth contains huge amounts of stored carbon. As generations worked the soil, for instance by enriching it with scraps of food, charcoal, and waste, the earth accumulated the carbon-rich detritus and kept it locked up for hundreds to thousands of years. By purposely producing dark earth, then, early Amazonians may have also unintentionally created a powerful, carbon-sequestering soil.

    “The ancient Amazonians put a lot of carbon in the soil, and a lot of that is still there today,” says co-author Samuel Goldberg, who performed the data analysis as a graduate student at MIT and is now an assistant professor at the University of Miami. “That’s exactly what we want for climate change mitigation efforts. Maybe we could adapt some of their indigenous strategies on a larger scale, to lock up carbon in soil, in ways that we now know would stay there for a long time.”

    The team’s study appears today in Science Advances. Other authors include former MIT postdoc and lead author Morgan Schmidt, anthropologist Michael Heckenberger of the University of Florida, and collaborators from multiple institutions across Brazil.

    Modern intent

    In their current study, the team synthesized observations and data that Schmidt, Heckenberger, and others had previously gathered while working with Indigenous communities in the Amazon since the early 2000s, along with new data collected in 2018-19. The scientists focused their fieldwork in the Kuikuro Indigenous Territory in the Upper Xingu River basin in the southeastern Amazon. This region is home to modern Kuikuro villages as well as archaeological sites where the ancestors of the Kuikuro are thought to have lived. Over multiple visits to the region, Schmidt, then a graduate student at the University of Florida, was struck by the darker soil around some archaeological sites.

    “When I saw this dark earth and how fertile it was, and started digging into what was known about it, I found it was a mysterious thing — no one really knew where it came from,” he says.

    Schmidt and his colleagues began making observations of the modern Kuikuro’s practices for managing the soil. These practices include generating “middens” — piles of waste and food scraps, similar to compost heaps, that are maintained in certain locations around the center of a village. After some time, these waste piles decompose and mix with the soil to form a dark and fertile earth that residents then use to plant crops. The researchers also observed that Kuikuro farmers spread organic waste and ash on fields farther away, which also generates dark earth where they can then grow more crops.

    “We saw activities they did to modify the soil and increase the elements, like spreading ash on the ground, or spreading charcoal around the base of the tree, which were obviously intentional actions,” Schmidt says.

    In addition to these observations, they also conducted interviews with villagers to document the Kuikuro’s beliefs and practices relating to dark earth. In some of these interviews, villagers referred to dark earth as “eegepe,” and described their daily practices in creating and cultivating the rich soil to improve its agricultural potential.

    Based on these observations and interviews with the Kuikuro, it was clear that Indigenous communities today intentionally produce dark earth, through their practices to improve the soil. But could the dark earth found in nearby archaeological sites have been made through similar intentional practices?

    A bridge in soil

    In search of a connection, Schmidt joined Perron’s group as a postdoc at MIT. Together, he, Perron, and Goldberg carried out a meticulous analysis of soils in both archaeological and modern sites in the Upper Xingu region. They discovered similarities in dark earth’s spatial structure: Deposits of dark earth were found in a radial pattern, concentrating mostly in the center of both modern and ancient settlements, and stretching, like spokes of a wheel, out to the edges. Modern and ancient dark earth was also similar in composition, and was enriched in the same elements, such as carbon, phosphorus, and other nutrients.

    “These are all the elements that are in humans, animals, and plants, and they’re the ones that reduce the aluminum toxicity in soil, which is a notorious problem in the Amazon,” Schmidt says. “All these elements make the soil better for plant growth.”

    “The key bridge between the modern and ancient times is the soil,” Goldberg adds. “Because we see this correspondence between the two time periods, we can infer that these practices that we can observe and ask people about today, were also happening in the past.”

    In other words, the team was able to show for the first time that ancient Amazonians intentionally worked the soil, likely through practices similar to today’s, in order to grow enough crops to sustain large communities.

    Going a step further, the team calculated the amount of carbon in ancient dark earth. They combined their measurements of soil samples with maps of where dark earth has been found across several ancient settlements. Their estimates revealed that each ancient village contains several thousand tons of carbon that has been sequestered in the soil for hundreds of years as a result of Indigenous human activities.
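
    The bookkeeping behind such an estimate is simple; the sketch below uses assumed round values (the footprint, depth, and carbon content are hypothetical, not the study’s measurements) to show how a per-village figure of that order is reached.

    ```python
    # Illustrative carbon-stock estimate for one hypothetical village's dark earth.
    # All values are assumed round numbers, not the study's measurements.
    area_m2 = 200_000          # dark-earth footprint, e.g. ~20 hectares
    depth_m = 0.5              # enriched soil depth
    bulk_density = 1000.0      # kg of soil per cubic meter
    carbon_fraction = 0.02     # 2% carbon by mass (dark earth is carbon-enriched)

    carbon_tonnes = area_m2 * depth_m * bulk_density * carbon_fraction / 1000.0
    print(f"~{carbon_tonnes:,.0f} tonnes of carbon")  # -> ~2,000 tonnes
    ```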

    As the team concludes in their paper, “modern sustainable agriculture and climate change mitigation efforts, inspired by the persistent fertility of ancient dark earth, can draw on traditional methods practiced to this day by Indigenous Amazonians.”

    This research at MIT was supported, in part, by the Abdul Latif Jameel Water and Food Systems Lab and the Department of the Air Force Artificial Intelligence Accelerator. Field research was supported by grants to the University of Florida from the National Science Foundation, the Wenner-Gren Foundation and the William Talbott Hillman Foundation, and was sponsored in Brazil by the Museu Goeldi and Museu Nacional.

  • Device offers long-distance, low-power underwater communication

    MIT researchers have demonstrated the first system for ultra-low-power underwater networking and communication, which can transmit signals across kilometer-scale distances.

    This technique, which the researchers began developing several years ago, uses about one-millionth the power that existing underwater communication methods use. By expanding their battery-free system’s communication range, the researchers have made the technology more feasible for applications such as aquaculture, coastal hurricane prediction, and climate change modeling.

    “What started as a very exciting intellectual idea a few years ago — underwater communication with a million times lower power — is now practical and realistic. There are still a few interesting technical challenges to address, but there is a clear path from where we are now to deployment,” says Fadel Adib, associate professor in the Department of Electrical Engineering and Computer Science and director of the Signal Kinetics group in the MIT Media Lab.

    Underwater backscatter enables low-power communication by encoding data in sound waves that a device reflects, or scatters, back toward a receiver. The team’s new innovations enable these reflected signals to be more precisely directed at their source.

    Due to this “retrodirectivity,” less signal scatters in the wrong directions, allowing for more efficient and longer-range communication.

    When tested in a river and an ocean, the retrodirective device exhibited a communication range that was more than 15 times farther than previous devices. However, the experiments were limited by the length of the docks available to the researchers.

    To better understand the limits of underwater backscatter, the team also developed an analytical model to predict the technology’s maximum range. The model, which they validated using experimental data, showed that their retrodirective system could communicate across kilometer-scale distances.

    The researchers shared these findings in two papers which will be presented at this year’s ACM SIGCOMM and MobiCom conferences. Adib, senior author on both papers, is joined on the SIGCOMM paper by co-lead authors Aline Eid, a former postdoc who is now an assistant professor at the University of Michigan, and Jack Rademacher, a research assistant; as well as research assistants Waleed Akbar and Purui Wang, and postdoc Ahmed Allam. The MobiCom paper is also written by co-lead authors Akbar and Allam.

    Communicating with sound waves

    Underwater backscatter communication devices utilize an array of nodes made from “piezoelectric” materials to receive and reflect sound waves. These materials produce an electric signal when mechanical force is applied to them.

    When sound waves strike the nodes, they vibrate and convert the mechanical energy to an electric charge. The nodes use that charge to scatter some of the acoustic energy back to the source, transmitting data that a receiver decodes based on the sequence of reflections.

    But because the backscattered signal travels in all directions, only a small fraction reaches the source, reducing the signal strength and limiting the communication range.

    To overcome this challenge, the researchers leveraged a 70-year-old radio device called a Van Atta array, in which symmetric pairs of antennas are connected in such a way that the array reflects energy back in the direction it came from.

    But connecting piezoelectric nodes to make a Van Atta array reduces their efficiency. The researchers avoided this problem by placing a transformer between pairs of connected nodes. The transformer, which transfers electric energy from one circuit to another, allows the nodes to reflect the maximum amount of energy back to the source.

    “Both nodes are receiving and both nodes are reflecting, so it is a very interesting system. As you increase the number of elements in that system, you build an array that allows you to achieve much longer communication ranges,” Eid explains.

    In addition, they used a technique called cross-polarity switching to encode binary data in the reflected signal. Each node has a positive and a negative terminal (like a car battery), so when the positive terminals of two nodes are connected and the negative terminals of two nodes are connected, that reflected signal is a “bit one.”

    But if the researchers switch the polarity, and the negative and positive terminals are connected to each other instead, then the reflection is a “bit zero.”

    “Just connecting the piezoelectric nodes together is not enough. By alternating the polarities between the two nodes, we are able to transmit data back to the remote receiver,” Rademacher explains.
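
    A minimal simulation of that encoding scheme, under a hypothetical signal model (not the team’s implementation), makes the idea concrete: polarity +1 stands in for a “bit one,” polarity -1 for a “bit zero,” and the receiver recovers the bits by correlating each interval of the noisy echo against a reference carrier.

    ```python
    import numpy as np

    # Hypothetical sketch of cross-polarity switching, not the team's implementation.
    # Polarity +1 encodes bit 1; polarity -1 encodes bit 0. The receiver recovers
    # bits from the sign (phase) of the reflected carrier.

    rng = np.random.default_rng(0)
    fc, fs, spb = 20e3, 200e3, 50            # carrier Hz, sample rate Hz, samples/bit
    bits = rng.integers(0, 2, size=16)

    t = np.arange(len(bits) * spb) / fs
    carrier = np.sin(2 * np.pi * fc * t)
    polarity = np.repeat(np.where(bits == 1, 1.0, -1.0), spb)
    reflected = polarity * carrier + 0.3 * rng.standard_normal(t.size)  # noisy echo

    # Coherent detection: correlate each bit interval with the reference carrier.
    ref = carrier.reshape(len(bits), spb)
    rx = reflected.reshape(len(bits), spb)
    decoded = ((ref * rx).sum(axis=1) > 0).astype(int)

    print("sent:   ", bits.tolist())
    print("decoded:", decoded.tolist())
    assert np.array_equal(bits, decoded)
    ```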

    When building the Van Atta array, the researchers found that if the connected nodes were too close, they would block each other’s signals. They devised a new design with staggered nodes that enables signals to reach the array from any direction. With this scalable design, the more nodes an array has, the greater its communication range.

    They tested the array in more than 1,500 experimental trials in the Charles River in Cambridge, Massachusetts, and in the Atlantic Ocean, off the coast of Falmouth, Massachusetts, in collaboration with the Woods Hole Oceanographic Institution. The device achieved communication ranges of 300 meters, more than 15 times longer than they previously demonstrated.

    However, they had to cut the experiments short because they ran out of space on the dock.

    Modeling the maximum

    That inspired the researchers to build an analytical model to determine the theoretical and practical communication limits of this new underwater backscatter technology.

    Building off their group’s work on RFIDs, the team carefully crafted a model that captured the impact of system parameters, like the size of the piezoelectric nodes and the input power of the signal, on the underwater operation range of the device.

    “It is not a traditional communication technology, so you need to understand how you can quantify the reflection. What are the roles of the different components in that process?” Akbar says.

    For instance, the researchers needed to derive a function that captures the amount of signal reflected out of an underwater piezoelectric node with a specific size, which was among the biggest challenges of developing the model, he adds.

    They used these insights to create a plug-and-play model into which a user could enter information like input power and piezoelectric node dimensions, and receive an output that shows the expected range of the system.

    They evaluated the model on data from their experimental trials and found that it could accurately predict the range of retrodirected acoustic signals with an average error of less than one decibel.

    Using this model, they showed that an underwater backscatter array can potentially achieve kilometer-long communication ranges.
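
    The analytical model itself isn’t reproduced in this article, but a toy link-budget version shows the structure: choose a propagation-loss model, then find the largest range at which the round-trip backscattered signal still clears a detection threshold. In the Python sketch below, every parameter value and the backscatter gain are assumptions for illustration; only the Thorp absorption formula is a standard empirical fit for seawater.

    ```python
    import numpy as np

    # Toy link-budget model for backscatter range (an illustrative stand-in for the
    # team's analytical model; all parameter values here are assumptions).

    def one_way_loss_db(r_m, f_khz=20.0):
        """Spherical spreading plus Thorp absorption (standard empirical formula)."""
        f2 = f_khz ** 2
        alpha = 0.11 * f2 / (1 + f2) + 44 * f2 / (4100 + f2) + 2.75e-4 * f2 + 0.003
        return 20 * np.log10(max(r_m, 1.0)) + alpha * r_m / 1000.0  # dB

    def max_range(source_level_db=190.0, backscatter_gain_db=-30.0,
                  noise_floor_db=50.0, required_snr_db=10.0):
        """Largest range where the round-trip backscattered signal clears the SNR bar."""
        for r in np.arange(10.0, 5000.0, 10.0):
            received = source_level_db + backscatter_gain_db - 2 * one_way_loss_db(r)
            if received - noise_floor_db < required_snr_db:
                return r - 10.0
        return 5000.0

    print(f"Predicted max range: ~{max_range():.0f} m")
    ```

    With these made-up numbers the model returns a range of a few hundred meters, the same order as the dock-limited experiments; more favorable assumptions push the answer toward kilometer scale.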

    “We are creating a new ocean technology and propelling it into the realm of the things we have been doing for 6G cellular networks. For us, it is very rewarding because we are starting to see this now very close to reality,” Adib says.

    The researchers plan to continue studying underwater backscatter Van Atta arrays, perhaps using boats so they could evaluate longer communication ranges. Along the way, they intend to release tools and datasets so other researchers can build on their work. At the same time, they are beginning to move toward commercialization of this technology.

    “Limited range has been an open problem in underwater backscatter networks, preventing them from being used in real-world applications. This paper takes a significant step forward in the future of underwater communication, by enabling them to operate on minimum energy while achieving long range,” says Omid Abari, assistant professor of computer science at the University of California at Los Angeles, who was not involved with this work. “The paper is the first to bring Van Atta Reflector array technique into underwater backscatter settings and demonstrate its benefits in improving the communication range by orders of magnitude. This can take battery-free underwater communication one step closer to reality, enabling applications such as underwater climate change monitoring and coastal monitoring.”

    This research was funded, in part, by the Office of Naval Research, the Sloan Research Fellowship, the National Science Foundation, the MIT Media Lab, and the Doherty Chair in Ocean Utilization.

  • Exploring the bow shock and beyond

    For most people, the night sky conjures a sense of stillness, an occasional shooting star the only visible movement. A conversation with Rishabh Datta, however, unveils the supersonic drama crashing above planet Earth. The PhD candidate has focused his recent study on the plasma speeding through space, flung from sources like the sun’s corona and headed toward Earth, halted abruptly by colliding with the planet’s magnetosphere. The resulting shock wave is similar to the “bow shock” that forms around the nose cone of a supersonic jet, which manifests as the familiar sonic boom.

    The bow shock phenomenon has been well studied. “It’s probably one of the things that’s keeping life alive,” says Datta, “protecting us from the solar wind.” While he feels the magnetosphere provides “a very interesting space laboratory,” Datta’s main focus is, “Can we create this high-energy plasma that is moving supersonically in a laboratory, and can we study it? And can we learn things that are hard to diagnose in an astrophysical plasma?”

    Datta’s research journey to the bow shock and beyond began when he joined a research program for high school students at the National University of Singapore. Tasked with culturing bacteria and measuring the amount of methane they produced in a biogas tank, Datta found his first research experience “quite nasty.”

    “I was working with chicken manure, and every day I would come home smelling completely awful,” he says.

    As an undergraduate at Georgia Tech, Datta turned his interest toward solar power, compelled by a new technology he felt could generate sustainable energy. By the time he joined MIT’s Department of Mechanical Engineering, though, his interests had morphed into researching the heat and mass transfer from airborne droplets. After a year of study, he felt the need to go in yet another direction.

    The subject of astrophysical plasmas had recently piqued his interest, and he followed his curiosity to Department of Nuclear Science and Engineering Professor Nuno Loureiro’s introductory plasma class. There he encountered Professor Jack Hare, who was sitting in on the class and looking for students to work with him.

    “And that’s how I ended up doing plasma physics and studying bow shocks,” he says, “a long and circuitous route that started with culturing bacteria.”

    Gathering measurements from MAGPIE

    Datta is interested in what he can learn about plasma from gathering measurements of a laboratory-created bow shock, seeking to verify theoretical models. He uses data already collected from experiments on a pulsed-power generator known as MAGPIE (the Mega Ampere Generator for Plasma Implosion Experiments), located at Imperial College London. By observing how long it takes a plasma to reach an obstacle — in this case, a probe that measures magnetic fields — Datta was able to determine its velocity.

    With the velocity established, an interferometry system was able to provide images of the probe and the plasma around it, allowing Datta to characterize the structure of the bow shock.

    “The shape depends on how fast sound waves can travel in a plasma,” says Datta. “And this ‘sound speed’ depends on the temperature.”

    The interdependency of these characteristics means that by imaging a shock it’s possible to determine temperature, sound speed, and other quantities more easily and cheaply than with other methods.
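
    As a rough illustration of that link (standard gas-dynamics relations, not the specifics of Datta’s analysis), the far-field half-angle μ of the shock cone and the plasma sound speed obey:

    ```latex
    \sin\mu = \frac{1}{M} = \frac{c_s}{v}, \qquad c_s \approx \sqrt{\gamma k_B T / m_i}
    ```

    So a measured flow velocity v plus an imaged opening angle pins down the sound speed, and with it the temperature.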

    “And knowing more about your plasma allows you to make predictions about, for example, electrical resistivity, which can be important for understanding other physics that might interest you,” says Datta, “like magnetic reconnection.”

    This phenomenon, which controls the evolution of such violent events as solar flares, coronal mass ejections, magnetic storms that drive auroras, and even disruptions in fusion tokamaks, has become the focus of his recent research. It happens when opposing magnetic fields in a plasma break and then reconnect, generating vast quantities of heat and accelerating the plasma to high velocities.

    Onward to Z

    Datta travels to Sandia National Laboratories in Albuquerque, New Mexico, to work on the largest pulsed power facility in the world, informally known as “the Z machine,” to research how the properties of magnetic reconnection change when a plasma emits strong radiation and cools rapidly.

    In future years, Datta will only have to travel across Albany Street on the MIT campus to work on yet another machine, PUFFIN, currently being built at the Plasma Science and Fusion Center (PSFC). Like MAGPIE and Z, PUFFIN is a pulsed power facility, but with the ability to drive the current 10 times longer than other machines, opening up new opportunities in high-energy-density laboratory astrophysics.

    Hare, who leads the PUFFIN team, is pleased with Datta’s increasing experience.

    “Working with Rishabh is a real pleasure,” he says. “He has quickly learned the ins and outs of experimental plasma physics, often analyzing data from machines he hasn’t even yet had the chance to see! While we build PUFFIN it’s really useful for us to carry out experiments at other pulsed-power facilities worldwide, and Rishabh has already written papers on results from MAGPIE, COBRA at Cornell in Ithaca, New York, and the Z Machine.”

    Pursuing climate action at MIT

    Hand-in-hand with Datta’s quest to understand plasma is his pursuit of sustainability, including carbon-free energy solutions. A member of the Graduate Student Council’s Sustainability Committee since he arrived in 2019, he was heartened when MIT, revising its climate action plan, gave him and other students the chance to be involved in decision-making. He led focus groups to provide graduate student input on the plan, raising issues surrounding campus decarbonization, the need to expand hiring of early-career researchers working on climate and sustainability, and waste reduction and management for MIT laboratories.

    When not focused on bringing astrophysics to the laboratory, Datta sometimes experiments in a lab closer to home — the kitchen — where he often challenges himself to duplicate a recipe he has recently tried at a favorite restaurant. His stated ambition could apply to his sustainability work as well as to his pursuit of understanding plasma.

    “The goal is to try and make it better,” he says. “I try my best to get there.”

    Datta’s work has been funded, in part, by the National Science Foundation, the National Nuclear Security Administration, and the Department of Energy.

  • Tackling counterfeit seeds with “unclonable” labels

    Average crop yields in Africa are consistently far below those expected, and one significant reason is the prevalence of counterfeit seeds whose germination rates are far lower than those of the genuine ones. The World Bank estimates that as much as half of all seeds sold in some African countries are fake, which could help to account for crop production that is far below potential.

    There have been many attempts to prevent this counterfeiting through tracking labels, but none have proved effective; among other issues, such labels have been vulnerable to hacking because of the deterministic nature of their encoding systems. But now, a team of MIT researchers has come up with a kind of tiny, biodegradable tag that can be applied directly to the seeds themselves, and that provides a unique randomly created code that cannot be duplicated.

    The new system, which uses minuscule dots of silk-based material, each containing a unique combination of different chemical signatures, is described today in the journal Science Advances in a paper by MIT’s dean of engineering Anantha Chandrakasan, professor of civil and environmental engineering Benedetto Marelli, postdoc Hui Sun, and graduate student Saurav Maji.

    The problem of counterfeiting is an enormous one globally, the researchers point out, affecting everything from drugs to luxury goods, and many different systems have been developed to try to combat this. But there has been less attention to the problem in the area of agriculture, even though the consequences can be severe. In sub-Saharan Africa, for example, the World Bank estimates that counterfeit seeds are a significant factor in crop yields that average less than one-fifth of the potential for maize, and less than one-third for rice.

    Marelli explains that a key to the new system is creating a randomly produced physical object whose exact composition is virtually impossible to duplicate. The labels they create “leverage randomness and uncertainty in the process of application, to generate unique signature features that can be read, and that cannot be replicated,” he says.

    What they’re dealing with, Sun adds, “is the very old job of trying, basically, not to get your stuff stolen. And you can try as much as you can, but eventually somebody is always smart enough to figure out how to do it, so nothing is really unbreakable. But the idea is, it’s almost impossible, if not impossible, to replicate it, or it takes so much effort that it’s not worth it anymore.”

    The idea of an “unclonable” code was originally developed as a way of protecting the authenticity of computer chips, explains Chandrakasan, who is the Vannevar Bush Professor of Electrical Engineering and Computer Science. “In integrated circuits, individual transistors have slightly different properties coined device variations,” he explains, “and you could then use that variability and combine that variability with higher-level circuits to create a unique ID for the device. And once you have that, then you can use that unique ID as a part of a security protocol. Something like transistor variability is hard to replicate from device to device, so that’s what gives it its uniqueness, versus storing a particular fixed ID.” The concept is based on what are known as physically unclonable functions, or PUFs.

    The team decided to try to apply that PUF principle to the problem of fake seeds, and the use of silk proteins was a natural choice because the material is not only harmless to the environment but also classified by the Food and Drug Administration in the “generally recognized as safe” category, so it requires no special approval for use on food products.

    “You could coat it on top of seeds,” Maji says, “and if you synthesize silk in a certain way, it will also have natural random variations. So that’s the idea, that every seed or every bag could have a unique signature.”

    Developing effective secure-system solutions has long been one of Chandrakasan’s specialties, while Marelli has spent many years developing systems for applying silk coatings to a variety of fruits, vegetables, and seeds, so their collaboration was a natural fit for developing a silk-based coding system for enhanced security.

    “The challenge was what type of form factor to give to silk,” Sun says, “so that it can be fabricated very easily.” They developed a simple drop-casting approach that produces tags that are less than one-tenth of an inch in diameter. The second challenge was to develop “a way where we can read the uniqueness, in also a very high throughput and easy way.”

    For the unique silk-based codes, Marelli says, “eventually we found a way to add a color to these microparticles so that they assemble in random structures.” The resulting unique patterns can be read out not only by a spectrograph or a portable microscope, but even by an ordinary cellphone camera with a macro lens. This image can be processed locally to generate the PUF code and then sent to the cloud and compared with a secure database to ensure the authenticity of the product. “It’s random so that people cannot easily replicate it,” says Sun. “People cannot predict it without measuring it.”

    And the number of possible permutations that could result from the way they mix four basic types of colored silk nanoparticles is astronomical. “We were able to show that with a minimal amount of silk, we were able to generate 128 random bits of security,” Maji says. “So this gives rise to 2 to the power 128 possible combinations, which is extremely difficult to crack given the computational capabilities of the state-of-the-art computing systems.”
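
    To make the enrollment-and-verification loop concrete, here is a minimal hypothetical sketch (not the paper’s algorithm): binarize features of the imaged particle pattern into a 128-bit code, then authenticate a later reading by its Hamming distance from the enrolled code.

    ```python
    import numpy as np

    # Hypothetical PUF readout sketch, not the paper's algorithm: treat the imaged
    # particle pattern as a grid of intensity features, binarize against the median
    # to get a 128-bit code, and authenticate by Hamming distance.

    rng = np.random.default_rng(42)

    def read_code(pattern, n_bits=128):
        """Binarize the first n_bits features against their median."""
        features = pattern.ravel()[:n_bits]
        return (features > np.median(features)).astype(np.uint8)

    def hamming(a, b):
        return int(np.count_nonzero(a != b))

    enrolled_pattern = rng.random((16, 16))          # stand-in for the tag image
    enrolled_code = read_code(enrolled_pattern)

    # Re-reading the same tag: same pattern plus small measurement noise.
    reread = enrolled_pattern + 0.02 * rng.standard_normal((16, 16))
    # A counterfeit tag: an independent random pattern.
    fake = rng.random((16, 16))

    print("genuine distance:", hamming(enrolled_code, read_code(reread)))   # small
    print("fake distance:   ", hamming(enrolled_code, read_code(fake)))     # ~64
    threshold = 20
    print("genuine accepted:", hamming(enrolled_code, read_code(reread)) <= threshold)
    ```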

    Marelli says that “for us, it’s a good test bed in order to think out-of-the-box, and how we can have a path that somehow is more democratic.” In this case, that means “something that you can literally read with your phone, and you can fabricate by simply drop casting a solution, without using any advanced manufacturing technique, without going in a clean room.”

    Some additional work will be needed to make this a practical commercial product, Chandrakasan says. “There will have to be a development for at-scale reading” via smartphones. “So, that’s clearly a future opportunity.” But the principle now shows a clear path to the day when “a farmer could at least, maybe not every seed, but could maybe take some random seeds in a particular batch and verify them,” he says.

    The research was partially supported by the U.S. Office of Naval Research and the National Science Foundation, Analog Devices Inc., an EECS MathWorks fellowship, and a Paul M. Cook Career Development Professorship.

  • MIT-led teams win National Science Foundation grants to research sustainable materials

    Three MIT-led teams are among 16 nationwide to receive funding awards to address sustainable materials for global challenges through the National Science Foundation’s Convergence Accelerator program. Launched in 2019, the program targets solutions to especially compelling societal or scientific challenges at an accelerated pace, by incorporating a multidisciplinary research approach.

    “Solutions for today’s national-scale societal challenges are hard to solve within a single discipline. Instead, these challenges require convergence to merge ideas, approaches, and technologies from a wide range of diverse sectors, disciplines, and experts,” the NSF explains in its description of the Convergence Accelerator program. Phase 1 of the award involves planning to expand initial concepts, identify new team members, participate in an NSF development curriculum, and create an early prototype.

    Sustainable microchips

    One of the funded projects, “Building a Sustainable, Innovative Ecosystem for Microchip Manufacturing,” will be led by Anuradha Murthy Agarwal, a principal research scientist at the MIT Materials Research Laboratory. The aim of this project is to help transition the manufacturing of microchips to more sustainable processes that, for example, can reduce e-waste landfills by allowing repair of chips, or enable users to swap out a rogue chip in a motherboard rather than tossing out the entire laptop or cellphone.

    “Our goal is to help transition microchip manufacturing towards a sustainable industry,” says Agarwal. “We aim to do that by partnering with industry in a multimodal approach that prototypes technology designs to minimize energy consumption and waste generation, retrains the semiconductor workforce, and creates a roadmap for a new industrial ecology to mitigate materials-critical limitations and supply-chain constraints.”

    Agarwal’s co-principal investigators are Samuel Serna, an MIT visiting professor and assistant professor of physics at Bridgewater State University, and two MIT faculty affiliated with the Materials Research Laboratory: Juejun Hu, the John Elliott Professor of Materials Science and Engineering; and Lionel Kimerling, the Thomas Lord Professor of Materials Science and Engineering.

    The training component of the project will also create curricula for multiple audiences. “At Bridgewater State University, we will create a new undergraduate course on microchip manufacturing sustainability, and eventually adapt it for audiences from K-12, as well as incumbent employees,” says Serna.

    Sajan Saini and Erik Verlage of the MIT Department of Materials Science and Engineering (DMSE), and Randolph Kirchain from the MIT Materials Systems Laboratory, who have led MIT initiatives in virtual reality digital education, materials criticality, and roadmapping, are key contributors. The project also includes DMSE graduate students Drew Weninger and Luigi Ranno, and undergraduate Samuel Bechtold from Bridgewater State University’s Department of Physics.

    Sustainable topological materials

    Under the direction of Mingda Li, the Class of 1947 Career Development Professor and an Associate Professor of Nuclear Science and Engineering, the “Sustainable Topological Energy Materials (STEM) for Energy-efficient Applications” project will accelerate research in sustainable topological quantum materials.

    Topological materials are ones that retain a particular property through all external disturbances. Such materials could potentially be a boon for quantum computing, which has so far been plagued by instability, and would usher in a post-silicon era for microelectronics. Even better, says Li, topological materials can do their job without dissipating energy even at room temperatures.

    Topological materials can find a variety of applications in quantum computing, energy harvesting, and microelectronics. Despite their promise, and a few thousand potential candidates, discovery and mass production of these materials have been challenging. Topology itself is not a measurable characteristic, so researchers have to first develop ways to find hints of it. Synthesis of materials and related process optimization can take months, if not years, Li adds. Machine learning can accelerate the discovery and vetting stage.

    Given that a best-in-class topological quantum material has the potential to disrupt the semiconductor and computing industries, Li and team are paying special attention to the environmental sustainability of prospective materials. For example, some potential candidates that include gold, lead, or cadmium have been disqualified because their scarcity or toxicity does not lend itself to mass production.

    Co-principal investigators on the project include Liang Fu, associate professor of physics at MIT; Tomas Palacios, professor of electrical engineering and computer science at MIT and director of the Microsystems Technology Laboratories; Susanne Stemmer of the University of California at Santa Barbara; and Qiong Ma of Boston College. The $750,000 one-year Phase 1 grant will focus on three priorities: building a topological materials database; identifying the most environmentally sustainable candidates for energy-efficient topological applications; and building the foundation for a Center for Sustainable Topological Energy Materials at MIT that will encourage industry-academia collaborations.

    At a time when the size of silicon-based electronic circuit boards is reaching its lower limit, the promise of topological materials whose conductivity increases with decreasing size is especially attractive, Li says. In addition, topological materials can harvest wasted heat: Imagine using your body heat to power your phone. “There are different types of application scenarios, and we can go much beyond the capabilities of existing materials,” Li says. “The possibilities of topological materials are endlessly exciting.”

    Socioresilient materials design

    Researchers in the MIT Department of Materials Science and Engineering (DMSE) have been awarded $750,000 for a cross-disciplinary project that aims to fundamentally redirect materials research and development toward more environmentally, socially, and economically sustainable and resilient materials. This “socioresilient materials design” will serve as the foundation for a new research and development framework that takes technical, environmental, and social factors into account from the beginning of the materials design and development process.

    Christine Ortiz, the Morris Cohen Professor of Materials Science and Engineering, and Ellan Spero PhD ’14, an instructor in DMSE, are leading this research effort, which includes Cornell University, Swansea University, Citrine Informatics, Station1, and 14 other organizations in academia, industry, venture capital, the social sector, government, and philanthropy.

    The team’s project, “Mind Over Matter: Socioresilient Materials Design,” emphasizes that circular design approaches, which aim to minimize waste and maximize the reuse, repair, and recycling of materials, are often insufficient to address negative repercussions for the planet and for human health and safety.

    Too often society understands the unintended negative consequences long after the materials that make up our homes and cities and systems have been in production and use for many years. Examples include disparate and negative public health impacts due to industrial scale manufacturing of materials, water and air contamination with harmful materials, and increased risk of fire in lower-income housing buildings due to flawed materials usage and design. Adverse climate events including drought, flood, extreme temperatures, and hurricanes have accelerated materials degradation, for example in critical infrastructure, leading to amplified environmental damage and social injustice. While classical materials design and selection approaches are insufficient to address these challenges, the new research project aims to do just that.

    “The imagination and technical expertise that goes into materials design is too often separated from the environmental and social realities of extraction, manufacturing, and end-of-life for materials,” says Ortiz. 

    Drawing on materials science and engineering, chemistry, and computer science, the project will develop a framework for materials design and development. It will incorporate powerful computational capabilities — artificial intelligence and machine learning with physics-based materials models — plus rigorous methodologies from the social sciences and the humanities to understand what impacts any new material put into production could have on society.

  • Study: Smoke particles from wildfires can erode the ozone layer

    A wildfire can pump smoke up into the stratosphere, where the particles drift for over a year. A new MIT study has found that while suspended there, these particles can trigger chemical reactions that erode the protective ozone layer shielding the Earth from the sun’s damaging ultraviolet radiation.

    The study, which appears today in Nature, focuses on the smoke from the “Black Summer” megafire in eastern Australia, which burned from December 2019 into January 2020. The fires — the country’s most devastating on record — scorched tens of millions of acres and pumped more than 1 million tons of smoke into the atmosphere.

    The MIT team identified a new chemical reaction by which smoke particles from the Australian wildfires made ozone depletion worse. By triggering this reaction, the fires likely contributed to a 3-5 percent depletion of total ozone at mid-latitudes in the Southern Hemisphere, in regions overlying Australia, New Zealand, and parts of Africa and South America.

    The researchers’ model also indicates the fires had an effect in the polar regions, eating away at the edges of the ozone hole over Antarctica. By late 2020, smoke particles from the Australian wildfires widened the Antarctic ozone hole by 2.5 million square kilometers — 10 percent of its area compared to the previous year.

    It’s unclear what long-term effect wildfires will have on ozone recovery. The United Nations recently reported that the ozone hole, and ozone depletion around the world, is on a recovery track, thanks to a sustained international effort to phase out ozone-depleting chemicals. But the MIT study suggests that as long as these chemicals persist in the atmosphere, large fires could spark a reaction that temporarily depletes ozone.

    “The Australian fires of 2020 were really a wake-up call for the science community,” says Susan Solomon, the Lee and Geraldine Martin Professor of Environmental Studies at MIT and a leading climate scientist who first identified the chemicals responsible for the Antarctic ozone hole. “The effect of wildfires was not previously accounted for in [projections of] ozone recovery. And I think that effect may depend on whether fires become more frequent and intense as the planet warms.”

    The study is led by Solomon and MIT research scientist Kane Stone, along with collaborators from the Institute for Environmental and Climate Research in Guangzhou, China; the U.S. National Oceanic and Atmospheric Administration; the U.S. National Center for Atmospheric Research; and Colorado State University.

    Chlorine cascade

    The new study expands on a 2022 discovery by Solomon and her colleagues, in which they first identified a chemical link between wildfires and ozone depletion. The researchers found that chlorine-containing compounds, originally emitted by factories in the form of chlorofluorocarbons (CFCs), could react with the surface of fire aerosols. This interaction, they found, set off a chemical cascade that produced chlorine monoxide — the ultimate ozone-depleting molecule. Their results showed that the Australian wildfires likely depleted ozone through this newly identified chemical reaction.

    “But that didn’t explain all the changes that were observed in the stratosphere,” Solomon says. “There was a whole bunch of chlorine-related chemistry that was totally out of whack.”

    In the new study, the team took a closer look at the composition of molecules in the stratosphere following the Australian wildfires. They combed through three independent sets of satellite data and observed that in the months following the fires, concentrations of hydrochloric acid dropped significantly at mid-latitudes, while chlorine monoxide spiked.

    Hydrochloric acid (HCl) is present in the stratosphere as CFCs break down naturally over time. As long as chlorine is bound in the form of HCl, it doesn’t have a chance to destroy ozone. But if HCl breaks apart, the freed chlorine can react with ozone to form ozone-depleting chlorine monoxide.
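
    The catalytic cycle behind that concern is textbook stratospheric chemistry (not new to this study); once freed from HCl, a single chlorine atom can destroy many ozone molecules:

    ```latex
    \mathrm{Cl} + \mathrm{O_3} \rightarrow \mathrm{ClO} + \mathrm{O_2}
    \mathrm{ClO} + \mathrm{O} \rightarrow \mathrm{Cl} + \mathrm{O_2}
    \textrm{net:}\quad \mathrm{O_3} + \mathrm{O} \rightarrow 2\,\mathrm{O_2}
    ```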

    In the polar regions, HCl can break apart when it interacts with the surface of cloud particles at frigid temperatures of about 155 kelvins. However, this reaction was not expected to occur at mid-latitudes, where temperatures are much warmer.

    “The fact that HCl at mid-latitudes dropped by this unprecedented amount was to me kind of a danger signal,” Solomon says.

    She wondered: What if HCl could also interact with smoke particles, at warmer temperatures and in a way that released chlorine to destroy ozone? If such a reaction was possible, it would explain the imbalance of molecules and much of the ozone depletion observed following the Australian wildfires.

    Smoky drift

    Solomon and her colleagues dug through the chemical literature to see what sort of organic molecules could react with HCl at warmer temperatures to break it apart.

    “Lo and behold, I learned that HCl is extremely soluble in a whole broad range of organic species,” Solomon says. “It likes to glom on to lots of compounds.”

    The question, then, was whether the Australian wildfires released any of those compounds that could have triggered HCl’s breakup and any subsequent depletion of ozone. When the team looked at the composition of smoke particles in the first days after the fires, the picture was anything but clear.

    “I looked at that stuff and threw up my hands and thought, there’s so much stuff in there, how am I ever going to figure this out?” Solomon recalls. “But then I realized it had actually taken some weeks before you saw the HCl drop, so you really need to look at the data on aged wildfire particles.”

    When the team expanded their search, they found that smoke particles persisted over months, circulating in the stratosphere at mid-latitudes, in the same regions and times when concentrations of HCl dropped.

    “It’s the aged smoke particles that really take up a lot of the HCl,” Solomon says. “And then you get, amazingly, the same reactions that you get in the ozone hole, but over mid-latitudes, at much warmer temperatures.”

    When the team incorporated this new chemical reaction into a model of atmospheric chemistry, and simulated the conditions of the Australian wildfires, they observed a 5 percent depletion of ozone throughout the stratosphere at mid-latitudes, and a 10 percent widening of the ozone hole over Antarctica.

    The reaction with HCl is likely the main pathway by which wildfires can deplete ozone. But Solomon suspects there may be other chlorine-containing compounds drifting in the stratosphere that wildfires could unlock.

    “There’s now sort of a race against time,” Solomon says. “Hopefully, chlorine-containing compounds will have been destroyed, before the frequency of fires increases with climate change. This is all the more reason to be vigilant about global warming and these chlorine-containing compounds.”

    This research was supported, in part, by NASA and the U.S. National Science Foundation.