More stories

  • Microbes and minerals may have set off Earth’s oxygenation

    For the first 2 billion years of Earth’s history, there was barely any oxygen in the air. While some microbes were photosynthesizing by the latter part of this period, oxygen had not yet accumulated at levels that would impact the global biosphere.

    But somewhere around 2.3 billion years ago, this stable, low-oxygen equilibrium shifted, and oxygen began building up in the atmosphere, eventually reaching the life-sustaining levels we breathe today. This rapid infusion is known as the Great Oxygenation Event, or GOE. What triggered the event and pulled the planet out of its low-oxygen funk is one of the great mysteries of science.

    A new hypothesis, proposed by MIT scientists, suggests that oxygen finally started accumulating in the atmosphere thanks to interactions between certain marine microbes and minerals in ocean sediments. These interactions helped prevent oxygen from being consumed, setting off a self-amplifying process where more and more oxygen was made available to accumulate in the atmosphere.

    The scientists have laid out their hypothesis using mathematical and evolutionary analyses, showing that there were indeed microbes that existed before the GOE and evolved the ability to interact with sediment in the way that the researchers have proposed.

    Their study, appearing today in Nature Communications, is the first to connect the co-evolution of microbes and minerals to Earth’s oxygenation.

    “Probably the most important biogeochemical change in the history of the planet was oxygenation of the atmosphere,” says study author Daniel Rothman, professor of geophysics in MIT’s Department of Earth, Atmospheric, and Planetary Sciences (EAPS). “We show how the interactions of microbes, minerals, and the geochemical environment acted in concert to increase oxygen in the atmosphere.”

    The study’s co-authors include lead author Haitao Shang, a former MIT graduate student, and Gregory Fournier, associate professor of geobiology in EAPS.

    A step up

    Today’s oxygen levels in the atmosphere are a stable balance between processes that produce oxygen and those that consume it. Prior to the GOE, the atmosphere maintained a different kind of equilibrium, with producers and consumers of oxygen in balance, but in a way that didn’t leave much extra oxygen for the atmosphere.

    What could have pushed the planet out of one stable, oxygen-deficient state to another stable, oxygen-rich state?

    “If you look at Earth’s history, it appears there were two jumps, where you went from a steady state of low oxygen to a steady state of much higher oxygen, once in the Paleoproterozoic, once in the Neoproterozoic,” Fournier notes. “These jumps couldn’t have been because of a gradual increase in excess oxygen. There had to have been some feedback loop that caused this step-change in stability.”

    He and his colleagues wondered whether such a positive feedback loop could have come from a process in the ocean that made some organic carbon unavailable to its consumers. Organic carbon is mainly consumed through oxidation, usually accompanied by the consumption of oxygen — a process by which microbes in the ocean use oxygen to break down organic matter, such as detritus that has settled in sediment. The team wondered: Could there have been some process by which the presence of oxygen stimulated its further accumulation?

    Shang and Rothman worked out a mathematical model that made the following prediction: If microbes possessed the ability to only partially oxidize organic matter, the partially-oxidized matter, or “POOM,” would effectively become “sticky,” and chemically bind to minerals in sediment in a way that would protect the material from further oxidation. The oxygen that would otherwise have been consumed to fully degrade the material would instead be free to build up in the atmosphere. This process, they found, could serve as a positive feedback, providing a natural pump to push the atmosphere into a new, high-oxygen equilibrium.
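    The kind of behavior the model predicts can be illustrated with a toy dynamical system. The sketch below is my own illustration, not the authors' actual equations: a constant oxygen source, a linear consumption term, and a saturating positive-feedback term standing in for POOM-mineral protection, which frees up more oxygen as oxygen rises. All parameter values are made up for illustration; the point is that such a feedback yields two stable equilibria, so a push past a threshold tips the system from a low-oxygen state to a high-oxygen one.

```python
# Toy bistable model of atmospheric oxygen (illustrative only, NOT the
# authors' equations). p = constant production, b*O = linear consumption,
# and a saturating feedback term mimics POOM-mineral protection that
# frees more oxygen to accumulate as oxygen rises.
def dO_dt(O, p=0.1, a=2.0, b=1.0):
    feedback = a * O**2 / (1.0 + O**2)  # oxygen-stimulated protection
    return p + feedback - b * O         # net rate of oxygen change

def equilibrate(O0, dt=0.01, steps=20000):
    """Forward-Euler integration until the system settles."""
    O = O0
    for _ in range(steps):
        O += dt * dO_dt(O)
    return O

low_state = equilibrate(0.05)   # starts oxygen-poor, stays oxygen-poor
high_state = equilibrate(0.60)  # past the threshold, tips to high oxygen
print(low_state, high_state)
```

    With these parameters the system settles near a low-oxygen equilibrium (roughly 0.13 in these arbitrary units) when started low, but locks into a much higher equilibrium (roughly 1.5) once pushed past an unstable threshold near 0.5, mirroring the step-change in stability the researchers describe.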

    “That led us to ask, is there a microbial metabolism out there that produced POOM?” Fournier says.

    In the genes

    To answer this, the team searched through the scientific literature and identified a group of microbes that partially oxidizes organic matter in the deep ocean today. These microbes belong to the bacterial group SAR202, and their partial oxidation is carried out through an enzyme, Baeyer-Villiger monooxygenase, or BVMO.

    The team carried out a phylogenetic analysis to see how far back the microbe, and the gene for the enzyme, could be traced. They found that the bacteria did indeed have ancestors dating back before the GOE, and that the gene for the enzyme could be traced across various microbial species, as far back as pre-GOE times.

    What’s more, they found that the gene’s diversification, or the number of species that acquired the gene, increased significantly during times when the atmosphere experienced spikes in oxygenation, including during the Paleoproterozoic, when the GOE occurred, and again in the Neoproterozoic.

    “We found some temporal correlations between diversification of POOM-producing genes, and the oxygen levels in the atmosphere,” Shang says. “That supports our overall theory.”

    Confirming this hypothesis will require far more follow-up, from experiments in the lab to surveys in the field, and everything in between. With their new study, the team has introduced a new suspect in the age-old case of what oxygenated Earth’s atmosphere.

    “Proposing a novel method, and showing evidence for its plausibility, is the first but important step,” Fournier says. “We’ve identified this as a theory worthy of study.”

    This work was supported in part by the mTerra Catalyst Fund and the National Science Foundation.

  • Study: Ice flow is more sensitive to stress than previously thought

    The rate of glacier ice flow is more sensitive to stress than previously calculated, according to a new study by MIT researchers that upends a decades-old equation used to describe ice flow.

    Stress in this case refers to the forces acting on Antarctic glaciers, which are primarily influenced by gravity that drags the ice down toward lower elevations. Viscous glacier ice flows “really similarly to honey,” explains Joanna Millstein, a PhD student in the Glacier Dynamics and Remote Sensing Group and lead author of the study. “If you squeeze honey in the center of a piece of toast, and it piles up there before oozing outward, that’s the exact same motion that’s happening for ice.”

    The revision to the equation proposed by Millstein and her colleagues should improve models for making predictions about the ice flow of glaciers. This could help glaciologists predict how Antarctic ice flow might contribute to future sea level rise, although Millstein says the change is unlikely to raise sea level rise estimates beyond the maximum levels already predicted by climate models.

    “Almost all our uncertainties about sea level rise coming from Antarctica have to do with the physics of ice flow, though, so this will hopefully be a constraint on that uncertainty,” she says.

    Other authors on the paper, published in Nature Communications Earth and Environment, include Brent Minchew, the Cecil and Ida Green Career Development Professor in MIT’s Department of Earth, Atmospheric, and Planetary Sciences, and Samuel Pegler, a university academic fellow at the University of Leeds.

    Benefits of big data

    The equation in question, called Glen’s Flow Law, is the most widely used equation to describe viscous ice flow. It was developed in 1958 by British scientist J.W. Glen, one of the few glaciologists working on the physics of ice flow in the 1950s, according to Millstein.

    With relatively few scientists working in the field until recently, along with the remoteness and inaccessibility of most large glacier ice sheets, there were few attempts to calibrate Glen’s Flow Law outside the lab. In the new study, Millstein and her colleagues took advantage of a wealth of satellite imagery over Antarctic ice shelves, the floating extensions of the continent’s ice sheet, to revise the stress exponent of the flow law.
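    Glen's Flow Law relates strain rate (how fast the ice deforms) to stress through a power law: strain rate = A · stressⁿ. The sketch below shows why the stress exponent n matters so much; n = 3 is the textbook value, and the n = 4 comparison is purely illustrative, not the paper's fitted value.

```python
# Glen's Flow Law: strain rate = A * stress**n, where A absorbs
# temperature and ice-property effects and n is the stress exponent.
def glen_strain_rate(stress, A=1.0, n=3):
    return A * stress**n

# Doubling the stress multiplies the flow rate by 2**n, so a higher
# exponent makes flow far more sensitive to the same stress change.
ratio_n3 = glen_strain_rate(2.0, n=3) / glen_strain_rate(1.0, n=3)
ratio_n4 = glen_strain_rate(2.0, n=4) / glen_strain_rate(1.0, n=4)
print(ratio_n3, ratio_n4)  # 8.0 16.0
```

    With the textbook exponent, doubling the stress speeds up flow eightfold; raising the exponent by just one makes the same doubling produce a sixteenfold speedup, which is why revising n changes predictions for the fastest-changing regions of Antarctica.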

    “In 2002, this major ice shelf [Larsen B] collapsed in Antarctica, and all we have from that collapse is two satellite images that are a month apart,” she says. “Now, over that same area we can get [imagery] every six days.”

    The new analysis shows that “the ice flow in the most dynamic, fastest-changing regions of Antarctica — the ice shelves, which basically hold back and hug the interior of the continental ice — is more sensitive to stress than commonly assumed,” Millstein says. She’s optimistic that the growing record of satellite data will help capture rapid changes on Antarctica in the future, providing insights into the underlying physical processes of glaciers.   

    But stress isn’t the only thing that affects ice flow, the researchers note. Other parts of the flow law equation represent differences in temperature, ice grain size and orientation, and impurities and water contained in the ice — all of which can alter flow velocity. Factors like temperature could be especially important in understanding how ice flow impacts sea level rise in the future, Millstein says.

    Cracking under strain

    Millstein and colleagues are also studying the mechanics of ice sheet collapse, which involves different physical models than those used to understand the ice flow problem. “The cracking and breaking of ice is what we’re working on now, using strain rate observations,” Millstein says.

    The researchers use InSAR, radar images of the Earth’s surface collected by satellites, to observe deformations of the ice sheets that can be used to make precise measurements of strain. By observing areas of ice with high strain rates, they hope to better understand the rate at which crevasses and rifts propagate to trigger collapse.

    The research was supported by the National Science Foundation.

  • Study reveals chemical link between wildfire smoke and ozone depletion

    The Australian wildfires in 2019 and 2020 were historic for how far and fast they spread, and for how long and powerfully they burned. All told, the devastating “Black Summer” fires blazed across more than 43 million acres of land, and killed or displaced nearly 3 billion animals. The fires also injected over 1 million tons of smoke particles into the atmosphere, reaching up to 35 kilometers above Earth’s surface — a mass and reach comparable to that of an erupting volcano.

    Now, atmospheric chemists at MIT have found that the smoke from those fires set off chemical reactions in the stratosphere that contributed to the destruction of ozone, which shields the Earth from incoming ultraviolet radiation. The team’s study, appearing this week in the Proceedings of the National Academy of Sciences, is the first to establish a chemical link between wildfire smoke and ozone depletion.

    In March 2020, shortly after the fires subsided, the team observed a sharp drop in nitrogen dioxide in the stratosphere, which is the first step in a chemical cascade that is known to end in ozone depletion. The researchers found that this drop in nitrogen dioxide directly correlates with the amount of smoke that the fires released into the stratosphere. They estimate that this smoke-induced chemistry depleted the column of ozone by 1 percent.

    To put this in context, they note that the phaseout of ozone-depleting gases under a worldwide agreement to stop their production has led to about a 1 percent ozone recovery from earlier ozone decreases over the past 10 years — meaning that the wildfires canceled those hard-won diplomatic gains for a short period. If future wildfires grow stronger and more frequent, as they are predicted to do with climate change, ozone’s projected recovery could be delayed by years. 

    “The Australian fires look like the biggest event so far, but as the world continues to warm, there is every reason to think these fires will become more frequent and more intense,” says lead author Susan Solomon, the Lee and Geraldine Martin Professor of Environmental Studies at MIT. “It’s another wakeup call, just as the Antarctic ozone hole was, in the sense of showing how bad things could actually be.”

    The study’s co-authors include Kane Stone, a research scientist in MIT’s Department of Earth, Atmospheric, and Planetary Sciences, along with collaborators at multiple institutions including the University of Saskatchewan, Jinan University, the National Center for Atmospheric Research, and the University of Colorado at Boulder.

    Chemical trace

    Massive wildfires are known to generate pyrocumulonimbus — towering clouds of smoke that can reach into the stratosphere, the layer of the atmosphere that lies between about 15 and 50 kilometers above the Earth’s surface. The smoke from Australia’s wildfires reached well into the stratosphere, as high as 35 kilometers.

    In 2021, Solomon’s co-author, Pengfei Yu at Jinan University, carried out a separate study of the fires’ impacts and found that the accumulated smoke warmed parts of the stratosphere by as much as 2 degrees Celsius — a warming that persisted for six months. The study also found hints of ozone destruction in the Southern Hemisphere following the fires.

    Solomon wondered whether smoke from the fires could have depleted ozone through a chemistry similar to volcanic aerosols. Major volcanic eruptions can also reach into the stratosphere, and in 1989, Solomon discovered that the particles in these eruptions can destroy ozone through a series of chemical reactions. As the particles form in the atmosphere, they gather moisture on their surfaces. Once wet, the particles can react with circulating chemicals in the stratosphere, including dinitrogen pentoxide, which reacts with the particles to form nitric acid.

    Normally, sunlight breaks dinitrogen pentoxide down into various nitrogen species, including nitrogen dioxide, a compound that binds with chlorine-containing chemicals in the stratosphere. When volcanic aerosols convert dinitrogen pentoxide into nitric acid instead, nitrogen dioxide drops, and the chlorine compounds take another path, morphing into chlorine monoxide, the main human-made agent that destroys ozone.

    “This chemistry, once you get past that point, is well-established,” Solomon says. “Once you have less nitrogen dioxide, you have to have more chlorine monoxide, and that will deplete ozone.”

    Cloud injection

    In the new study, Solomon and her colleagues looked at how concentrations of nitrogen dioxide in the stratosphere changed following the Australian fires. If these concentrations dropped significantly, it would signal that wildfire smoke depletes ozone through the same chemical reactions as some volcanic eruptions.

    The team looked to observations of nitrogen dioxide taken by three independent satellites that have surveyed the Southern Hemisphere for varying lengths of time. They compared each satellite’s record in the months and years leading up to and following the Australian fires. All three records showed a significant drop in nitrogen dioxide in March 2020. For one satellite’s record, the drop represented a record low among observations spanning the last 20 years.

    To check that the nitrogen dioxide decrease was a direct chemical effect of the fires’ smoke, the researchers carried out atmospheric simulations using a global, three-dimensional model that simulates hundreds of chemical reactions in the atmosphere, from the surface on up through the stratosphere.

    The team injected a cloud of smoke particles into the model, simulating what was observed from the Australian wildfires. They assumed that the particles, like volcanic aerosols, gathered moisture. They then ran the model multiple times and compared the results to simulations without the smoke cloud.

    In every simulation incorporating wildfire smoke, the team found that as the amount of smoke particles increased in the stratosphere, concentrations of nitrogen dioxide decreased, matching the observations of the three satellites.

    “The behavior we saw, of more and more aerosols, and less and less nitrogen dioxide, in both the model and the data, is a fantastic fingerprint,” Solomon says. “It’s the first time that science has established a chemical mechanism linking wildfire smoke to ozone depletion. It may only be one chemical mechanism among several, but it’s clearly there. It tells us these particles are wet and they had to have caused some ozone depletion.”

    She and her collaborators are looking into other reactions triggered by wildfire smoke that might further contribute to stripping ozone. For the time being, the major driver of ozone depletion remains chlorofluorocarbons, or CFCs — chemicals such as old refrigerants that have been banned under the Montreal Protocol, though they continue to linger in the stratosphere. But as global warming leads to stronger, more frequent wildfires, their smoke could have a serious, lasting impact on ozone.

    “Wildfire smoke is a toxic brew of organic compounds that are complex beasts,” Solomon says. “And I’m afraid ozone is getting pummeled by a whole series of reactions that we are now furiously working to unravel.”

    This research was supported in part by the National Science Foundation and NASA.

  • Solar-powered system offers a route to inexpensive desalination

    An estimated two-thirds of humanity is affected by shortages of water, and many such areas in the developing world also face a lack of dependable electricity. Widespread research efforts have thus focused on ways to desalinate seawater or brackish water using just solar heat. Many such efforts have run into problems with fouling of equipment caused by salt buildup, however, which often adds complexity and expense.

    Now, a team of researchers at MIT and in China has come up with a solution to the problem of salt accumulation — and in the process developed a desalination system that is both more efficient and less expensive than previous solar desalination methods. The process could also be used to treat contaminated wastewater or to generate steam for sterilizing medical instruments, all without requiring any power source other than sunlight itself.

    The findings are described today in the journal Nature Communications, in a paper by MIT graduate student Lenan Zhang, postdoc Xiangyu Li, professor of mechanical engineering Evelyn Wang, and four others.

    “There have been a lot of demonstrations of really high-performing, salt-rejecting, solar-based evaporation designs of various devices,” Wang says. “The challenge has been the salt fouling issue, that people haven’t really addressed. So, we see these very attractive performance numbers, but they’re often limited because of longevity. Over time, things will foul.”

    Many attempts at solar desalination systems rely on some kind of wick to draw the saline water through the device, but these wicks are vulnerable to salt accumulation and relatively difficult to clean. The team focused on developing a wick-free system instead. The result is a layered system, with dark material at the top to absorb the sun’s heat, then a thin layer of water above a perforated layer of material, sitting atop a deep reservoir of the salty water such as a tank or a pond. After careful calculations and experiments, the researchers determined the optimal size for the holes drilled through the perforated material, which in their tests was made of polyurethane. At 2.5 millimeters across, these holes can be easily made using commonly available waterjets.

    The holes are large enough to allow for a natural convective circulation between the warmer upper layer of water and the colder reservoir below. That circulation naturally draws the salt from the thin layer above down into the much larger body of water below, where it becomes well-diluted and no longer a problem. “It allows us to achieve high performance and yet also prevent this salt accumulation,” says Wang, who is the Ford Professor of Engineering and head of the Department of Mechanical Engineering.

    Li says that the advantages of this system are “both the high performance and the reliable operation, especially under extreme conditions, where we can actually work with near-saturation saline water. And that means it’s also very useful for wastewater treatment.”

    He adds that much work on such solar-powered desalination has focused on novel materials. “But in our case, we use really low-cost, almost household materials.” The key was analyzing and understanding the convective flow that drives this entirely passive system, he says. “People say you always need new materials, expensive ones, or complicated structures or wicking structures to do that. And this is, I believe, the first one that does this without wicking structures.”

    This new approach “provides a promising and efficient path for desalination of high salinity solutions, and could be a game changer in solar water desalination,” says Hadi Ghasemi, a professor of chemical and biomolecular engineering at the University of Houston, who was not associated with this work. “Further work is required for assessment of this concept in large settings and in long runs,” he adds.

    Just as hot air rises and cold air falls, Zhang explains, natural convection drives the desalination process in this device. In the confined water layer near the top, “the evaporation happens at the very top interface. Because of the salt, the density of water at the very top interface is higher, and the bottom water has lower density. So, this is an original driving force for this natural convection because the higher density at the top drives the salty liquid to go down.” The water evaporated from the top of the system can then be collected on a condensing surface, providing pure fresh water.

    The rejection of salt to the water below could also cause heat to be lost in the process, so preventing that required careful engineering, including making the perforated layer out of highly insulating material to keep the heat concentrated above. The solar heating at the top is accomplished through a simple layer of black paint.

    [Animation] Fluid flow visualized by food dye. The left side shows the slow transport of colored de-ionized water from the top to the bottom bulk water. The right side shows the fast transport of colored saline water from the top to the bottom bulk water, driven by the natural convection effect.

    So far, the team has proven the concept using small benchtop devices, so the next step will be starting to scale up to devices that could have practical applications. Based on their calculations, a system with just 1 square meter (about a square yard) of collecting area should be sufficient to provide a family’s daily needs for drinking water, they say. Zhang says they calculated that the necessary materials for a 1-square-meter device would cost only about $4.
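    A back-of-envelope check makes the 1-square-meter claim plausible. The numbers below are my own standard assumptions (peak solar irradiance, equivalent peak-sun hours, and an assumed evaporation efficiency), not figures from the paper:

```python
# Rough estimate of daily fresh-water yield from 1 m^2 of solar
# evaporator. All inputs are generic assumptions, not the paper's data.
SOLAR_FLUX = 1000.0      # W/m^2, approximate peak solar irradiance
PEAK_SUN_HOURS = 6.0     # assumed equivalent hours of peak sun per day
EFFICIENCY = 0.8         # assumed fraction of absorbed heat that
                         # actually drives evaporation
LATENT_HEAT = 2.26e6     # J/kg to vaporize water

energy_per_day = SOLAR_FLUX * PEAK_SUN_HOURS * 3600 * EFFICIENCY  # J/m^2
liters_per_day = energy_per_day / LATENT_HEAT  # kg of water ~ liters
print(round(liters_per_day, 1))  # ~7.6 liters per day per m^2
```

    A few liters per person per day of drinking water is a common planning figure, so several liters per square meter per day is consistent with one collector covering a small family's drinking needs.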

    Their test apparatus operated for a week with no signs of any salt accumulation, Li says. And the device is remarkably stable. “Even if we apply some extreme perturbation, like waves on the seawater or the lake,” where such a device could be installed as a floating platform, “it can return to its original equilibrium position very fast,” he says.

    The necessary work to translate this lab-scale proof of concept into workable commercial devices, and to improve the overall water production rate, should be possible within a few years, Zhang says. The first applications are likely to be providing safe water in remote off-grid locations, or for disaster relief after hurricanes, earthquakes, or other disruptions of normal water supplies.

    Zhang adds that “if we can concentrate the sunlight a little bit, we could use this passive device to generate high-temperature steam to do medical sterilization” for off-grid rural areas.

    “I think a real opportunity is the developing world,” Wang says. “I think that is where there’s most probable impact near-term, because of the simplicity of the design.” But, she adds, “if we really want to get it out there, we also need to work with the end users, to really be able to adopt the way we design it so that they’re willing to use it.”

    “This is a new strategy toward solving the salt accumulation problem in solar evaporation,” says Peng Wang, a professor at King Abdullah University of Science and Technology in Saudi Arabia, who was not associated with this research. “This elegant design will inspire new innovations in the design of advanced solar evaporators. The strategy is very promising due to its high energy efficiency, operation durability, and low cost, which contributes to low-cost and passive water desalination to produce fresh water from various source water with high salinity, e.g., seawater, brine, or brackish groundwater.”

    The team also included Yang Zhong, Arny Leroy, and Lin Zhao at MIT, and Zhenyuan Xu at Shanghai Jiao Tong University in China. The work was supported by the Singapore-MIT Alliance for Research and Technology, the U.S.-Egypt Science and Technology Joint Fund, and used facilities supported by the National Science Foundation.

  • How marsh grass protects shorelines

    Marsh plants, which are ubiquitous along the world’s shorelines, can play a major role in mitigating the damage to coastlines as sea levels rise and storm surges increase. Now, a new MIT study provides greater detail about how these protective benefits work under real-world conditions shaped by waves and currents.

    The study combined laboratory experiments using simulated plants in a large wave tank along with mathematical modeling. It appears in the journal Physical Review — Fluids, in a paper by former MIT visiting doctoral student Xiaoxia Zhang, now a postdoc at Dalian University of Technology, and professor of civil and environmental engineering Heidi Nepf.

    It’s already clear that coastal marsh plants provide significant protection from surges and devastating storms. For example, it has been estimated that the damage caused by Hurricane Sandy was reduced by $625 million thanks to the damping of wave energy provided by extensive areas of marsh along the affected coasts. But the new MIT analysis incorporates details of plant morphology, such as the number and spacing of flexible leaves versus stiffer stems, and the complex interactions of currents and waves that may be coming from different directions.

    This level of detail could enable coastal restoration planners to determine the area of marsh needed to mitigate expected amounts of storm surge or sea-level rise, and to decide which types of plants to introduce to maximize protection.

    “When you go to a marsh, you often will see that the plants are arranged in zones,” says Nepf, who is the Donald and Martha Harleman Professor of Civil and Environmental Engineering. “Along the edge, you tend to have plants that are more flexible, because they are using their flexibility to reduce the wave forces they feel. In the next zone, the plants are a little more rigid and have a bit more leaves.”

    As the zones progress, the plants become stiffer, leafier, and more effective at absorbing wave energy thanks to their greater leaf area. The new modeling done in this research, which incorporated work with simulated plants in the 24-meter-long wave tank at MIT’s Parsons Lab, can enable coastal planners to take these kinds of details into account when planning protection, mitigation, or restoration projects.

    “If you put the stiffest plants at the edge, they might not survive, because they’re feeling very high wave forces. By describing why Mother Nature organizes plants in this way, we can hopefully design a more sustainable restoration,” Nepf says.

    Once established, the marsh plants provide a positive feedback cycle that helps to not only stabilize but also build up these delicate coastal lands, Zhang says. “After a few years, the marsh grasses start to trap and hold the sediment, and the elevation gets higher and higher, which might keep up with sea level rise,” she says.

    Awareness of the protective effects of marshland has been growing, Nepf says. For example, the Netherlands has been restoring lost marshland outside the dikes that surround much of the nation’s agricultural land, finding that the marsh can protect the dikes from erosion; the marsh and dikes work together much more effectively than the dikes alone at preventing flooding.

    But most such efforts so far have been largely empirical, trial-and-error plans, Nepf says. Now, they could take advantage of this modeling to know just how much marshland with what types of plants would be needed to provide the desired level of protection.

    It also provides a more quantitative way to estimate the value provided by marshes, she says. “It could allow you to more accurately say, ‘40 meters of marsh will reduce waves this much and therefore will reduce overtopping of your levee by this much.’ Someone could use that to say, ‘I’m going to save this much money over the next 10 years if I reduce flooding by maintaining this marsh.’ It might help generate some political motivation for restoration efforts.”

    Nepf herself is already trying to get some of these findings included in coastal planning processes. She serves on a practitioner panel led by Chris Esposito of the Water Institute of the Gulf, which serves the storm-battered Louisiana coastline. “We’d like to get this work into the coastal simulations that are used for large-scale restoration and coastal planning,” she says.

    “Understanding the wave damping process in real vegetation wetlands is of critical value, as it is needed in the assessment of the coastal defense value of these wetlands,” says Zhan Hu, an associate professor of marine sciences at Sun Yat-Sen University, who was not associated with this work. “The challenge, however, lies in the quantitative representation of the wave damping process, in which many factors are at play, such as plant flexibility, morphology, and coexisting currents.”

    The new study, Hu says, “neatly combines experimental findings and analytical modeling to reveal the impact of each factor in the wave damping process. … Overall, this work is a solid step forward toward a more accurate assessment of wave damping capacity of real coastal wetlands, which is needed for science-based design and management of nature-based coastal protection.”

    The work was partly supported by the National Science Foundation and the China Scholarship Council.

  • A robot that finds lost items

    A busy commuter is ready to walk out the door, only to realize they’ve misplaced their keys and must search through piles of stuff to find them. Rapidly sifting through clutter, they wish they could figure out which pile was hiding the keys.

    Researchers at MIT have created a robotic system that can do just that. The system, RFusion, is a robotic arm with a camera and radio frequency (RF) antenna attached to its gripper. It fuses signals from the antenna with visual input from the camera to locate and retrieve an item, even if the item is buried under a pile and completely out of view.

    The RFusion prototype the researchers developed relies on RFID tags, which are cheap, battery-less tags that can be stuck to an item and reflect signals sent by an antenna. Because RF signals can travel through most surfaces (like the mound of dirty laundry that may be obscuring the keys), RFusion is able to locate a tagged item within a pile.

    Using machine learning, the robotic arm automatically zeroes in on the object’s exact location, moves the items on top of it, grasps the object, and verifies that it picked up the right thing. The camera, antenna, robotic arm, and AI are fully integrated, so RFusion can work in any environment without requiring a special setup.

    While finding lost keys is helpful, RFusion could have many broader applications in the future, like sorting through piles to fulfill orders in a warehouse, identifying and installing components in an auto manufacturing plant, or helping an elderly individual perform daily tasks in the home, though the current prototype isn’t quite fast enough yet for these uses.

    “This idea of being able to find items in a chaotic world is an open problem that we’ve been working on for a few years. Having robots that are able to search for things under a pile is a growing need in industry today. Right now, you can think of this as a Roomba on steroids, but in the near term, this could have a lot of applications in manufacturing and warehouse environments,” said senior author Fadel Adib, associate professor in the Department of Electrical Engineering and Computer Science and director of the Signal Kinetics group in the MIT Media Lab.

    Co-authors include research assistant Tara Boroushaki, the lead author; electrical engineering and computer science graduate student Isaac Perper; research associate Mergen Nachin; and Alberto Rodriguez, the Class of 1957 Associate Professor in the Department of Mechanical Engineering. The research will be presented at the Association for Computing Machinery Conference on Embedded Networked Sensor Systems next month.

    Sending signals

    RFusion begins searching for an object using its antenna, which bounces signals off the RFID tag (like sunlight being reflected off a mirror) to identify a spherical area in which the tag is located. It combines that sphere with the camera input, which narrows down the object’s location. For instance, the item can’t be located on an area of a table that is empty.
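    The intersection of the RF sphere with the camera’s view of the scene can be sketched as follows. This is a hypothetical simplification, not the RFusion implementation; the function and parameter names are illustrative.

    ```python
    import numpy as np

    # Minimal sketch of the fusion step: an RF range measurement places the
    # tag somewhere on a sphere around the antenna, and the camera rules out
    # regions that are visibly empty, shrinking the candidate set.

    def candidate_locations(antenna_pos, rf_range, grid_points, occupied_mask, tol=0.05):
        """Keep candidate points that lie on the RF sphere AND could hide an object.

        antenna_pos   : (3,) antenna position, in meters
        rf_range      : tag distance estimated from the RF signal, in meters
        grid_points   : (N, 3) candidate positions in the workspace
        occupied_mask : (N,) True where the camera sees clutter that could hide the tag
        tol           : tolerance band around the sphere, in meters
        """
        dists = np.linalg.norm(grid_points - antenna_pos, axis=1)
        on_sphere = np.abs(dists - rf_range) < tol
        return grid_points[on_sphere & occupied_mask]
    ```

    An empty patch of table, for example, is dropped even if it sits exactly on the RF sphere.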

    But once the robot has a general idea of where the item is, it would still need to swing its arm widely around the room, taking additional measurements to pinpoint the exact location, which is slow and inefficient.

    The researchers used reinforcement learning to train a neural network that can optimize the robot’s trajectory to the object. In reinforcement learning, the algorithm is trained through trial and error with a reward system.

    “This is also how our brain learns. We get rewarded from our teachers, from our parents, from a computer game, etc. The same thing happens in reinforcement learning. We let the agent make mistakes or do something right and then we punish or reward the network. This is how the network learns something that is really hard for it to model,” Boroushaki explains.

    In the case of RFusion, the optimization algorithm was rewarded when it limited the number of moves it had to make to localize the item and the distance it had to travel to pick it up.
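    The reward described above might be sketched like this. The formulation and the penalty weights are my own illustration; the article does not publish RFusion’s actual reward function.

    ```python
    # Illustrative reward (made-up weights): each extra antenna measurement and
    # each meter traveled is penalized, and localizing the tag earns a bonus,
    # so the agent is pushed toward fewer moves and shorter trajectories.

    def step_reward(moved_distance, localized, move_penalty=1.0,
                    distance_penalty=0.5, success_bonus=10.0):
        reward = -move_penalty - distance_penalty * moved_distance
        if localized:
            reward += success_bonus
        return reward
    ```

    Summed over an episode, a policy that localizes the tag in fewer, shorter moves collects a strictly higher return.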

    Once the system identifies the exact right spot, the neural network uses combined RF and visual information to predict how the robotic arm should grasp the object, including the angle of the hand and the width of the gripper, and whether it must remove other items first. It also scans the item’s tag one last time to make sure it picked up the right object.

    Cutting through clutter

    The researchers tested RFusion in several different environments. They buried a keychain in a box full of clutter and hid a remote control under a pile of items on a couch.

    But if they fed all the camera data and RF measurements to the reinforcement learning algorithm, it would have overwhelmed the system. So, drawing on the method a GPS uses to consolidate data from satellites, they summarized the RF measurements and limited the visual data to the area right in front of the robot.
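    One way to picture that summarization (a hypothetical simplification, not the team’s actual method) is to replace the full measurement history with a compact running statistic:

    ```python
    # Sketch of the summarization idea: rather than feeding every raw RF
    # measurement to the learning algorithm, keep an incremental mean and a
    # count, the way a GPS receiver condenses many satellite measurements
    # into a single position estimate.

    class RFSummary:
        def __init__(self):
            self.count = 0
            self.mean = 0.0

        def add(self, measurement):
            """Fold one new measurement into the running mean."""
            self.count += 1
            self.mean += (measurement - self.mean) / self.count
    ```

    The learning algorithm then sees a fixed-size summary no matter how many measurements have been taken.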

    Their approach worked well — RFusion had a 96 percent success rate when retrieving objects that were fully hidden under a pile.

    “Sometimes, if you only rely on RF measurements, there is going to be an outlier, and if you rely only on vision, there is sometimes going to be a mistake from the camera. But if you combine them, they are going to correct each other. That is what made the system so robust,” Boroushaki says.

    In the future, the researchers hope to increase the speed of the system so it can move smoothly, rather than stopping periodically to take measurements. This would enable RFusion to be deployed in a fast-paced manufacturing or warehouse setting.

    Beyond its potential industrial uses, a system like this could even be incorporated into future smart homes to assist people with any number of household tasks, Boroushaki says.

    “Every year, billions of RFID tags are used to identify objects in today’s complex supply chains, including clothing and lots of other consumer goods. The RFusion approach points the way to autonomous robots that can dig through a pile of mixed items and sort them out using the data stored in the RFID tags, much more efficiently than having to inspect each item individually, especially when the items look similar to a computer vision system,” says Matthew S. Reynolds, CoMotion Presidential Innovation Fellow and associate professor of electrical and computer engineering at the University of Washington, who was not involved in the research. “The RFusion approach is a great step forward for robotics operating in complex supply chains where identifying and ‘picking’ the right item quickly and accurately is the key to getting orders fulfilled on time and keeping demanding customers happy.”

    The research is sponsored by the National Science Foundation, a Sloan Research Fellowship, NTT DATA, Toppan, Toppan Forms, and the Abdul Latif Jameel Water and Food Systems Lab.

  • in

    Zeroing in on the origins of Earth’s “single most important evolutionary innovation”

    Some time in Earth’s early history, the planet took a turn toward habitability when a group of enterprising microbes known as cyanobacteria evolved oxygenic photosynthesis — the ability to turn light and water into energy, releasing oxygen in the process.

    This evolutionary moment made it possible for oxygen to eventually accumulate in the atmosphere and oceans, setting off a domino effect of diversification and shaping the uniquely habitable planet we know today.  

    Now, MIT scientists have a precise estimate for when cyanobacteria, and oxygenic photosynthesis, first originated. Their results appear today in the Proceedings of the Royal Society B.

    They developed a new gene-analyzing technique that shows that all the species of cyanobacteria living today can be traced back to a common ancestor that evolved around 2.9 billion years ago. They also found that the ancestors of cyanobacteria branched off from other bacteria around 3.4 billion years ago, with oxygenic photosynthesis likely evolving during the intervening half-billion years, during the Archean Eon.

    Interestingly, this estimate places the appearance of oxygenic photosynthesis at least 400 million years before the Great Oxidation Event, a period in which the Earth’s atmosphere and oceans first experienced a rise in oxygen. This suggests that cyanobacteria may have evolved the ability to produce oxygen early on, but that it took a while for this oxygen to really take hold in the environment.

    “In evolution, things always start small,” says lead author Greg Fournier, associate professor of geobiology in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “Even though there’s evidence for early oxygenic photosynthesis — which is the single most important and really amazing evolutionary innovation on Earth — it still took hundreds of millions of years for it to take off.”

    Fournier’s MIT co-authors include Kelsey Moore, Luiz Thiberio Rangel, Jack Payette, Lily Momper, and Tanja Bosak.

    Slow fuse, or wildfire?

    Estimates for the origin of oxygenic photosynthesis vary widely, along with the methods to trace its evolution.

    For instance, scientists can use geochemical tools to look for traces of oxidized elements in ancient rocks. These methods have found hints that oxygen was present as early as 3.5 billion years ago — a sign that oxygenic photosynthesis may have been the source, although other sources are also possible.

    Researchers have also used molecular clock dating, which uses the genetic sequences of microbes today to trace back changes in genes through evolutionary history. Based on these sequences, researchers then use models to estimate the rate at which genetic changes occur, to trace when groups of organisms first evolved. But molecular clock dating is limited by the quality of ancient fossils, and the chosen rate model, which can produce different age estimates, depending on the rate that is assumed.
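    The rate sensitivity can be seen in a toy calculation (the numbers here are illustrative, not from the study): the same measured divergence implies very different ages under different assumed substitution rates.

    ```python
    # Toy illustration of why the assumed rate matters: a molecular clock
    # converts observed sequence divergence into an age estimate, so the
    # assumed substitution rate directly scales the answer.

    def clock_age(substitutions_per_site, rate_per_site_per_gyr):
        """Estimated divergence time in billions of years (Ga)."""
        return substitutions_per_site / rate_per_site_per_gyr
    ```

    Under a rate of 0.5 substitutions per site per billion years, a divergence of 1.5 substitutions per site implies 3.0 Ga; raise the assumed rate to 0.75 and the same data implies only 2.0 Ga.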

    Fournier says different age estimates can imply conflicting evolutionary narratives. For instance, some analyses suggest oxygenic photosynthesis evolved very early on and progressed “like a slow fuse,” while others indicate it appeared much later and then “took off like wildfire” to trigger the Great Oxidation Event and the accumulation of oxygen in the biosphere.

    “In order for us to understand the history of habitability on Earth, it’s important for us to distinguish between these hypotheses,” he says.

    Horizontal genes

    To precisely date the origin of cyanobacteria and oxygenic photosynthesis, Fournier and his colleagues paired molecular clock dating with horizontal gene transfer — an independent method that doesn’t rely entirely on fossils or rate assumptions.

    Normally, an organism inherits a gene “vertically,” when it is passed down from the organism’s parent. In rare instances, a gene can also jump from one species to another, distantly related species. For instance, one cell may eat another, and in the process incorporate some new genes into its genome.

    When such a horizontal gene transfer history is found, it’s clear that the group of organisms that acquired the gene is evolutionarily younger than the group from which the gene originated. Fournier reasoned that such instances could be used to determine the relative ages between certain bacterial groups. The ages for these groups could then be compared with the ages that various molecular clock models predict. The model that comes closest would likely be the most accurate, and could then be used to precisely estimate the age of other bacterial species — specifically, cyanobacteria.
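    That ordering constraint lends itself to a simple consistency check. The sketch below uses hypothetical group names, ages, and model labels; it is not the authors’ pipeline, only an illustration of the logic.

    ```python
    # Toy sketch of the constraint described above: a gene's donor lineage must
    # be at least as old as the recipient lineage that acquired it, so any
    # molecular clock model whose age estimates violate that ordering can be
    # ruled out.

    def consistent_with_hgts(ages, hgt_events):
        """ages: {group: age in Ga}; hgt_events: list of (donor, recipient) pairs."""
        return all(ages[donor] >= ages[recipient] for donor, recipient in hgt_events)

    def select_models(models, hgt_events):
        """Return the names of candidate clock models that satisfy every transfer."""
        return [name for name, ages in models.items()
                if consistent_with_hgts(ages, hgt_events)]
    ```

    A model that dates a donor group younger than its gene’s recipient is eliminated, and the surviving model can then be used to date other lineages.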

    Following this reasoning, the team looked for instances of horizontal gene transfer across the genomes of thousands of bacterial species, including cyanobacteria. They also used new cultures of modern cyanobacteria grown by Bosak and Moore, which allowed them to use fossil cyanobacteria more precisely as calibration points. In the end, they identified 34 clear instances of horizontal gene transfer. They then found that one out of six molecular clock models consistently matched the relative ages identified in the team’s horizontal gene transfer analysis.

    Fournier ran this model to estimate the age of the “crown” group of cyanobacteria, which encompasses all the species living today and known to exhibit oxygenic photosynthesis. They found that, during the Archean eon, the crown group originated around 2.9 billion years ago, while cyanobacteria as a whole branched off from other bacteria around 3.4 billion years ago. This strongly suggests that oxygenic photosynthesis was already happening 500 million years before the Great Oxidation Event (GOE), and that cyanobacteria were producing oxygen for quite a long time before it accumulated in the atmosphere.

    The analysis also revealed that, shortly before the GOE, around 2.4 billion years ago, cyanobacteria experienced a burst of diversification. This implies that a rapid expansion of cyanobacteria may have tipped the Earth into the GOE and launched oxygen into the atmosphere.

    Fournier plans to apply horizontal gene transfer beyond cyanobacteria to pin down the origins of other elusive species.

    “This work shows that molecular clocks incorporating horizontal gene transfers (HGTs) promise to reliably provide the ages of groups across the entire tree of life, even for ancient microbes that have left no fossil record … something that was previously impossible,” Fournier says. 

    This research was supported, in part, by the Simons Foundation and the National Science Foundation.

  • in

    Making catalytic surfaces more active to help decarbonize fuels and chemicals

    Electrochemical reactions that are accelerated using catalysts lie at the heart of many processes for making and using fuels, chemicals, and materials — including storing electricity from renewable energy sources in chemical bonds, an important capability for decarbonizing transportation fuels. Now, research at MIT could open the door to ways of making certain catalysts more active, and thus enhancing the efficiency of such processes.

    A new production process yielded catalysts that increased the efficiency of the chemical reactions fivefold, potentially enabling useful new processes in biochemistry, organic chemistry, environmental chemistry, and electrochemistry. The findings are described today in the journal Nature Catalysis, in a paper by Yang Shao-Horn, an MIT professor of mechanical engineering and of materials science and engineering, and a member of the Research Lab of Electronics (RLE); Tao Wang, a postdoc in RLE; Yirui Zhang, a graduate student in the Department of Mechanical Engineering; and five others.

    The process involves adding a layer of what’s called an ionic liquid in between a gold or platinum catalyst and a chemical feedstock. Catalysts produced with this method could potentially enable much more efficient conversion of hydrogen fuel to power devices such as fuel cells, or more efficient conversion of carbon dioxide into fuels.

    “There is an urgent need to decarbonize how we power transportation beyond light-duty vehicles, how we make fuels, and how we make materials and chemicals,” says Shao-Horn, emphasizing the pressing call to reduce carbon emissions highlighted in the latest IPCC report on climate change. This new approach to enhancing catalytic activity could provide an important step in that direction, she says.

    Using hydrogen in electrochemical devices such as fuel cells is one promising approach to decarbonizing fields such as aviation and heavy-duty vehicles, and the new process may help to make such uses practical. At present, the oxygen reduction reaction that powers such fuel cells is limited by its inefficiency. Previous attempts to improve that efficiency have focused on choosing different catalyst materials or modifying their surface compositions and structure.

    In this research, however, instead of modifying the solid surfaces, the team added a thin layer in between the catalyst and the electrolyte, the active material that participates in the chemical reaction. The ionic liquid layer, they found, regulates the activity of protons that help to increase the rate of the chemical reactions taking place at the interface.

    Because there is a great variety of such ionic liquids to choose from, it’s possible to “tune” proton activity and the reaction rates to match the energetics needed for processes involving proton transfer, which can be used to make fuels and chemicals through reactions with oxygen.

    “The proton activity and the barrier for proton transfer is governed by the ionic liquid layer, and so there’s a great tunability in terms of catalytic activity for reactions involving proton and electron transfer,” Shao-Horn says. And the effect is produced by a vanishingly thin layer of the liquid, just a few nanometers thick, above which is a much thicker layer of the liquid that is to undergo the reaction.

    “I think this concept is novel and important,” says Wang, the paper’s first author, “because people know the proton activity is important in many electrochemistry reactions, but it’s very challenging to study.” That’s because in a water environment, there are so many interactions between neighboring water molecules involved that it’s very difficult to separate out which reactions are taking place. By using an ionic liquid, whose ions can each only form a single bond with the intermediate material, it became possible to study the reactions in detail, using infrared spectroscopy.

    As a result, Wang says, “Our finding highlights the critical role that interfacial electrolytes, in particular the intermolecular hydrogen bonding, can play in enhancing the activity of the electro-catalytic process. It also provides fundamental insights into proton transfer mechanisms at a quantum mechanical level, which can push the frontiers of knowing how protons and electrons interact at catalytic interfaces.”

    “The work is also exciting because it gives people a design principle for how they can tune the catalysts,” says Zhang. “We need some species right at a ‘sweet spot’ — not too active or too inert — to enhance the reaction rate.”

    With some of these techniques, says Reshma Rao, a recent doctoral graduate from MIT and now a postdoc at Imperial College, London, who is also a co-author of the paper, “we see up to a five-times increase in activity. I think the most exciting part of this research is the way it opens up a whole new dimension in the way we think about catalysis.” The field had hit “a kind of roadblock,” she says, in finding ways to design better materials. By focusing on the liquid layer rather than the surface of the material, “that’s kind of a whole different way of looking at this problem, and opens up a whole new dimension, a whole new axis along which we can change things and optimize some of these reaction rates.”

    The team also included Botao Huang, Bin Cai, and Livia Giordano in MIT’s Research Laboratory of Electronics, and Shi-Gang Sun at Xiamen University in China. The work was supported by the Toyota Research Institute, and used the National Science Foundation’s Extreme Science and Engineering Discovery Environment.