More stories

  • MIT-led teams win National Science Foundation grants to research sustainable materials

    Three MIT-led teams are among 16 nationwide to receive funding awards to address sustainable materials for global challenges through the National Science Foundation’s Convergence Accelerator program. Launched in 2019, the program targets solutions to especially compelling societal or scientific challenges at an accelerated pace by incorporating a multidisciplinary research approach.

    “Solutions for today’s national-scale societal challenges are hard to solve within a single discipline. Instead, these challenges require convergence to merge ideas, approaches, and technologies from a wide range of diverse sectors, disciplines, and experts,” the NSF explains in its description of the Convergence Accelerator program. Phase 1 of the award involves planning to expand initial concepts, identify new team members, participate in an NSF development curriculum, and create an early prototype.

    Sustainable microchips

    One of the funded projects, “Building a Sustainable, Innovative Ecosystem for Microchip Manufacturing,” will be led by Anuradha Murthy Agarwal, a principal research scientist at the MIT Materials Research Laboratory. The aim of this project is to help transition microchip manufacturing to more sustainable processes that, for example, reduce the e-waste sent to landfills by allowing chips to be repaired, or that enable users to swap out a faulty chip in a motherboard rather than tossing out the entire laptop or cellphone.

    “Our goal is to help transition microchip manufacturing towards a sustainable industry,” says Agarwal. “We aim to do that by partnering with industry in a multimodal approach that prototypes technology designs to minimize energy consumption and waste generation, retrains the semiconductor workforce, and creates a roadmap for a new industrial ecology to mitigate materials-critical limitations and supply-chain constraints.”

    Agarwal’s co-principal investigators are Samuel Serna, an MIT visiting professor and assistant professor of physics at Bridgewater State University, and two MIT faculty affiliated with the Materials Research Laboratory: Juejun Hu, the John Elliott Professor of Materials Science and Engineering; and Lionel Kimerling, the Thomas Lord Professor of Materials Science and Engineering.

    The training component of the project will also create curricula for multiple audiences. “At Bridgewater State University, we will create a new undergraduate course on microchip manufacturing sustainability, and eventually adapt it for audiences from K-12, as well as incumbent employees,” says Serna.

    Sajan Saini and Erik Verlage of the MIT Department of Materials Science and Engineering (DMSE), and Randolph Kirchain from the MIT Materials Systems Laboratory, who have led MIT initiatives in virtual reality digital education, materials criticality, and roadmapping, are key contributors. The project also includes DMSE graduate students Drew Weninger and Luigi Ranno, and undergraduate Samuel Bechtold from Bridgewater State University’s Department of Physics.

    Sustainable topological materials

    Under the direction of Mingda Li, the Class of 1947 Career Development Professor and associate professor of nuclear science and engineering, the “Sustainable Topological Energy Materials (STEM) for Energy-efficient Applications” project will accelerate research in sustainable topological quantum materials.

    Topological materials are ones that retain a particular property despite external disturbances. Such materials could potentially be a boon for quantum computing, which has so far been plagued by instability, and would usher in a post-silicon era for microelectronics. Even better, says Li, topological materials can do their job without dissipating energy, even at room temperature.

    Topological materials can find a variety of applications in quantum computing, energy harvesting, and microelectronics. Despite their promise, and a few thousand potential candidates, the discovery and mass production of these materials have been challenging. Topology itself is not a measurable characteristic, so researchers have to first develop ways to find hints of it. Synthesis of materials and related process optimization can take months, if not years, Li adds. Machine learning can accelerate the discovery and vetting stage.

    Given that a best-in-class topological quantum material has the potential to disrupt the semiconductor and computing industries, Li and team are paying special attention to the environmental sustainability of prospective materials. For example, some potential candidates contain gold, lead, or cadmium, whose scarcity or toxicity does not lend itself to mass production; such candidates have been disqualified.

    Co-principal investigators on the project include Liang Fu, associate professor of physics at MIT; Tomas Palacios, professor of electrical engineering and computer science at MIT and director of the Microsystems Technology Laboratories; Susanne Stemmer of the University of California at Santa Barbara; and Qiong Ma of Boston College. The $750,000 one-year Phase 1 grant will focus on three priorities: building a topological materials database; identifying the most environmentally sustainable candidates for energy-efficient topological applications; and building the foundation for a Center for Sustainable Topological Energy Materials at MIT that will encourage industry-academia collaborations.

    At a time when the size of silicon-based electronic circuit boards is reaching its lower limit, the promise of topological materials, whose conductivity increases with decreasing size, is especially attractive, Li says. In addition, topological materials can harvest wasted heat: Imagine using your body heat to power your phone. “There are different types of application scenarios, and we can go much beyond the capabilities of existing materials,” Li says. “The possibilities of topological materials are endlessly exciting.”

    Socioresilient materials design

    Researchers in the MIT Department of Materials Science and Engineering (DMSE) have been awarded $750,000 for a cross-disciplinary project that aims to fundamentally redirect materials research and development toward more environmentally, socially, and economically sustainable and resilient materials. This “socioresilient materials design” will serve as the foundation for a new research and development framework that takes into account technical, environmental, and social factors from the beginning of the materials design and development process.

    Christine Ortiz, the Morris Cohen Professor of Materials Science and Engineering, and Ellan Spero PhD ’14, an instructor in DMSE, are leading this research effort, which includes Cornell University, the University of Swansea, Citrine Informatics, Station1, and 14 other organizations in academia, industry, venture capital, the social sector, government, and philanthropy.

    The team’s project, “Mind Over Matter: Socioresilient Materials Design,” emphasizes that circular design approaches, which aim to minimize waste and maximize the reuse, repair, and recycling of materials, are often insufficient to address negative repercussions for the planet and for human health and safety.

    Too often society understands the unintended negative consequences long after the materials that make up our homes and cities and systems have been in production and use for many years. Examples include disparate and negative public health impacts due to industrial scale manufacturing of materials, water and air contamination with harmful materials, and increased risk of fire in lower-income housing buildings due to flawed materials usage and design. Adverse climate events including drought, flood, extreme temperatures, and hurricanes have accelerated materials degradation, for example in critical infrastructure, leading to amplified environmental damage and social injustice. While classical materials design and selection approaches are insufficient to address these challenges, the new research project aims to do just that.

    “The imagination and technical expertise that goes into materials design is too often separated from the environmental and social realities of extraction, manufacturing, and end-of-life for materials,” says Ortiz. 

    Drawing on materials science and engineering, chemistry, and computer science, the project will develop a framework for materials design and development. It will incorporate powerful computational capabilities — artificial intelligence and machine learning with physics-based materials models — plus rigorous methodologies from the social sciences and the humanities to understand what impacts any new material put into production could have on society.

  • Study: Smoke particles from wildfires can erode the ozone layer

    A wildfire can pump smoke up into the stratosphere, where the particles drift for over a year. A new MIT study has found that while suspended there, these particles can trigger chemical reactions that erode the protective ozone layer shielding the Earth from the sun’s damaging ultraviolet radiation.

    The study, which appears today in Nature, focuses on the smoke from the “Black Summer” megafire in eastern Australia, which burned from December 2019 into January 2020. The fires — the country’s most devastating on record — scorched tens of millions of acres and pumped more than 1 million tons of smoke into the atmosphere.

    The MIT team identified a new chemical reaction by which smoke particles from the Australian wildfires made ozone depletion worse. By triggering this reaction, the fires likely contributed to a 3-5 percent depletion of total ozone at mid-latitudes in the Southern Hemisphere, in regions overlying Australia, New Zealand, and parts of Africa and South America.

    The researchers’ model also indicates the fires had an effect in the polar regions, eating away at the edges of the ozone hole over Antarctica. By late 2020, smoke particles from the Australian wildfires widened the Antarctic ozone hole by 2.5 million square kilometers — 10 percent of its area compared to the previous year.

    It’s unclear what long-term effect wildfires will have on ozone recovery. The United Nations recently reported that the ozone hole, and ozone depletion around the world, is on a recovery track, thanks to a sustained international effort to phase out ozone-depleting chemicals. But the MIT study suggests that as long as these chemicals persist in the atmosphere, large fires could spark a reaction that temporarily depletes ozone.

    “The Australian fires of 2020 were really a wake-up call for the science community,” says Susan Solomon, the Lee and Geraldine Martin Professor of Environmental Studies at MIT and a leading climate scientist who first identified the chemicals responsible for the Antarctic ozone hole. “The effect of wildfires was not previously accounted for in [projections of] ozone recovery. And I think that effect may depend on whether fires become more frequent and intense as the planet warms.”

    The study is led by Solomon and MIT research scientist Kane Stone, along with collaborators from the Institute for Environmental and Climate Research in Guangzhou, China; the U.S. National Oceanic and Atmospheric Administration; the U.S. National Center for Atmospheric Research; and Colorado State University.

    Chlorine cascade

    The new study expands on a 2022 discovery by Solomon and her colleagues, in which they first identified a chemical link between wildfires and ozone depletion. The researchers found that chlorine-containing compounds, originally emitted by factories in the form of chlorofluorocarbons (CFCs), could react with the surface of fire aerosols. This interaction, they found, set off a chemical cascade that produced chlorine monoxide — the ultimate ozone-depleting molecule. Their results showed that the Australian wildfires likely depleted ozone through this newly identified chemical reaction.

    “But that didn’t explain all the changes that were observed in the stratosphere,” Solomon says. “There was a whole bunch of chlorine-related chemistry that was totally out of whack.”

    In the new study, the team took a closer look at the composition of molecules in the stratosphere following the Australian wildfires. They combed through three independent sets of satellite data and observed that in the months following the fires, concentrations of hydrochloric acid dropped significantly at mid-latitudes, while chlorine monoxide spiked.

    Hydrochloric acid (HCl) is present in the stratosphere as CFCs break down naturally over time. As long as chlorine is bound in the form of HCl, it doesn’t have a chance to destroy ozone. But if HCl breaks apart, the freed chlorine can react with ozone to form ozone-depleting chlorine monoxide.
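
    The chemistry at work is the textbook chlorine catalytic cycle, a standard simplification rather than a result specific to this study: each freed chlorine atom can destroy many ozone molecules, because the cycle regenerates it.

    ```
    Cl  + O3  ->  ClO + O2     (chlorine destroys an ozone molecule)
    ClO + O   ->  Cl  + O2     (the chlorine atom is regenerated)
    ----------------------------------------------------------------
    net:  O3 + O  ->  2 O2     (ozone is lost; chlorine is recycled)
    ```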

    In the polar regions, HCl can break apart when it interacts with the surface of cloud particles at frigid temperatures of about 155 kelvins. However, this reaction was not expected to occur at mid-latitudes, where temperatures are much warmer.

    “The fact that HCl at mid-latitudes dropped by this unprecedented amount was to me kind of a danger signal,” Solomon says.

    She wondered: What if HCl could also interact with smoke particles, at warmer temperatures and in a way that released chlorine to destroy ozone? If such a reaction were possible, it would explain the imbalance of molecules and much of the ozone depletion observed following the Australian wildfires.

    Smoky drift

    Solomon and her colleagues dug through the chemical literature to see what sort of organic molecules could react with HCl at warmer temperatures to break it apart.

    “Lo and behold, I learned that HCl is extremely soluble in a whole broad range of organic species,” Solomon says. “It likes to glom on to lots of compounds.”

    The question, then, was whether the Australian wildfires released any of those compounds that could have triggered HCl’s breakup and any subsequent depletion of ozone. When the team looked at the composition of smoke particles in the first days after the fires, the picture was anything but clear.

    “I looked at that stuff and threw up my hands and thought, there’s so much stuff in there, how am I ever going to figure this out?” Solomon recalls. “But then I realized it had actually taken some weeks before you saw the HCl drop, so you really need to look at the data on aged wildfire particles.”

    When the team expanded their search, they found that smoke particles persisted over months, circulating in the stratosphere at mid-latitudes, in the same regions and times when concentrations of HCl dropped.

    “It’s the aged smoke particles that really take up a lot of the HCl,” Solomon says. “And then you get, amazingly, the same reactions that you get in the ozone hole, but over mid-latitudes, at much warmer temperatures.”

    When the team incorporated this new chemical reaction into a model of atmospheric chemistry, and simulated the conditions of the Australian wildfires, they observed a 5 percent depletion of ozone throughout the stratosphere at mid-latitudes, and a 10 percent widening of the ozone hole over Antarctica.

    The reaction with HCl is likely the main pathway by which wildfires can deplete ozone. But Solomon guesses there may be other chlorine-containing compounds drifting in the stratosphere that wildfires could unlock.

    “There’s now sort of a race against time,” Solomon says. “Hopefully, chlorine-containing compounds will have been destroyed before the frequency of fires increases with climate change. This is all the more reason to be vigilant about global warming and these chlorine-containing compounds.”

    This research was supported, in part, by NASA and the U.S. National Science Foundation.

  • A new way to assess radiation damage in reactors

    A new method could greatly reduce the time and expense needed for certain important safety checks in nuclear power reactors. The approach could save money and increase total power output in the short run, and it might increase plants’ safe operating lifetimes in the long run.

    One of the most effective ways to control greenhouse gas emissions, many analysts argue, is to prolong the lifetimes of existing nuclear power plants. But extending these plants beyond their originally permitted operating lifetimes requires monitoring the condition of many of their critical components to ensure that damage from heat and radiation has not led, and will not lead, to unsafe cracking or embrittlement.

    Today, testing of a reactor’s stainless steel components — which make up much of the plumbing systems that prevent heat buildup, as well as many other parts — requires removing test pieces, known as coupons, of the same kind of steel that are left adjacent to the actual components so they experience the same conditions. Or, it requires the removal of a tiny piece of the actual operating component. Both approaches are done during costly shutdowns of the reactor, prolonging these scheduled outages and costing millions of dollars per day.

    Now, researchers at MIT and elsewhere have come up with a new, inexpensive, hands-off test that can produce similar information about the condition of these reactor components, with far less time required during a shutdown. The findings are reported today in the journal Acta Materialia in a paper by MIT professor of nuclear science and engineering Michael Short; Saleem Al Dajani ’19, SM ’20, who did his master’s work at MIT on this project and is now a doctoral student at King Abdullah University of Science and Technology (KAUST) in Saudi Arabia; and 13 others at MIT and other institutions.

    The test involves aiming laser beams at the stainless steel material to generate surface acoustic waves (SAWs) on its surface. Another set of laser beams is then used to detect and measure the frequencies of these SAWs. Tests on material aged to match the conditions inside nuclear power plants showed that the waves produced a distinctive double-peaked spectral signature when the material was degraded.

    Short and Al Dajani embarked on the project in 2018, looking for a more rapid way to detect a specific kind of degradation, called spinodal decomposition, that can take place in austenitic stainless steel, which is used for components such as the 2- to 3-foot-wide pipes that carry coolant water to and from the reactor core. This degradation can lead to embrittlement, cracking, and potential failure in the event of an emergency.

    While spinodal decomposition is not the only type of degradation that can occur in reactor components, it is a primary concern for the lifetime and sustainability of nuclear reactors, Short says.

    “We were looking for a signal that can link material embrittlement with properties we can measure, that can be used to estimate lifetimes of structural materials,” Al Dajani says.

    They decided to try a technique Short and his students and collaborators had expanded upon, called transient grating spectroscopy, or TGS, on samples of reactor materials known to have experienced spinodal decomposition as a result of their reactor-like thermal aging history. The method uses laser beams to stimulate, and then measure, SAWs on a material. The idea was that the decomposition should slow down the rate of heat flow through the material, and that this slowdown would be detectable by the TGS method.

    However, it turns out there was no such slowdown. “We went in with a hypothesis about what we would see, and we were wrong,” Short says.

    That’s often the way things work out in science, he says. “You go in guns blazing, looking for a certain thing, for a great reason, and you turn out to be wrong. But if you look carefully, you find other patterns in the data that reveal what nature actually has to say.”

    Instead, what showed up in the data was that, while a material would usually produce a single frequency peak for its SAWs, the degraded samples showed a splitting into two peaks.

    “It was a very clear pattern in the data,” Short recalls. “We just didn’t expect it, but it was right there screaming at us in the measurements.”

    Cast austenitic stainless steels like those used in reactor components are what’s known as duplex steels, actually a mixture of two different crystal structures in the same material by design. But while one of the two types is quite impervious to spinodal decomposition, the other is quite vulnerable to it. When the material starts to degrade, the difference shows up in the different frequency responses of the material, which is what the team found in their data.
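
    As a rough illustration of that signature, consider the following sketch, which is not the team’s analysis code: it synthesizes a toy TGS trace as a sum of decaying sinusoids and counts spectral peaks. All frequencies, decay rates, and thresholds here are invented for demonstration.

    ```python
    import numpy as np

    t = np.linspace(0, 100, 8192)   # time in microseconds

    def saw_spectrum(freqs_mhz):
        """FFT magnitude of a sum of decaying sinusoids (a crude stand-in for a TGS trace)."""
        sig = sum(np.exp(-0.05 * t) * np.sin(2 * np.pi * f * t) for f in freqs_mhz)
        return np.fft.rfftfreq(t.size, d=t[1] - t[0]), np.abs(np.fft.rfft(sig))

    def count_peaks(spec, frac=0.5):
        """Count local maxima taller than `frac` of the highest peak."""
        thr = frac * spec.max()
        return sum(spec[i] > thr and spec[i] > spec[i - 1] and spec[i] > spec[i + 1]
                   for i in range(1, spec.size - 1))

    _, pristine = saw_spectrum([0.50])          # healthy sample: one effective SAW frequency
    _, degraded = saw_spectrum([0.45, 0.60])    # degraded: the two phases respond differently
    print(count_peaks(pristine), count_peaks(degraded))   # expect: 1 2
    ```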

    That finding was a total surprise, though. “Some of my current and former students didn’t believe it was happening,” Short says. “We were unable to convince our own team this was happening, with the initial statistics we had.” So, they went back and carried out further tests, which continued to strengthen the significance of the results. They reached a point where the confidence level was 99.9 percent that spinodal decomposition was indeed coincident with the wave peak separation.

    “Our discussions with those who opposed our initial hypotheses ended up taking our work to the next level,” Al Dajani says.

    The tests they did used large lab-based lasers and optical systems, so the next step, which the researchers are hard at work on, is miniaturizing the whole system into something that can be an easily portable test kit to use to check reactor components on-site, reducing the length of shutdowns. “We’re making great strides, but we still have some way to go,” he says.

    But when they achieve that next step, he says, it could make a significant difference. “Every day that your nuclear plant goes down, for a typical gigawatt-scale reactor, you lose about $2 million a day in lost electricity,” Al Dajani says, “so shortening outages is a huge thing in the industry right now.”

    He adds that the team’s goal was to find ways to enable existing plants to operate longer: “Let them be down for less time and be as safe or safer than they are right now — not cutting corners, but using smart science to get us the same information with far less effort.” And that’s what this new technique seems to offer.

    Short hopes that this could help to enable the extension of power plant operating licenses for some additional decades without compromising safety, by enabling frequent, simple and inexpensive testing of the key components. Existing, large-scale plants “generate just shy of a billion dollars in carbon-free electricity per plant each year,” he says, whereas bringing a new plant online can take more than a decade. “To bridge that gap, keeping our current nukes online is the single biggest thing we can do to fight climate change.”

    The team included researchers at MIT, Idaho National Laboratory, Manchester University and Imperial College London in the UK, Oak Ridge National Laboratory, the Electric Power Research Institute, Northeastern University, the University of California at Berkeley, and KAUST. The work was supported by the International Design Center at MIT and the Singapore University of Technology and Design, the U.S. Nuclear Regulatory Commission, and the U.S. National Science Foundation.

  • Engineers solve a mystery on the path to smaller, lighter batteries

    A discovery by MIT researchers could finally unlock the door to the design of a new kind of rechargeable lithium battery that is more lightweight, compact, and safe than current versions, and that has been pursued by labs around the world for years.

    The key to this potential leap in battery technology is replacing the liquid electrolyte that sits between the positive and negative electrodes with a much thinner, lighter layer of solid ceramic material, and replacing one of the electrodes with solid lithium metal. This would greatly reduce the overall size and weight of the battery and remove the safety risk associated with liquid electrolytes, which are flammable. But that quest has been beset with one big problem: dendrites.

    Dendrites, whose name comes from the Greek word for tree, are projections of metal that can build up on the lithium surface and penetrate into the solid electrolyte, eventually crossing from one electrode to the other and shorting out the battery cell. Researchers haven’t been able to agree on what gives rise to these metal filaments, nor has there been much progress on how to prevent them and thus make lightweight solid-state batteries a practical option.

    The new research, being published today in the journal Joule in a paper by MIT Professor Yet-Ming Chiang, graduate student Cole Fincher, and five others at MIT and Brown University, seems to resolve the question of what causes dendrite formation. It also shows how dendrites can be prevented from crossing through the electrolyte.

    Chiang says that in the group’s earlier work, they made a “surprising and unexpected” finding: the hard, solid electrolyte material used for a solid-state battery can be penetrated by lithium, which is a very soft metal, during the process of charging and discharging the battery, as ions of lithium move between the two sides.

    This shuttling back and forth of ions causes the volume of the electrodes to change. That inevitably causes stresses in the solid electrolyte, which has to remain fully in contact with both of the electrodes that it is sandwiched between. “To deposit this metal, there has to be an expansion of the volume because you’re adding new mass,” Chiang says. “So, there’s an increase in volume on the side of the cell where the lithium is being deposited. And if there are even microscopic flaws present, this will generate a pressure on those flaws that can cause cracking.”

    Those stresses, the team has now shown, cause the cracks that allow dendrites to form. The solution to the problem turns out to be more stress, applied in just the right direction and with the right amount of force.

    Previously, some researchers thought that dendrites formed by a purely electrochemical process rather than a mechanical one; the team’s experiments demonstrate that it is mechanical stresses that cause the problem.

    The process of dendrite formation normally takes place deep within the opaque materials of the battery cell and cannot be observed directly, so Fincher developed a way of making thin cells using a transparent electrolyte, allowing the whole process to be directly seen and recorded. “You can see what happens when you put a compression on the system, and you can see whether or not the dendrites behave in a way that’s commensurate with a corrosion process or a fracture process,” he says.

    The team demonstrated that they could directly manipulate the growth of dendrites simply by applying and releasing pressure, causing the dendrites to zig and zag in perfect alignment with the direction of the force.

    Applying mechanical stresses to the solid electrolyte doesn’t eliminate the formation of dendrites, but it does control the direction of their growth. This means they can be directed to remain parallel to the two electrodes and prevented from ever crossing to the other side, and thus rendered harmless.

    In their tests, the researchers used pressure induced by bending the material, which was formed into a beam with a weight at one end. But they say that in practice, there could be many different ways of producing the needed stress. For example, the electrolyte could be made with two layers of material that have different amounts of thermal expansion, so that there is an inherent bending of the material, as is done in some thermostats.

    Another approach would be to “dope” the material with atoms that would become embedded in it, distorting it and leaving it in a permanently stressed state. This is the same method used to produce the super-hard glass used in the screens of smartphones and tablets, Chiang explains. And the amount of pressure needed is not extreme: The experiments showed that pressures of 150 to 200 megapascals were sufficient to stop the dendrites from crossing the electrolyte.
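
    For a sense of the scale involved, here is a back-of-envelope sketch with hypothetical dimensions not taken from the paper. For an end-loaded cantilever of length L, width b, and thickness h, the peak surface stress is sigma = 6FL/(b h^2), so loads of a few newtons on a small strip already reach the quoted range.

    ```python
    # Illustrative only: hypothetical beam dimensions, not the researchers' setup.

    def max_bending_stress_mpa(force_n, length_m, width_m, thickness_m):
        """Peak surface stress (MPa) at the clamped end of an end-loaded rectangular cantilever."""
        sigma_pa = 6 * force_n * length_m / (width_m * thickness_m ** 2)
        return sigma_pa / 1e6

    # Hypothetical electrolyte strip: 10 mm long, 5 mm wide, 0.5 mm thick.
    for f in (1.0, 3.0, 5.0):   # end loads in newtons
        print(f"{f:.0f} N -> {max_bending_stress_mpa(f, 0.010, 0.005, 0.0005):.0f} MPa")
    # 1 N -> 48 MPa, 3 N -> 144 MPa, 5 N -> 240 MPa: a few newtons span the 150-200 MPa window.
    ```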

    The required pressure is “commensurate with stresses that are commonly induced in commercial film growth processes and many other manufacturing processes,” so should not be difficult to implement in practice, Fincher adds.

    In fact, a different kind of stress, called stack pressure, is often applied to battery cells, by essentially squishing the material in the direction perpendicular to the battery’s plates — somewhat like compressing a sandwich by putting a weight on top of it. It was thought that this might help prevent the layers from separating. But the experiments have now demonstrated that pressure in that direction actually exacerbates dendrite formation. “We showed that this type of stack pressure actually accelerates dendrite-induced failure,” Fincher says.

    What is needed instead is pressure along the plane of the plates, as if the sandwich were being squeezed from the sides. “What we have shown in this work is that when you apply a compressive force you can force the dendrites to travel in the direction of the compression,” Fincher says, and if that direction is along the plane of the plates, the dendrites “will never get to the other side.”

    That could finally make it practical to produce batteries using solid electrolyte and metallic lithium electrodes. Not only would these pack more energy into a given volume and weight, but they would eliminate the need for liquid electrolytes, which are flammable materials.

    Having demonstrated the basic principles involved, the team’s next step will be to try to apply these to the creation of a functional prototype battery, Chiang says, and then to figure out exactly what manufacturing processes would be needed to produce such batteries in quantity. Though they have filed for a patent, the researchers don’t plan to commercialize the system themselves, he says, as there are already companies working on the development of solid-state batteries. “I would say this is an understanding of failure modes in solid-state batteries that we believe the industry needs to be aware of and try to use in designing better products,” he says.

    The research team included Christos Athanasiou and Brian Sheldon at Brown University, and Colin Gilgenbach, Michael Wang, and W. Craig Carter at MIT. The work was supported by the U.S. National Science Foundation, the U.S. Department of Defense, the U.S. Defense Advanced Research Projects Agency, and the U.S. Department of Energy.

  • Earth can regulate its own temperature over millennia, new study finds

    The Earth’s climate has undergone some big changes, from global volcanism to planet-cooling ice ages and dramatic shifts in solar radiation. And yet life, for the last 3.7 billion years, has kept on beating.

    Now, a study by MIT researchers in Science Advances confirms that the planet harbors a “stabilizing feedback” mechanism that acts over hundreds of thousands of years to pull the climate back from the brink, keeping global temperatures within a steady, habitable range.

    Just how does it accomplish this? A likely mechanism is “silicate weathering” — a geological process in which the slow and steady weathering of silicate rocks drives chemical reactions that ultimately draw carbon dioxide out of the atmosphere and into ocean sediments, trapping the gas in rocks.

    Scientists have long suspected that silicate weathering plays a major role in regulating the Earth’s carbon cycle. The mechanism of silicate weathering could provide a geologically constant force in keeping carbon dioxide — and global temperatures — in check. But there’s never been direct evidence for the continual operation of such a feedback, until now.

    The new findings are based on a study of paleoclimate data that record changes in average global temperatures over the last 66 million years. The MIT team applied a mathematical analysis to see whether the data revealed any patterns characteristic of stabilizing phenomena that reined in global temperatures on a geologic timescale.

    They found that indeed there appears to be a consistent pattern in which the Earth’s temperature swings are dampened over timescales of hundreds of thousands of years. The duration of this effect is similar to the timescales over which silicate weathering is predicted to act.

    The results are the first to use actual data to confirm the existence of a stabilizing feedback, the mechanism of which is likely silicate weathering. This stabilizing feedback would explain how the Earth has remained habitable through dramatic climate events in the geologic past.

    “On the one hand, it’s good because we know that today’s global warming will eventually be canceled out through this stabilizing feedback,” says Constantin Arnscheidt, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “But on the other hand, it will take hundreds of thousands of years to happen, so not fast enough to solve our present-day issues.”

    The study is co-authored by Arnscheidt and Daniel Rothman, professor of geophysics at MIT.

    Stability in data

    Scientists have previously seen hints of a climate-stabilizing effect in the Earth’s carbon cycle: Chemical analyses of ancient rocks have shown that the flux of carbon in and out of Earth’s surface environment has remained relatively balanced, even through dramatic swings in global temperature. Furthermore, models of silicate weathering predict that the process should have some stabilizing effect on the global climate. And finally, the fact of the Earth’s enduring habitability points to some inherent, geologic check on extreme temperature swings.

    “You have a planet whose climate was subjected to so many dramatic external changes. Why did life survive all this time? One argument is that we need some sort of stabilizing mechanism to keep temperatures suitable for life,” Arnscheidt says. “But it’s never been demonstrated from data that such a mechanism has consistently controlled Earth’s climate.”

    Arnscheidt and Rothman sought to confirm whether a stabilizing feedback has indeed been at work, by looking at data on global temperature fluctuations through geologic history. They worked with a range of global temperature records compiled by other scientists, drawn from the chemical composition of ancient marine fossils and shells, as well as preserved Antarctic ice cores.

    “This whole study is only possible because there have been great advances in improving the resolution of these deep-sea temperature records,” Arnscheidt notes. “Now we have data going back 66 million years, with data points at most thousands of years apart.”

    Speeding to a stop

    To the data, the team applied the mathematical theory of stochastic differential equations, which is commonly used to reveal patterns in widely fluctuating datasets.

    “We realized this theory makes predictions for what you would expect Earth’s temperature history to look like if there had been feedbacks acting on certain timescales,” Arnscheidt explains.

    Using this approach, the team analyzed the history of average global temperatures over the last 66 million years, considering the entire period over different timescales, such as tens of thousands of years versus hundreds of thousands, to see whether any patterns of stabilizing feedback emerged within each timescale.

    “To some extent, it’s like your car is speeding down the street, and when you put on the brakes, you slide for a long time before you stop,” Rothman says. “There’s a timescale over which frictional resistance, or a stabilizing feedback, kicks in, when the system returns to a steady state.”

    Without stabilizing feedbacks, fluctuations of global temperature should grow with timescale. But the team’s analysis revealed a regime in which fluctuations did not grow, implying that a stabilizing mechanism reined in the climate before fluctuations grew too extreme. The timescale for this stabilizing effect — hundreds of thousands of years — coincides with what scientists predict for silicate weathering.
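
    To see why that signature points to a feedback, consider a minimal numerical sketch, our illustration rather than the authors’ analysis: a pure random walk’s fluctuations keep growing with timescale, while adding a restoring “feedback” term (an Ornstein-Uhlenbeck process) makes them flatten out beyond the feedback timescale. All parameters are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_steps, dt, sigma, feedback = 200_000, 1.0, 0.1, 1e-3   # feedback timescale ~ 1,000 steps

    noise = sigma * np.sqrt(dt) * rng.standard_normal(n_steps)
    walk = np.cumsum(noise)            # no feedback: fluctuations grow like sqrt(timescale)
    ou = np.zeros(n_steps)             # with feedback: fluctuations saturate
    for i in range(1, n_steps):
        ou[i] = ou[i - 1] - feedback * ou[i - 1] * dt + noise[i]

    for lag in (10, 100, 1_000, 10_000):
        dw = np.std(walk[lag:] - walk[:-lag])   # fluctuation amplitude at this timescale
        do = np.std(ou[lag:] - ou[:-lag])
        print(f"lag {lag:>6}: random walk {dw:6.2f}   with feedback {do:6.2f}")
    # The feedback column stops growing once the lag exceeds ~1/feedback, the kind of
    # damping signature the team searched for in the paleoclimate record.
    ```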

    Interestingly, Arnscheidt and Rothman found that on longer timescales, the data did not reveal any stabilizing feedbacks. That is, there doesn’t appear to be any recurring pull-back of global temperatures on timescales longer than a million years. Over these longer timescales, then, what has kept global temperatures in check?

    “There’s an idea that chance may have played a major role in determining why, after more than 3 billion years, life still exists,” Rothman offers.

    In other words, as the Earth’s temperature fluctuates over longer stretches, those fluctuations may just happen to be small enough, in the geologic sense, to stay within a range that a stabilizing feedback, such as silicate weathering, could periodically keep in check, and, more to the point, within a habitable zone.

    “There are two camps: Some say random chance is a good enough explanation, and others say there must be a stabilizing feedback,” Arnscheidt says. “We’re able to show, directly from data, that the answer is probably somewhere in between. In other words, there was some stabilization, but pure luck likely also played a role in keeping Earth continuously habitable.”

    This research was supported, in part, by a MathWorks fellowship and the National Science Foundation.

  • Keeping indoor humidity levels at a “sweet spot” may reduce spread of Covid-19

    We know proper indoor ventilation is key to reducing the spread of Covid-19. Now, a study by MIT researchers finds that indoor relative humidity may also influence transmission of the virus.

    Relative humidity is the amount of moisture in the air compared to the total moisture the air can hold at a given temperature before saturating and forming condensation.
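
    In symbols, using the standard definition (not specific to this study):

    ```latex
    \mathrm{RH}(T) = 100\% \times \frac{e}{e_s(T)}
    ```

    where $e$ is the partial pressure of water vapor in the air and $e_s(T)$ is the saturation vapor pressure at temperature $T$.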

    In a study appearing today in the Journal of the Royal Society Interface, the MIT team reports that maintaining an indoor relative humidity between 40 and 60 percent is associated with relatively lower rates of Covid-19 infections and deaths, while indoor conditions outside this range are associated with worse Covid-19 outcomes. To put this into perspective, most people are comfortable between 30 and 50 percent relative humidity, and an airplane cabin is at around 20 percent relative humidity.

    The findings are based on the team’s analysis of Covid-19 data combined with meteorological measurements from 121 countries, from January 2020 through August 2020. Their study suggests a strong connection between regional outbreaks and indoor relative humidity.

    In general, the researchers found that whenever a region experienced a rise in Covid-19 cases and deaths prevaccination, the estimated indoor relative humidity in that region, on average, was either lower than 40 percent or higher than 60 percent regardless of season. Nearly all regions in the study experienced fewer Covid-19 cases and deaths during periods when estimated indoor relative humidity was within a “sweet spot” between 40 and 60 percent.

    “There’s potentially a protective effect of this intermediate indoor relative humidity,” suggests lead author Connor Verheyen, a PhD student in medical engineering and medical physics in the Harvard-MIT Program in Health Sciences and Technology.

    “Indoor ventilation is still critical,” says co-author Lydia Bourouiba, director of the MIT Fluid Dynamics of Disease Transmission Laboratory and associate professor in the departments of Civil and Environmental Engineering and Mechanical Engineering, and at the Institute for Medical Engineering and Science at MIT. “However, we find that maintaining an indoor relative humidity in that sweet spot — of 40 to 60 percent — is associated with reduced Covid-19 cases and deaths.”

    Seasonal swing?

    Since the start of the Covid-19 pandemic, scientists have considered the possibility that the virus’ virulence swings with the seasons. Infections and associated deaths appear to rise in winter and ebb in summer. But studies looking to link the virus’ patterns to seasonal outdoor conditions have yielded mixed results.

    Verheyen and Bourouiba examined whether Covid-19 is influenced instead by indoor — rather than outdoor — conditions, and, specifically, relative humidity. After all, they note that most societies spend more than 90 percent of their time indoors, where the majority of viral transmission has been shown to occur. What’s more, indoor conditions can be quite different from outdoor conditions as a result of climate control systems, such as heaters that significantly dry out indoor air.

    Could indoor relative humidity have affected the spread and severity of Covid-19 around the world? And could it help explain the differences in health outcomes from region to region?

    Tracking humidity

    For answers, the team focused on the early period of the pandemic, when vaccines were not yet available, reasoning that vaccinated populations would obscure the influence of any other factor such as indoor humidity. They gathered global Covid-19 data, including case counts and reported deaths, from January 2020 to August 2020, and identified countries with at least 50 deaths, indicating at least one outbreak had occurred in those countries.

    In all, they focused on 121 countries where Covid-19 outbreaks occurred. For each country, they also tracked the local Covid-19 related policies, such as isolation, quarantine, and testing measures, and their statistical association with Covid-19 outcomes.

    For each day that Covid-19 data was available, they used meteorological data to calculate a country’s outdoor relative humidity. They then estimated the average indoor relative humidity based on outdoor relative humidity and guidelines on temperature ranges for human comfort. For instance, guidelines report that humans are comfortable between 66 and 77 degrees Fahrenheit indoors. They also assumed that, on average, most populations have the means to heat indoor spaces to comfortable temperatures. Finally, they collected experimental data, which they used to validate their estimation approach.

    For every instance when outdoor temperatures were below the typical human comfort range, they assumed indoor spaces were heated to reach that comfort range. Based on the added heating, they calculated the associated drop in indoor relative humidity.
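
    A minimal sketch of that estimation step, assuming, as the study does, that heating warms the air without adding moisture: the Magnus approximation for saturation vapor pressure is standard, but the comfort temperature and the sample winter conditions below are illustrative choices of ours.

    ```python
    import math

    def saturation_vapor_pressure_hpa(t_celsius):
        """Magnus approximation for the saturation vapor pressure of water (hPa)."""
        return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

    def indoor_rh_after_heating(outdoor_rh_pct, outdoor_t_c, indoor_t_c=21.0):
        """Indoor RH after heating outdoor air to indoor_t_c at constant moisture content."""
        vapor_pressure = outdoor_rh_pct / 100 * saturation_vapor_pressure_hpa(outdoor_t_c)
        return 100 * vapor_pressure / saturation_vapor_pressure_hpa(indoor_t_c)

    # A winter day at 80% RH and 0 C outside falls far below the 40-60% "sweet spot"
    # once that air is heated to about 21 C indoors.
    print(f"{indoor_rh_after_heating(80, 0.0):.0f}%")   # roughly 20%
    ```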

    In warmer times, outdoor and indoor relative humidity for each country were about the same, but the two quickly diverged in colder times. While outdoor humidity remained around 50 percent throughout the year, indoor relative humidity for countries in the Northern and Southern Hemispheres dropped below 40 percent in their respective colder periods, when Covid-19 cases and deaths also spiked in these regions.

    For countries in the tropics, relative humidity was about the same indoors and outdoors throughout the year, with a gradual rise indoors during the region’s summer season, when high outdoor humidity likely raised the indoor relative humidity over 60 percent. They found this rise mirrored the gradual increase in Covid-19 deaths in the tropics.

    “We saw more reported Covid-19 deaths on the low and high end of indoor relative humidity, and less in this sweet spot of 40 to 60 percent,” Verheyen says. “This intermediate relative humidity window is associated with a better outcome, meaning fewer deaths and a deceleration of the pandemic.”

    “We were very skeptical initially, especially as the Covid-19 data can be noisy and inconsistent,” Bourouiba says. “We thus were very thorough trying to poke holes in our own analysis, using a range of approaches to test the limits and robustness of the findings, including taking into account factors such as government intervention. Despite all our best efforts, we found that even when considering countries with very strong versus very weak Covid-19 mitigation policies, or wildly different outdoor conditions, indoor — rather than outdoor — relative humidity maintains an underlying strong and robust link with Covid-19 outcomes.”

    It’s still unclear how indoor relative humidity affects Covid-19 outcomes. The team’s follow-up studies suggest that pathogens may survive longer in respiratory droplets in both very dry and very humid conditions.

    “Our ongoing work shows that there are emerging hints of mechanistic links between these factors,” Bourouiba says. “For now, however, we can say that indoor relative humidity emerges in a robust manner as another mitigation lever that organizations and individuals can monitor, adjust, and maintain in the optimal 40 to 60 percent range, in addition to proper ventilation.”

    This research was made possible, in part, by an MIT Alumni Class fund, the Richard and Susan Smith Family Foundation, the National Institutes of Health, and the National Science Foundation.

  • Ocean microbes get their diet through a surprising mix of sources, study finds

    One of the smallest and mightiest organisms on the planet is a plant-like bacterium known to marine biologists as Prochlorococcus. The green-tinted microbe measures less than a micron across, and its populations suffuse the upper layers of the ocean, where a single teaspoon of seawater can hold millions of the tiny organisms.

    Prochlorococcus grows through photosynthesis, using sunlight to convert the atmosphere’s carbon dioxide into organic carbon molecules. The microbe is responsible for 5 percent of the world’s photosynthesizing activity, and scientists have assumed that photosynthesis is the microbe’s go-to strategy for acquiring the carbon it needs to grow.

    But a new MIT study appearing today in Nature Microbiology finds that Prochlorococcus relies on another carbon-feeding strategy more than previously thought.

    Organisms that use a mix of strategies to acquire carbon are known as mixotrophs. Most marine plankton are mixotrophs. And while Prochlorococcus is known to occasionally dabble in mixotrophy, scientists have assumed the microbe primarily lives a phototrophic lifestyle.

    The new MIT study shows that in fact, Prochlorococcus may be more of a mixotroph than it lets on. The microbe may get as much as one-third of its carbon through a second strategy: consuming the dissolved remains of other dead microbes.

    The new estimate may have implications for climate models, as the microbe is a significant force in capturing and “fixing” carbon in the Earth’s atmosphere and ocean.

    “If we wish to predict what will happen to carbon fixation in a different climate, or predict where Prochlorococcus will or will not live in the future, we probably won’t get it right if we’re missing a process that accounts for one-third of the population’s carbon supply,” says Mick Follows, a professor in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS), and its Department of Civil and Environmental Engineering.

    The study’s co-authors include first author and MIT postdoc Zhen Wu, along with collaborators from the University of Haifa, the Leibniz-Institute for Baltic Sea Research, the Leibniz-Institute of Freshwater Ecology and Inland Fisheries, and Potsdam University.

    Persistent plankton

    Since Prochlorococcus was first discovered in the Sargasso Sea in 1986, by MIT Institute Professor Sallie “Penny” Chisholm and others, the microbe has been observed throughout the world’s oceans, inhabiting the upper sunlit layers ranging from the surface down to about 160 meters. Within this range, light levels vary, and the microbe has evolved a number of ways to photosynthesize carbon in even low-lit regions.

    The organism has also evolved ways to consume organic compounds including glucose and certain amino acids, which could help the microbe survive for limited periods of time in dark ocean regions. But surviving on organic compounds alone is a bit like only eating junk food, and there is evidence that Prochlorococcus will die after a week in regions where photosynthesis is not an option.

    And yet, researchers including Daniel Sher of the University of Haifa, who is a co-author of the new study, have observed healthy populations of Prochlorococcus that persist deep in the sunlit zone, where the light intensity should be too low to maintain a population. This suggests that the microbes must be switching to a non-photosynthesizing, mixotrophic lifestyle in order to consume other organic sources of carbon.

    “It seems that at least some Prochlorococcus are using existing organic carbon in a mixotrophic way,” Follows says. “That stimulated the question: How much?”

    What light cannot explain

    In their new paper, Follows, Wu, Sher, and their colleagues looked to quantify the amount of carbon that Prochlorococcus is consuming through processes other than photosynthesis.

    The team looked first to measurements taken by Sher’s team, which previously took ocean samples at various depths in the Mediterranean Sea and measured the concentration of phytoplankton, including Prochlorococcus, along with the associated intensity of light and the concentration of nitrogen — an essential nutrient that is richly available in deeper layers of the ocean and that plankton can assimilate to make proteins.

    Wu and Follows used this data, and similar information from the Pacific Ocean, along with previous work from Chisholm’s lab, which established the rate of photosynthesis that Prochlorococcus could carry out in a given intensity of light.

    “We converted that light intensity profile into a potential growth rate — how fast the population of Prochlorococcus could grow if it was acquiring all its carbon by photosynthesis, and light is the limiting factor,” Follows explains.

    The team then compared this calculated rate to growth rates that were previously observed in the Pacific Ocean by several other research teams.

    “This data showed that, below a certain depth, there’s a lot of growth happening that photosynthesis simply cannot explain,” Follows says. “Some other process must be at work to make up the difference in carbon supply.”
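
    The comparison logic can be sketched in a few lines. Every number below is invented for illustration, and this is not the study’s model: light decays roughly exponentially with depth, photosynthesis-supported growth saturates with light, and any observed growth beyond that potential must be fed by other carbon sources.

    ```python
    import math

    SURFACE_LIGHT = 1000.0   # irradiance at the surface (arbitrary units)
    ATTENUATION = 0.04       # per meter (hypothetical attenuation coefficient)
    MAX_GROWTH = 0.6         # per day, light-saturated (hypothetical)
    HALF_SAT = 50.0          # irradiance at half-maximal growth (hypothetical)

    def potential_growth(depth_m):
        """Growth rate if photosynthesis supplied all carbon (saturating light curve)."""
        light = SURFACE_LIGHT * math.exp(-ATTENUATION * depth_m)
        return MAX_GROWTH * light / (HALF_SAT + light)

    observed = {100: 0.20, 120: 0.13, 140: 0.08}   # hypothetical observed rates (per day)
    for depth, obs in observed.items():
        photo = potential_growth(depth)
        extra = max(0.0, obs - photo)
        print(f"{depth} m: photosynthesis explains {photo:.3f}/day of {obs:.3f}/day; "
              f"{100 * extra / obs:.0f}% must come from other carbon sources")
    ```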

    The researchers inferred that, in deeper, darker regions of the ocean, Prochlorococcus populations are able to survive and thrive by resorting to mixotrophy, including consuming organic carbon from detritus. Specifically, the microbe may be carrying out osmotrophy — a process by which an organism passively absorbs organic carbon molecules via osmosis.

    Judging by how fast the microbe is estimated to be growing below the sunlit zone, the team calculates that Prochlorococcus obtains up to one-third of its carbon diet through mixotrophic strategies.

    “It’s kind of like going from a specialist to a generalist lifestyle,” Follows says. “If I only eat pizza, then if I’m 20 miles from a pizza place, I’m in trouble, whereas if I eat burgers as well, I could go to the nearby McDonald’s. People had thought of Prochlorococcus as a specialist, where they do this one thing (photosynthesis) really well. But it turns out they may have more of a generalist lifestyle than we previously thought.”

    Chisholm, who has both literally and figuratively written the book on Prochlorococcus, says the group’s findings “expand the range of conditions under which their populations can not only survive, but also thrive. This study changes the way we think about the role of Prochlorococcus in the microbial food web.”

    This research was supported, in part, by the Israel Science Foundation, the U.S. National Science Foundation, and the Simons Foundation.

  • Ocean scientists measure sediment plume stirred up by deep-sea-mining vehicle

    What will be the impact on the ocean if humans are to mine the deep sea? It’s a question that’s gaining urgency as interest in marine minerals has grown.

    The ocean’s deep-sea bed is scattered with ancient, potato-sized rocks called “polymetallic nodules” that contain nickel and cobalt — minerals in high demand for manufacturing batteries, such as those that power electric vehicles and store renewable energy, a demand driven in part by factors such as increasing urbanization. The deep ocean contains vast quantities of mineral-laden nodules, but the impact of mining the ocean floor is both unknown and highly contested.

    Now MIT ocean scientists have shed some light on the topic, with a new study on the cloud of sediment that a collector vehicle would stir up as it picks up nodules from the seafloor.

    The study, appearing today in Science Advances, reports the results of a 2021 research cruise to a region of the Pacific Ocean known as the Clarion Clipperton Zone (CCZ), where polymetallic nodules abound. There, researchers equipped a pre-prototype collector vehicle with instruments to monitor sediment plume disturbances as the vehicle maneuvered across the seafloor, 4,500 meters below the ocean’s surface. Through a sequence of carefully conceived maneuvers, the MIT scientists used the vehicle to monitor its own sediment cloud and measure its properties.

    Their measurements showed that the vehicle created a dense plume of sediment in its wake, which spread under its own weight, in a phenomenon known in fluid dynamics as a “turbidity current.” As it gradually dispersed, the plume remained relatively low, staying within 2 meters of the seafloor, as opposed to immediately lofting higher into the water column as had been postulated.

    “It’s quite a different picture of what these plumes look like, compared to some of the conjecture,” says study co-author Thomas Peacock, professor of mechanical engineering at MIT. “Modeling efforts of deep-sea mining plumes will have to account for these processes that we identified, in order to assess their extent.”

    The study’s co-authors include lead author Carlos Muñoz-Royo, Raphael Ouillon, and Souha El Mousadik of MIT; and Matthew Alford of the Scripps Institution of Oceanography.

    Deep-sea maneuvers

    To collect polymetallic nodules, some mining companies are proposing to deploy tractor-sized vehicles to the bottom of the ocean. The vehicles would vacuum up the nodules along with some sediment along their path. The nodules and sediment would then be separated inside of the vehicle, with the nodules sent up through a riser pipe to a surface vessel, while most of the sediment would be discharged immediately behind the vehicle.

    Peacock and his group have previously studied the dynamics of the sediment plume that associated surface operation vessels may pump back into the ocean. In their current study, they focused on the opposite end of the operation, to measure the sediment cloud created by the collectors themselves.

    In April 2021, the team joined an expedition led by Global Sea Mineral Resources NV (GSR), a Belgian marine engineering contractor that is exploring the CCZ for ways to extract metal-rich nodules. A European-based science team, Mining Impacts 2, also conducted separate studies in parallel. The cruise was the first in over 40 years to test a “pre-prototype” collector vehicle in the CCZ. The machine, called Patania II, stands about 3 meters high, spans 4 meters wide, and is about one-third the size of what a commercial-scale vehicle is expected to be.

    While the contractor tested the vehicle’s nodule-collecting performance, the MIT scientists monitored the sediment cloud created in the vehicle’s wake. They did so using two maneuvers that the vehicle was programmed to take: a “selfie,” and a “drive-by.”

    Both maneuvers began in the same way, with the vehicle setting out in a straight line, all its suction systems turned on. The researchers let the vehicle drive along for 100 meters, collecting any nodules in its path. Then, in the “selfie” maneuver, they directed the vehicle to turn off its suction systems and double back around to drive through the cloud of sediment it had just created. The vehicle’s installed sensors measured the concentration of sediment during this “selfie” maneuver, allowing the scientists to monitor the cloud within minutes of the vehicle stirring it up.

    Video: The Patania II pre-prototype collector vehicle entering, driving through, and leaving the low-lying turbidity current plume as part of a selfie operation. For scale, the instrumentation post attached to the front of the vehicle reaches about 3 meters above the seabed. The movie is sped up by a factor of 20. Credit: Global Sea Mineral Resources

    For the “drive-by” maneuver, the researchers placed a sensor-laden mooring 50 to 100 meters from the vehicle’s planned tracks. As the vehicle drove along collecting nodules, it created a plume that eventually spread past the mooring after an hour or two. This “drive-by” maneuver enabled the team to monitor the sediment cloud over a longer timescale of several hours, capturing the plume evolution.

    Out of steam

    Over multiple vehicle runs, Peacock and his team were able to measure and track the evolution of the sediment plume created by the deep-sea-mining vehicle.

    “We saw that the vehicle would be driving in clear water, seeing the nodules on the seabed,” Peacock says. “And then suddenly there’s this very sharp sediment cloud coming through when the vehicle enters the plume.”

    From the selfie views, the team observed a behavior that was predicted by some of their previous modeling studies: The vehicle stirred up a heavy amount of sediment that was dense enough that, even after some mixing with the surrounding water, it generated a plume that behaved almost as a separate fluid, spreading under its own weight in what’s known as a turbidity current.

    “The turbidity current spreads under its own weight for some time, tens of minutes, but as it does so, it’s depositing sediment on the seabed and eventually running out of steam,” Peacock says. “After that, the ocean currents get stronger than the natural spreading, and the sediment transitions to being carried by the ocean currents.”
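
    A back-of-envelope sketch of that handoff, ours rather than the study’s model, uses the standard gravity-current scaling: the front advances at roughly u = Fr * sqrt(g' h), where the reduced gravity g' = g * (excess density) / (water density) shrinks as sediment settles out. The excess densities and plume height below are hypothetical.

    ```python
    import math

    G = 9.81             # m/s^2
    RHO_WATER = 1025.0   # kg/m^3, seawater
    FROUDE = 1.0         # order-one front Froude number

    def front_speed(excess_density_kg_m3, height_m):
        """Gravity-current front speed from the buoyancy-velocity scaling."""
        g_prime = G * excess_density_kg_m3 / RHO_WATER
        return FROUDE * math.sqrt(g_prime * height_m)

    # A 2-m-high plume carrying a few kg/m^3 of excess sediment load:
    for excess in (0.5, 2.0, 5.0):
        print(f"excess density {excess} kg/m^3 -> front speed ~{100 * front_speed(excess, 2.0):.0f} cm/s")
    # As sediment settles out, the excess density and spreading speed drop; once the speed
    # falls below ambient deep-ocean currents of a few cm/s, the currents take over.
    ```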

    By the time the sediment drifted past the mooring, the researchers estimate that 92 to 98 percent of the sediment either settled back down or remained within 2 meters of the seafloor as a low-lying cloud. There is, however, no guarantee that the sediment always stays there rather than drifting further up in the water column. Recent and future studies by the research team are looking into this question, with the goal of consolidating understanding for deep-sea mining sediment plumes.

    “Our study clarifies the reality of what the initial sediment disturbance looks like when you have a certain type of nodule mining operation,” Peacock says. “The big takeaway is that there are complex processes like turbidity currents that take place when you do this kind of collection. So, any effort to model a deep-sea-mining operation’s impact will have to capture these processes.”

    “Sediment plumes produced by deep-seabed mining are a major concern with regards to environmental impact, as they will spread over potentially large areas beyond the actual site of mining and affect deep-sea life,” says Henko de Stigter, a marine geologist at the Royal Netherlands Institute for Sea Research, who was not involved in the research. “The current paper provides essential insight in the initial development of these plumes.”

    This research was supported, in part, by the National Science Foundation, ARPA-E, the 11th Hour Project, the Benioff Ocean Initiative, and Global Sea Mineral Resources. The funders had no role in any aspects of the research analysis, the research team states.