More stories

  • Engineers solve a mystery on the path to smaller, lighter batteries

    A discovery by MIT researchers could finally unlock the door to the design of a new kind of rechargeable lithium battery that is lighter, more compact, and safer than current versions, and that has been pursued by labs around the world for years.

    The key to this potential leap in battery technology is replacing the liquid electrolyte that sits between the positive and negative electrodes with a much thinner, lighter layer of solid ceramic material, and replacing one of the electrodes with solid lithium metal. This would greatly reduce the overall size and weight of the battery and remove the safety risk associated with liquid electrolytes, which are flammable. But that quest has been beset with one big problem: dendrites.

    Dendrites, whose name comes from the Greek for tree, are projections of metal that can build up on the lithium surface and penetrate into the solid electrolyte, eventually crossing from one electrode to the other and shorting out the battery cell. Researchers haven’t been able to agree on what gives rise to these metal filaments, nor has there been much progress on how to prevent them and thus make lightweight solid-state batteries a practical option.

    The new research, being published today in the journal Joule in a paper by MIT Professor Yet-Ming Chiang, graduate student Cole Fincher, and five others at MIT and Brown University, seems to resolve the question of what causes dendrite formation. It also shows how dendrites can be prevented from crossing through the electrolyte.

    Chiang says in the group’s earlier work, they made a “surprising and unexpected” finding, which was that the hard, solid electrolyte material used for a solid-state battery can be penetrated by lithium, which is a very soft metal, during the process of charging and discharging the battery, as ions of lithium move between the two sides.

    This shuttling back and forth of ions causes the volume of the electrodes to change. That inevitably causes stresses in the solid electrolyte, which has to remain fully in contact with both of the electrodes that it is sandwiched between. “To deposit this metal, there has to be an expansion of the volume because you’re adding new mass,” Chiang says. “So, there’s an increase in volume on the side of the cell where the lithium is being deposited. And if there are even microscopic flaws present, this will generate a pressure on those flaws that can cause cracking.”

    Those stresses, the team has now shown, cause the cracks that allow dendrites to form. The solution to the problem turns out to be more stress, applied in just the right direction and with the right amount of force.

    While previously, some researchers thought that dendrites formed by a purely electrochemical process, rather than a mechanical one, the team’s experiments demonstrate that it is mechanical stresses that cause the problem.

    The process of dendrite formation normally takes place deep within the opaque materials of the battery cell and cannot be observed directly, so Fincher developed a way of making thin cells using a transparent electrolyte, allowing the whole process to be directly seen and recorded. “You can see what happens when you put a compression on the system, and you can see whether or not the dendrites behave in a way that’s commensurate with a corrosion process or a fracture process,” he says.

    The team demonstrated that they could directly manipulate the growth of dendrites simply by applying and releasing pressure, causing the dendrites to zig and zag in perfect alignment with the direction of the force.

    Applying mechanical stresses to the solid electrolyte doesn’t eliminate the formation of dendrites, but it does control the direction of their growth. This means they can be directed to remain parallel to the two electrodes and prevented from ever crossing to the other side, and thus rendered harmless.

    In their tests, the researchers used pressure induced by bending the material, which was formed into a beam with a weight at one end. But they say that in practice, there could be many different ways of producing the needed stress. For example, the electrolyte could be made with two layers of material that have different amounts of thermal expansion, so that there is an inherent bending of the material, as is done in some thermostats.

    Another approach would be to “dope” the material with atoms that would become embedded in it, distorting it and leaving it in a permanently stressed state. This is the same method used to produce the super-hard glass used in the screens of smartphones and tablets, Chiang explains. And the amount of pressure needed is not extreme: The experiments showed that pressures of 150 to 200 megapascals were sufficient to stop the dendrites from crossing the electrolyte.

    The required pressure is “commensurate with stresses that are commonly induced in commercial film growth processes and many other manufacturing processes,” so should not be difficult to implement in practice, Fincher adds.

    In fact, a different kind of stress, called stack pressure, is often applied to battery cells, by essentially squishing the material in the direction perpendicular to the battery’s plates — somewhat like compressing a sandwich by putting a weight on top of it. It was thought that this might help prevent the layers from separating. But the experiments have now demonstrated that pressure in that direction actually exacerbates dendrite formation. “We showed that this type of stack pressure actually accelerates dendrite-induced failure,” Fincher says.

    What is needed instead is pressure along the plane of the plates, as if the sandwich were being squeezed from the sides. “What we have shown in this work is that when you apply a compressive force you can force the dendrites to travel in the direction of the compression,” Fincher says, and if that direction is along the plane of the plates, the dendrites “will never get to the other side.”

    That could finally make it practical to produce batteries using solid electrolyte and metallic lithium electrodes. Not only would these pack more energy into a given volume and weight, but they would eliminate the need for liquid electrolytes, which are flammable materials.

    Having demonstrated the basic principles involved, the team’s next step will be to try to apply these to the creation of a functional prototype battery, Chiang says, and then to figure out exactly what manufacturing processes would be needed to produce such batteries in quantity. Though they have filed for a patent, the researchers don’t plan to commercialize the system themselves, he says, as there are already companies working on the development of solid-state batteries. “I would say this is an understanding of failure modes in solid-state batteries that we believe the industry needs to be aware of and try to use in designing better products,” he says.

    The research team included Christos Athanasiou and Brian Sheldon at Brown University, and Colin Gilgenbach, Michael Wang, and W. Craig Carter at MIT. The work was supported by the U.S. National Science Foundation, the U.S. Department of Defense, the U.S. Defense Advanced Research Projects Agency, and the U.S. Department of Energy.

  • Earth can regulate its own temperature over millennia, new study finds

    The Earth’s climate has undergone some big changes, from global volcanism to planet-cooling ice ages and dramatic shifts in solar radiation. And yet life, for the last 3.7 billion years, has kept on beating.

    Now, a study by MIT researchers in Science Advances confirms that the planet harbors a “stabilizing feedback” mechanism that acts over hundreds of thousands of years to pull the climate back from the brink, keeping global temperatures within a steady, habitable range.

    Just how does it accomplish this? A likely mechanism is “silicate weathering” — a geological process by which the slow and steady weathering of silicate rocks involves chemical reactions that ultimately draw carbon dioxide out of the atmosphere and into ocean sediments, trapping the gas in rocks.

    Scientists have long suspected that silicate weathering plays a major role in regulating the Earth’s carbon cycle. The mechanism of silicate weathering could provide a geologically constant force in keeping carbon dioxide — and global temperatures — in check. But there’s never been direct evidence for the continual operation of such a feedback, until now.

    The new findings are based on a study of paleoclimate data that record changes in average global temperatures over the last 66 million years. The MIT team applied a mathematical analysis to see whether the data revealed any patterns characteristic of stabilizing phenomena that reined in global temperatures on a geologic timescale.

    They found that indeed there appears to be a consistent pattern in which the Earth’s temperature swings are dampened over timescales of hundreds of thousands of years. The duration of this effect is similar to the timescales over which silicate weathering is predicted to act.

    The results are the first to use actual data to confirm the existence of a stabilizing feedback, the mechanism of which is likely silicate weathering. This stabilizing feedback would explain how the Earth has remained habitable through dramatic climate events in the geologic past.

    “On the one hand, it’s good because we know that today’s global warming will eventually be canceled out through this stabilizing feedback,” says Constantin Arnscheidt, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “But on the other hand, it will take hundreds of thousands of years to happen, so not fast enough to solve our present-day issues.”

    The study is co-authored by Arnscheidt and Daniel Rothman, professor of geophysics at MIT.

    Stability in data

    Scientists have previously seen hints of a climate-stabilizing effect in the Earth’s carbon cycle: Chemical analyses of ancient rocks have shown that the flux of carbon in and out of Earth’s surface environment has remained relatively balanced, even through dramatic swings in global temperature. Furthermore, models of silicate weathering predict that the process should have some stabilizing effect on the global climate. And finally, the fact of the Earth’s enduring habitability points to some inherent, geologic check on extreme temperature swings.

    “You have a planet whose climate was subjected to so many dramatic external changes. Why did life survive all this time? One argument is that we need some sort of stabilizing mechanism to keep temperatures suitable for life,” Arnscheidt says. “But it’s never been demonstrated from data that such a mechanism has consistently controlled Earth’s climate.”

    Arnscheidt and Rothman sought to confirm whether a stabilizing feedback has indeed been at work, by looking at data of global temperature fluctuations through geologic history. They worked with a range of global temperature records compiled by other scientists, from the chemical composition of ancient marine fossils and shells, as well as preserved Antarctic ice cores.

    “This whole study is only possible because there have been great advances in improving the resolution of these deep-sea temperature records,” Arnscheidt notes. “Now we have data going back 66 million years, with data points at most thousands of years apart.”

    Speeding to a stop

    To the data, the team applied the mathematical theory of stochastic differential equations, which is commonly used to reveal patterns in widely fluctuating datasets.

    “We realized this theory makes predictions for what you would expect Earth’s temperature history to look like if there had been feedbacks acting on certain timescales,” Arnscheidt explains.

    Using this approach, the team analyzed the history of average global temperatures over the last 66 million years, considering the entire period over different timescales, such as tens of thousands of years versus hundreds of thousands, to see whether any patterns of stabilizing feedback emerged within each timescale.

    “To some extent, it’s like your car is speeding down the street, and when you put on the brakes, you slide for a long time before you stop,” Rothman says. “There’s a timescale over which frictional resistance, or a stabilizing feedback, kicks in, when the system returns to a steady state.”

    Without stabilizing feedbacks, fluctuations of global temperature should grow with timescale. But the team’s analysis revealed a regime in which fluctuations did not grow, implying that a stabilizing mechanism reined in the climate before fluctuations grew too extreme. The timescale for this stabilizing effect — hundreds of thousands of years — coincides with what scientists predict for silicate weathering.
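    The statistical signature such an analysis looks for can be illustrated with a toy simulation (a generic sketch of the idea, not the authors' actual method or data): a pure random walk's temperature swings keep growing with timescale, while adding a stabilizing restoring force (an Ornstein-Uhlenbeck process) caps the swings once the lag exceeds the feedback timescale.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n_steps, damping, noise=1.0, dt=1.0):
    """Euler-Maruyama simulation of dT = -damping*T*dt + noise*dW.

    damping = 0 gives a pure random walk (no stabilizing feedback);
    damping > 0 gives an Ornstein-Uhlenbeck process whose fluctuations
    stop growing beyond the feedback timescale ~1/damping steps.
    """
    T = np.zeros(n_steps)
    for i in range(1, n_steps):
        T[i] = (T[i-1] - damping * T[i-1] * dt
                + noise * np.sqrt(dt) * rng.standard_normal())
    return T

def fluctuation_size(T, lag):
    """RMS temperature change over a given timescale (lag)."""
    diffs = T[lag:] - T[:-lag]
    return float(np.sqrt(np.mean(diffs ** 2)))

free = simulate(100_000, damping=0.0)     # no feedback: random walk
damped = simulate(100_000, damping=0.01)  # feedback timescale ~100 steps

for lag in (100, 1_000, 10_000):
    print(lag, fluctuation_size(free, lag), fluctuation_size(damped, lag))
```

    The undamped walk's fluctuations grow roughly as the square root of the timescale, while the damped series levels off for lags much longer than about 1/damping steps, which is the kind of saturation the team looked for in the 66-million-year temperature record.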

    Interestingly, Arnscheidt and Rothman found that on longer timescales, the data did not reveal any stabilizing feedbacks. That is, there doesn’t appear to be any recurring pull-back of global temperatures on timescales longer than a million years. Over these longer timescales, then, what has kept global temperatures in check?

    “There’s an idea that chance may have played a major role in determining why, after more than 3 billion years, life still exists,” Rothman offers.

    In other words, as the Earth’s temperature fluctuates over these longer stretches, the fluctuations may simply have happened to stay small enough, in the geologic sense, for a stabilizing feedback such as silicate weathering to periodically pull the climate back in check and, more to the point, keep it within a habitable zone.

    “There are two camps: Some say random chance is a good enough explanation, and others say there must be a stabilizing feedback,” Arnscheidt says. “We’re able to show, directly from data, that the answer is probably somewhere in between. In other words, there was some stabilization, but pure luck likely also played a role in keeping Earth continuously habitable.”

    This research was supported, in part, by a MathWorks fellowship and the National Science Foundation.

  • Keeping indoor humidity levels at a “sweet spot” may reduce spread of Covid-19

    We know proper indoor ventilation is key to reducing the spread of Covid-19. Now, a study by MIT researchers finds that indoor relative humidity may also influence transmission of the virus.

    Relative humidity is the amount of moisture in the air compared to the total moisture the air can hold at a given temperature before saturating and forming condensation.

    In a study appearing today in the Journal of the Royal Society Interface, the MIT team reports that maintaining an indoor relative humidity between 40 and 60 percent is associated with relatively lower rates of Covid-19 infections and deaths, while indoor conditions outside this range are associated with worse Covid-19 outcomes. To put this into perspective, most people are comfortable between 30 and 50 percent relative humidity, and an airplane cabin is at around 20 percent relative humidity.

    The findings are based on the team’s analysis of Covid-19 data combined with meteorological measurements from 121 countries, from January 2020 through August 2020. Their study suggests a strong connection between regional outbreaks and indoor relative humidity.

    In general, the researchers found that whenever a region experienced a rise in Covid-19 cases and deaths prevaccination, the estimated indoor relative humidity in that region, on average, was either lower than 40 percent or higher than 60 percent regardless of season. Nearly all regions in the study experienced fewer Covid-19 cases and deaths during periods when estimated indoor relative humidity was within a “sweet spot” between 40 and 60 percent.

    “There’s potentially a protective effect of this intermediate indoor relative humidity,” suggests lead author Connor Verheyen, a PhD student in medical engineering and medical physics in the Harvard-MIT Program in Health Sciences and Technology.

    “Indoor ventilation is still critical,” says co-author Lydia Bourouiba, director of the MIT Fluid Dynamics of Disease Transmission Laboratory and associate professor in the departments of Civil and Environmental Engineering and Mechanical Engineering, and at the Institute for Medical Engineering and Science at MIT. “However, we find that maintaining an indoor relative humidity in that sweet spot — of 40 to 60 percent — is associated with reduced Covid-19 cases and deaths.”

    Seasonal swing?

    Since the start of the Covid-19 pandemic, scientists have considered the possibility that the virus’ virulence swings with the seasons. Infections and associated deaths appear to rise in winter and ebb in summer. But studies looking to link the virus’ patterns to seasonal outdoor conditions have yielded mixed results.

    Verheyen and Bourouiba examined whether Covid-19 is influenced instead by indoor — rather than outdoor — conditions, and, specifically, relative humidity. After all, they note that most societies spend more than 90 percent of their time indoors, where the majority of viral transmission has been shown to occur. What’s more, indoor conditions can be quite different from outdoor conditions as a result of climate control systems, such as heaters that significantly dry out indoor air.

    Could indoor relative humidity have affected the spread and severity of Covid-19 around the world? And could it help explain the differences in health outcomes from region to region?

    Tracking humidity

    For answers, the team focused on the early period of the pandemic when vaccines were not yet available, reasoning that vaccinated populations would obscure the influence of any other factor such as indoor humidity. They gathered global Covid-19 data, including case counts and reported deaths, from January 2020 to August 2020, and identified countries with at least 50 deaths, indicating at least one outbreak had occurred in those countries.

    In all, they focused on 121 countries where Covid-19 outbreaks occurred. For each country, they also tracked the local Covid-19 related policies, such as isolation, quarantine, and testing measures, and their statistical association with Covid-19 outcomes.

    For each day that Covid-19 data was available, they used meteorological data to calculate a country’s outdoor relative humidity. They then estimated the average indoor relative humidity, based on outdoor relative humidity and guidelines on temperature ranges for human comfort. For instance, guidelines report that humans are comfortable between 66 and 77 degrees Fahrenheit indoors. They also assumed that on average, most populations have the means to heat indoor spaces to comfortable temperatures. Finally, they also collected experimental data, which they used to validate their estimation approach.

    For every instance when outdoor temperatures were below the typical human comfort range, they assumed indoor spaces were heated to reach that comfort range. Based on the added heating, they calculated the associated drop in indoor relative humidity.
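    The drop in indoor relative humidity with heating follows from standard psychrometrics. The sketch below illustrates the general principle using the Magnus approximation; it is not the paper's exact estimation procedure, and the 20 °C indoor temperature is an assumed value inside the comfort range. Heating outdoor air leaves its water-vapor content unchanged while raising how much moisture the air can hold, so relative humidity falls:

```python
import math

def saturation_vapor_pressure(t_celsius):
    """Magnus approximation for saturation vapor pressure in hPa
    (a standard psychrometric formula, not taken from the study)."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

def indoor_rh(outdoor_rh, outdoor_t, indoor_t=20.0):
    """Estimate indoor relative humidity when outdoor air is heated to
    indoor_t with no moisture added or removed: the actual vapor
    pressure is conserved, but the saturation pressure rises with
    temperature, so relative humidity drops."""
    vapor_pressure = outdoor_rh / 100.0 * saturation_vapor_pressure(outdoor_t)
    return 100.0 * vapor_pressure / saturation_vapor_pressure(indoor_t)

# A 0 degree C day at 50% outdoor RH, heated to 20 degrees C indoors:
print(round(indoor_rh(50.0, 0.0), 1))  # roughly 13%, far below the sweet spot
```

    This is why indoor relative humidity in heated buildings can drop well below 40 percent in winter even when outdoor humidity stays near 50 percent.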

    In warmer times, outdoor and indoor relative humidity for each country were about the same, but the two quickly diverged in colder times. While outdoor humidity remained around 50 percent throughout the year, indoor relative humidity for countries in the Northern and Southern Hemispheres dropped below 40 percent in their respective colder periods, when Covid-19 cases and deaths also spiked in these regions.

    For countries in the tropics, relative humidity was about the same indoors and outdoors throughout the year, with a gradual rise indoors during the region’s summer season, when high outdoor humidity likely raised the indoor relative humidity over 60 percent. They found this rise mirrored the gradual increase in Covid-19 deaths in the tropics.

    “We saw more reported Covid-19 deaths on the low and high end of indoor relative humidity, and less in this sweet spot of 40 to 60 percent,” Verheyen says. “This intermediate relative humidity window is associated with a better outcome, meaning fewer deaths and a deceleration of the pandemic.”

    “We were very skeptical initially, especially as the Covid-19 data can be noisy and inconsistent,” Bourouiba says. “We thus were very thorough trying to poke holes in our own analysis, using a range of approaches to test the limits and robustness of the findings, including taking into account factors such as government intervention. Despite all our best efforts, we found that even when considering countries with very strong versus very weak Covid-19 mitigation policies, or wildly different outdoor conditions, indoor — rather than outdoor — relative humidity maintains an underlying strong and robust link with Covid-19 outcomes.”

    It’s still unclear how indoor relative humidity affects Covid-19 outcomes. The team’s follow-up studies suggest that pathogens may survive longer in respiratory droplets in both very dry and very humid conditions.

    “Our ongoing work shows that there are emerging hints of mechanistic links between these factors,” Bourouiba says. “For now however, we can say that indoor relative humidity emerges in a robust manner as another mitigation lever that organizations and individuals can monitor, adjust, and maintain in the optimal 40 to 60 percent range, in addition to proper ventilation.”

    This research was made possible, in part, by an MIT Alumni Class fund, the Richard and Susan Smith Family Foundation, the National Institutes of Health, and the National Science Foundation.

  • Ocean microbes get their diet through a surprising mix of sources, study finds

    One of the smallest and mightiest organisms on the planet is a plant-like bacterium known to marine biologists as Prochlorococcus. The green-tinted microbe measures less than a micron across, and its populations suffuse the upper layers of the ocean, where a single teaspoon of seawater can hold millions of the tiny organisms.

    Prochlorococcus grows through photosynthesis, using sunlight to convert the atmosphere’s carbon dioxide into organic carbon molecules. The microbe is responsible for 5 percent of the world’s photosynthesizing activity, and scientists have assumed that photosynthesis is the microbe’s go-to strategy for acquiring the carbon it needs to grow.

    But a new MIT study in Nature Microbiology today has found that Prochlorococcus relies on another carbon-feeding strategy more than previously thought.

    Organisms that use a mix of strategies to acquire carbon are known as mixotrophs. Most marine plankton are mixotrophs. And while Prochlorococcus is known to occasionally dabble in mixotrophy, scientists have assumed the microbe primarily lives a phototrophic lifestyle.

    The new MIT study shows that in fact, Prochlorococcus may be more of a mixotroph than it lets on. The microbe may get as much as one-third of its carbon through a second strategy: consuming the dissolved remains of other dead microbes.

    The new estimate may have implications for climate models, as the microbe is a significant force in capturing and “fixing” carbon in the Earth’s atmosphere and ocean.

    “If we wish to predict what will happen to carbon fixation in a different climate, or predict where Prochlorococcus will or will not live in the future, we probably won’t get it right if we’re missing a process that accounts for one-third of the population’s carbon supply,” says Mick Follows, a professor in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS), and its Department of Civil and Environmental Engineering.

    The study’s co-authors include first author and MIT postdoc Zhen Wu, along with collaborators from the University of Haifa, the Leibniz-Institute for Baltic Sea Research, the Leibniz-Institute of Freshwater Ecology and Inland Fisheries, and Potsdam University.

    Persistent plankton

    Since Prochlorococcus was first discovered in the Sargasso Sea in 1986, by MIT Institute Professor Sallie “Penny” Chisholm and others, the microbe has been observed throughout the world’s oceans, inhabiting the upper sunlit layers ranging from the surface down to about 160 meters. Within this range, light levels vary, and the microbe has evolved a number of ways to photosynthesize carbon in even low-lit regions.

    The organism has also evolved ways to consume organic compounds including glucose and certain amino acids, which could help the microbe survive for limited periods of time in dark ocean regions. But surviving on organic compounds alone is a bit like only eating junk food, and there is evidence that Prochlorococcus will die after a week in regions where photosynthesis is not an option.

    And yet, researchers including Daniel Sher of the University of Haifa, who is a co-author of the new study, have observed healthy populations of Prochlorococcus that persist deep in the sunlit zone, where the light intensity should be too low to maintain a population. This suggests that the microbes must be switching to a non-photosynthesizing, mixotrophic lifestyle in order to consume other organic sources of carbon.

    “It seems that at least some Prochlorococcus are using existing organic carbon in a mixotrophic way,” Follows says. “That stimulated the question: How much?”

    What light cannot explain

    In their new paper, Follows, Wu, Sher, and their colleagues looked to quantify the amount of carbon that Prochlorococcus is consuming through processes other than photosynthesis.

    The team looked first to measurements taken by Sher’s team, which previously took ocean samples at various depths in the Mediterranean Sea and measured the concentration of phytoplankton, including Prochlorococcus, along with the associated intensity of light and the concentration of nitrogen — an essential nutrient that is richly available in deeper layers of the ocean and that plankton can assimilate to make proteins.

    Wu and Follows used this data, and similar information from the Pacific Ocean, along with previous work from Chisholm’s lab, which established the rate of photosynthesis that Prochlorococcus could carry out in a given intensity of light.

    “We converted that light intensity profile into a potential growth rate — how fast the population of Prochlorococcus could grow if it was acquiring all its carbon by photosynthesis, and light is the limiting factor,” Follows explains.

    The team then compared this calculated rate to growth rates that were previously observed in the Pacific Ocean by several other research teams.

    “This data showed that, below a certain depth, there’s a lot of growth happening that photosynthesis simply cannot explain,” Follows says. “Some other process must be at work to make up the difference in carbon supply.”

    The researchers inferred that, in deeper, darker regions of the ocean, Prochlorococcus populations are able to survive and thrive by resorting to mixotrophy, including consuming organic carbon from detritus. Specifically, the microbe may be carrying out osmotrophy — a process by which an organism passively absorbs organic carbon molecules via osmosis.

    Judging by how fast the microbe is estimated to be growing below the sunlit zone, the team calculates that Prochlorococcus obtains up to one-third of its carbon diet through mixotrophic strategies.
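    The arithmetic behind that inference can be sketched as follows, using hypothetical growth rates chosen purely for illustration (not values from the study): if growth scales with total carbon supply, the fraction photosynthesis cannot explain is one minus the ratio of the light-limited rate to the observed rate.

```python
def mixotrophic_fraction(observed_growth, photo_growth):
    """Fraction of the carbon supply that photosynthesis cannot explain,
    assuming growth rate scales with total carbon acquisition."""
    if observed_growth <= photo_growth:
        return 0.0  # light alone accounts for the observed growth
    return 1.0 - photo_growth / observed_growth

# Hypothetical deep-water values (per day), for illustration only:
# light-limited photosynthesis supports 0.2/day, but 0.3/day is observed.
print(mixotrophic_fraction(0.3, 0.2))  # -> about one-third from other sources
```

    In this toy example, a third of the carbon budget must come from somewhere other than photosynthesis, mirroring the up-to-one-third estimate the team reports for deep-dwelling populations.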

    “It’s kind of like going from a specialist to a generalist lifestyle,” Follows says. “If I only eat pizza, then if I’m 20 miles from a pizza place, I’m in trouble, whereas if I eat burgers as well, I could go to the nearby McDonald’s. People had thought of Prochlorococcus as a specialist, where they do this one thing (photosynthesis) really well. But it turns out they may have more of a generalist lifestyle than we previously thought.”

    Chisholm, who has both literally and figuratively written the book on Prochlorococcus, says the group’s findings “expand the range of conditions under which their populations can not only survive, but also thrive. This study changes the way we think about the role of Prochlorococcus in the microbial food web.”

    This research was supported, in part, by the Israel Science Foundation, the U.S. National Science Foundation, and the Simons Foundation.

  • Ocean scientists measure sediment plume stirred up by deep-sea-mining vehicle

    What will be the impact to the ocean if humans are to mine the deep sea? It’s a question that’s gaining urgency as interest in marine minerals has grown.

    The ocean’s deep-sea bed is scattered with ancient, potato-sized rocks called “polymetallic nodules” that contain nickel and cobalt — minerals in high demand for manufacturing batteries, such as those used to power electric vehicles and store renewable energy, demand that is growing in response to factors such as increasing urbanization. The deep ocean contains vast quantities of mineral-laden nodules, but the impact of mining the ocean floor is both unknown and highly contested.

    Now MIT ocean scientists have shed some light on the topic, with a new study on the cloud of sediment that a collector vehicle would stir up as it picks up nodules from the seafloor.

    The study, appearing today in Science Advances, reports the results of a 2021 research cruise to a region of the Pacific Ocean known as the Clarion Clipperton Zone (CCZ), where polymetallic nodules abound. There, researchers equipped a pre-prototype collector vehicle with instruments to monitor sediment plume disturbances as the vehicle maneuvered across the seafloor, 4,500 meters below the ocean’s surface. Through a sequence of carefully conceived maneuvers, the MIT scientists used the vehicle to monitor its own sediment cloud and measure its properties.

    Their measurements showed that the vehicle created a dense plume of sediment in its wake, which spread under its own weight, in a phenomenon known in fluid dynamics as a “turbidity current.” As it gradually dispersed, the plume remained relatively low, staying within 2 meters of the seafloor, as opposed to immediately lofting higher into the water column as had been postulated.

    “It’s quite a different picture of what these plumes look like, compared to some of the conjecture,” says study co-author Thomas Peacock, professor of mechanical engineering at MIT. “Modeling efforts of deep-sea mining plumes will have to account for these processes that we identified, in order to assess their extent.”

    The study’s co-authors include lead author Carlos Muñoz-Royo, Raphael Ouillon, and Souha El Mousadik of MIT; and Matthew Alford of the Scripps Institution of Oceanography.

    Deep-sea maneuvers

    To collect polymetallic nodules, some mining companies are proposing to deploy tractor-sized vehicles to the bottom of the ocean. The vehicles would vacuum up the nodules along with some sediment along their path. The nodules and sediment would then be separated inside of the vehicle, with the nodules sent up through a riser pipe to a surface vessel, while most of the sediment would be discharged immediately behind the vehicle.

    Peacock and his group have previously studied the dynamics of the sediment plume that associated surface operation vessels may pump back into the ocean. In their current study, they focused on the opposite end of the operation, to measure the sediment cloud created by the collectors themselves.

    In April 2021, the team joined an expedition led by Global Sea Mineral Resources NV (GSR), a Belgian marine engineering contractor that is exploring the CCZ for ways to extract metal-rich nodules. A European-based science team, Mining Impacts 2, also conducted separate studies in parallel. The cruise was the first in over 40 years to test a “pre-prototype” collector vehicle in the CCZ. The machine, called Patania II, stands about 3 meters high, spans 4 meters wide, and is about one-third the size of what a commercial-scale vehicle is expected to be.

    While the contractor tested the vehicle’s nodule-collecting performance, the MIT scientists monitored the sediment cloud created in the vehicle’s wake. They did so using two maneuvers that the vehicle was programmed to take: a “selfie,” and a “drive-by.”

    Both maneuvers began in the same way, with the vehicle setting out in a straight line, all its suction systems turned on. The researchers let the vehicle drive along for 100 meters, collecting any nodules in its path. Then, in the “selfie” maneuver, they directed the vehicle to turn off its suction systems and double back around to drive through the cloud of sediment it had just created. The vehicle’s installed sensors measured the concentration of sediment during this “selfie” maneuver, allowing the scientists to monitor the cloud within minutes of the vehicle stirring it up.


    A movie of the Patania II pre-prototype collector vehicle entering, driving through, and leaving the low-lying turbidity current plume as part of a selfie operation. For scale, the instrumentation post attached to the front of the vehicle reaches about 3m above the seabed. The movie is sped up by a factor of 20. Credit: Global Sea Mineral Resources

    For the “drive-by” maneuver, the researchers placed a sensor-laden mooring 50 to 100 meters from the vehicle’s planned tracks. As the vehicle drove along collecting nodules, it created a plume that eventually spread past the mooring after an hour or two. This “drive-by” maneuver enabled the team to monitor the sediment cloud over a longer timescale of several hours, capturing the plume evolution.
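    As a rough consistency check (not part of the study's analysis), the mooring distances and arrival times quoted above imply ambient current speeds on the order of centimeters per second, typical of weak abyssal flows. The ranges below are those quoted in the text; the check itself is purely illustrative.

    ```python
    # Rough consistency check, illustrative only: if the plume reached a
    # mooring 50-100 m away after one to two hours, the implied transport
    # speed is on the order of centimeters per second, typical of weak
    # abyssal currents. Distances and times are the ranges quoted above.
    speeds = []
    for distance_m, hours in [(50, 2.0), (100, 1.0)]:
        speed_cm_s = distance_m / (hours * 3600) * 100  # m/s -> cm/s
        speeds.append(speed_cm_s)
        print(f"{distance_m} m in {hours} h -> {speed_cm_s:.1f} cm/s")
    ```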

    Out of steam

    Over multiple vehicle runs, Peacock and his team were able to measure and track the evolution of the sediment plume created by the deep-sea-mining vehicle.

    “We saw that the vehicle would be driving in clear water, seeing the nodules on the seabed,” Peacock says. “And then suddenly there’s this very sharp sediment cloud coming through when the vehicle enters the plume.”

    From the selfie views, the team observed a behavior that was predicted by some of their previous modeling studies: The vehicle stirred up a heavy amount of sediment that was dense enough that, even after some mixing with the surrounding water, it generated a plume that behaved almost as a separate fluid, spreading under its own weight in what’s known as a turbidity current.

    “The turbidity current spreads under its own weight for some time, tens of minutes, but as it does so, it’s depositing sediment on the seabed and eventually running out of steam,” Peacock says. “After that, the ocean currents get stronger than the natural spreading, and the sediment transitions to being carried by the ocean currents.”
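    The spreading stage Peacock describes can be illustrated with a textbook gravity-current estimate. This is a hedged sketch, not the study's model: the front of a dense current advances at roughly a Froude number times the square root of the reduced gravity times the layer thickness, and every numerical value below is an assumption chosen only for illustration.

    ```python
    import math

    # Hedged illustration (not the study's model): a dense sediment plume
    # spreading as a gravity/turbidity current. All values are assumed.
    g = 9.81            # m/s^2
    rho_water = 1025.0  # kg/m^3, ambient seawater (assumed)
    rho_plume = 1035.0  # kg/m^3, sediment-laden plume (assumed)
    h = 2.0             # m, plume thickness (the observed low-lying layer)
    Fr = 0.5            # Froude number typical of gravity-current fronts (assumed)

    g_prime = g * (rho_plume - rho_water) / rho_water  # reduced gravity, m/s^2
    u_front = Fr * math.sqrt(g_prime * h)              # front speed, m/s

    # Distance the front would spread in ~30 minutes before, as described
    # above, ambient currents take over and the current "runs out of steam".
    distance = u_front * 30 * 60
    print(f"front speed ~ {u_front:.2f} m/s, spread ~ {distance:.0f} m in 30 min")
    ```

    With these assumed densities the front moves at a couple of decimeters per second and spreads a few hundred meters before ambient currents dominate, consistent in spirit with the tens-of-minutes timescale described above.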

    By the time the sediment drifted past the mooring, the researchers estimate that 92 to 98 percent of it had either settled back to the seabed or remained within 2 meters of the seafloor as a low-lying cloud. There is, however, no guarantee that the sediment always stays there rather than drifting higher into the water column. Recent and future studies by the research team are looking into this question, with the goal of consolidating understanding of deep-sea mining sediment plumes.

    “Our study clarifies the reality of what the initial sediment disturbance looks like when you have a certain type of nodule mining operation,” Peacock says. “The big takeaway is that there are complex processes like turbidity currents that take place when you do this kind of collection. So, any effort to model a deep-sea-mining operation’s impact will have to capture these processes.”

    “Sediment plumes produced by deep-seabed mining are a major concern with regards to environmental impact, as they will spread over potentially large areas beyond the actual site of mining and affect deep-sea life,” says Henko de Stigter, a marine geologist at the Royal Netherlands Institute for Sea Research, who was not involved in the research. “The current paper provides essential insight in the initial development of these plumes.”

    This research was supported, in part, by the National Science Foundation, ARPA-E, the 11th Hour Project, the Benioff Ocean Initiative, and Global Sea Mineral Resources. The funders had no role in any aspects of the research analysis, the research team states. More

  • in

    These neurons have food on the brain

    A gooey slice of pizza. A pile of crispy French fries. Ice cream dripping down a cone on a hot summer day. When you look at any of these foods, a specialized part of your visual cortex lights up, according to a new study from MIT neuroscientists.

    This newly discovered population of food-responsive neurons is located in the ventral visual stream, alongside populations that respond specifically to faces, bodies, places, and words. The unexpected finding may reflect the special significance of food in human culture, the researchers say. 

    “Food is central to human social interactions and cultural practices. It’s not just sustenance,” says Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience and a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds, and Machines. “Food is core to so many elements of our cultural identity, religious practice, and social interactions, and many other things that humans do.”

    The findings, based on an analysis of a large public database of human brain responses to a set of 10,000 images, raise many additional questions about how and why this neural population develops. In future studies, the researchers hope to explore how people’s responses to certain foods might differ depending on their likes and dislikes, or their familiarity with certain types of food.

    MIT postdoc Meenakshi Khosla is the lead author of the paper, along with MIT research scientist N. Apurva Ratan Murty. The study appears today in the journal Current Biology.

    Visual categories

    More than 20 years ago, while studying the ventral visual stream, the part of the brain that recognizes objects, Kanwisher discovered cortical regions that respond selectively to faces. Later, she and other scientists discovered other regions that respond selectively to places, bodies, or words. Most of those areas were discovered when researchers specifically set out to look for them. However, that hypothesis-driven approach can limit what you end up finding, Kanwisher says.

    “There could be other things that we might not think to look for,” she says. “And even when we find something, how do we know that that’s actually part of the basic dominant structure of that pathway, and not something we found just because we were looking for it?”

    To try to uncover the fundamental structure of the ventral visual stream, Kanwisher and Khosla decided to analyze a large, publicly available dataset of full-brain functional magnetic resonance imaging (fMRI) responses from eight human subjects as they viewed thousands of images.

    “We wanted to see when we apply a data-driven, hypothesis-free strategy, what kinds of selectivities pop up, and whether those are consistent with what had been discovered before. A second goal was to see if we could discover novel selectivities that either haven’t been hypothesized before, or that have remained hidden due to the lower spatial resolution of fMRI data,” Khosla says.

    To do that, the researchers applied a mathematical method that allows them to discover neural populations that can’t be identified from traditional fMRI data. An fMRI image is made up of many voxels — three-dimensional units that represent a cube of brain tissue. Each voxel contains hundreds of thousands of neurons, and if some of those neurons belong to smaller populations that respond to one type of visual input, their responses may be drowned out by other populations within the same voxel.

    The new analytical method, which Kanwisher’s lab has previously used on fMRI data from the auditory cortex, can tease out responses of neural populations within each voxel of fMRI data.
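    The paper's decomposition method is its own; as a hedged illustration of the general idea only, one can factor a voxels-by-images response matrix into a small number of shared population response profiles plus per-voxel mixing weights. The sketch below uses a simple non-negative matrix factorization on synthetic data (all numbers are made up for the demonstration).

    ```python
    import numpy as np

    # Illustrative sketch only: the study used its own voxel-decomposition
    # method. Here we show the general idea with a basic non-negative
    # matrix factorization (NMF). Each voxel's response to each image is
    # modeled as a mixture of a few underlying neural populations:
    #   responses (voxels x images) ~ weights (voxels x k) @ profiles (k x images)
    rng = np.random.default_rng(0)

    n_voxels, n_images, k = 200, 300, 5
    true_profiles = rng.random((k, n_images))   # population response profiles
    true_weights = rng.random((n_voxels, k))    # each voxel's mix of populations
    responses = true_weights @ true_profiles + 0.01 * rng.random((n_voxels, n_images))

    # Multiplicative-update NMF (Lee & Seung) to recover k components.
    W = rng.random((n_voxels, k)) + 0.1
    H = rng.random((k, n_images)) + 0.1
    for _ in range(300):
        H *= (W.T @ responses) / (W.T @ W @ H + 1e-9)
        W *= (responses @ H.T) / (W @ H @ H.T + 1e-9)

    error = np.linalg.norm(responses - W @ H) / np.linalg.norm(responses)
    print(f"relative reconstruction error: {error:.3f}")
    ```

    The recovered rows of `H` play the role of candidate population response profiles; in the real analysis, a profile that responds strongly to one image category across voxels is the kind of signal that would be drowned out in raw voxel averages.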

    Using this approach, the researchers found four populations that corresponded to previously identified clusters that respond to faces, places, bodies, and words. “That tells us that this method works, and it tells us that the things that we found before are not just obscure properties of that pathway, but major, dominant properties,” Kanwisher says.

    Intriguingly, a fifth population also emerged, and this one appeared to be selective for images of food.

    “We were first quite puzzled by this because food is not a visually homogenous category,” Khosla says. “Things like apples and corn and pasta all look so unlike each other, yet we found a single population that responds similarly to all these diverse food items.”

    The food-specific population, which the researchers call the ventral food component (VFC), appears to be spread across two clusters of neurons, located on either side of the fusiform face area (FFA). The fact that the food-specific populations are spread out between other category-specific populations may help explain why they have not been seen before, the researchers say.

    “We think that food selectivity had been harder to characterize before because the populations that are selective for food are intermingled with other nearby populations that have distinct responses to other stimulus attributes. The low spatial resolution of fMRI prevents us from seeing this selectivity because the responses of different neural populations get mixed in a voxel,” Khosla says.

    “The technique which the researchers used to identify category-sensitive cells or areas is impressive, and it recovered known category-sensitive systems, making the food category findings most impressive,” says Paul Rozin, a professor of psychology at the University of Pennsylvania, who was not involved in the study. “I can’t imagine a way for the brain to reliably identify the diversity of foods based on sensory features. That makes this all the more fascinating, and likely to clue us in about something really new.”

    Food vs non-food

    The researchers also used the data to train a computational model of the VFC, based on previous models Murty had developed for the brain’s face and place recognition areas. This allowed the researchers to run additional experiments and predict the responses of the VFC. In one experiment, they fed the model matched images of food and non-food items that looked very similar — for example, a banana and a yellow crescent moon.

    “Those matched stimuli have very similar visual properties, but the main attribute in which they differ is edible versus inedible,” Khosla says. “We could feed those arbitrary stimuli through the predictive model and see whether it would still respond more to food than non-food, without having to collect the fMRI data.”

    They could also use the computational model to analyze much larger datasets, consisting of millions of images. Those simulations helped to confirm that the VFC is highly selective for images of food.

    From their analysis of the human fMRI data, the researchers found that in some subjects, the VFC responded slightly more to processed foods such as pizza than to unprocessed foods such as apples. In the future, they hope to explore how factors such as familiarity and like or dislike of a particular food might affect individuals’ responses to that food.

    They also hope to study when and how this region becomes specialized during early childhood, and what other parts of the brain it communicates with. Another question is whether this food-selective population will be seen in other animals, such as monkeys, which do not attach the cultural significance to food that humans do.

    The research was funded by the National Institutes of Health, the National Eye Institute, and the National Science Foundation through the MIT Center for Brains, Minds, and Machines. More

  • in

    A better way to quantify radiation damage in materials

    It was just a piece of junk sitting in the back of a lab at the MIT Nuclear Reactor facility, ready to be disposed of. But it became the key to demonstrating a more comprehensive way of detecting atomic-level structural damage in materials — an approach that will aid the development of new materials, and could potentially support the ongoing operation of carbon-emission-free nuclear power plants, which would help alleviate global climate change.

    A tiny titanium nut that had been removed from inside the reactor was just the kind of material needed to prove that this new technique, developed at MIT and at other institutions, provides a way to probe defects created inside materials, including those that have been exposed to radiation, with five times greater sensitivity than existing methods.

    The new approach revealed that much of the damage that takes place inside reactors is at the atomic scale, and as a result is difficult to detect using existing methods. The technique provides a way to directly measure this damage through the way it changes with temperature. And it could be used to measure samples from the currently operating fleet of nuclear reactors, potentially enabling the continued safe operation of plants far beyond their presently licensed lifetimes.

    The findings are reported today in the journal Science Advances in a paper by MIT research specialist and recent graduate Charles Hirst PhD ’22; MIT professors Michael Short, Scott Kemp, and Ju Li; and five others at the University of Helsinki, the Idaho National Laboratory, and the University of California at Irvine.

    Rather than directly observing the physical structure of a material in question, the new approach looks at the amount of energy stored within that structure. Any disruption to the orderly structure of atoms within the material, such as that caused by radiation exposure or by mechanical stresses, actually imparts excess energy to the material. By observing and quantifying that energy difference, it’s possible to calculate the total amount of damage within the material — even if that damage is in the form of atomic-scale defects that are too small to be imaged with microscopes or other detection methods.

    The principle behind this method had been worked out in detail through calculations and simulations. But it was the actual tests on that one titanium nut from the MIT nuclear reactor that provided the proof — and thus opened the door to a new way of measuring damage in materials.

    The method they used is called differential scanning calorimetry. As Hirst explains, this is similar in principle to the calorimetry experiments many students carry out in high school chemistry classes, where they measure how much energy it takes to raise the temperature of a gram of water by one degree. The system the researchers used was “fundamentally the exact same thing, measuring energetic changes. … I like to call it just a fancy furnace with a thermocouple inside.”

    The scanning part has to do with gradually raising the temperature a bit at a time and seeing how the sample responds, and the differential part refers to the fact that two identical chambers are measured at once, one empty, and one containing the sample being studied. The difference between the two reveals details of the energy of the sample, Hirst explains.

    “We raise the temperature from room temperature up to 600 degrees Celsius, at a constant rate of 50 degrees per minute,” he says. Compared to the empty vessel, “your material will naturally lag behind because you need energy to heat your material. But if there are changes in the energy inside the material, that will change the temperature. In our case, there was an energy release when the defects recombine, and then it will get a little bit of a head start on the furnace … and that’s how we are measuring the energy in our sample.”
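    The measurement Hirst describes can be sketched numerically: the stored energy is the integral over time of the excess (sample-minus-reference) heat flow during the temperature ramp. The trace below is an illustrative toy, not the study's data; the two peak positions and magnitudes are assumptions, chosen only to echo the two-peak structure the study reports.

    ```python
    import numpy as np

    # Illustrative toy trace, not the study's data: a differential scanning
    # calorimetry ramp from 25 to 600 degrees C at 50 degrees C per minute.
    # Defect recombination releases stored energy, which appears as
    # exothermic peaks in the sample-minus-reference heat flow.
    rate = 50.0 / 60.0                   # heating rate, K/s
    T = np.linspace(25.0, 600.0, 5000)   # temperature, deg C
    t = (T - T[0]) / rate                # elapsed time, s

    def peak(T, center, width, height):
        """Gaussian exothermic peak in excess heat flow (W/g)."""
        return height * np.exp(-((T - center) / width) ** 2)

    # Two peaks, echoing the two defect-relaxation mechanisms the study
    # resolved (positions and heights are assumptions for illustration).
    heat_flow = peak(T, 250.0, 30.0, 2e-3) + peak(T, 450.0, 40.0, 1e-3)

    # Stored energy = integral of excess heat flow over time (J/g),
    # computed with the trapezoidal rule.
    stored_energy = float(np.sum(0.5 * (heat_flow[1:] + heat_flow[:-1]) * np.diff(t)))
    print(f"stored energy ~ {stored_energy:.2f} J/g")
    ```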

    Hirst, who carried out the work over a five-year span as his doctoral thesis project, found that contrary to what had been believed, the irradiated material showed that there were two different mechanisms involved in the relaxation of defects in titanium at the studied temperatures, revealed by two separate peaks in calorimetry. “Instead of one process occurring, we clearly saw two, and each of them corresponds to a different reaction that’s happening in the material,” he says.

    They also found that textbook explanations of how radiation damage behaves with temperature weren’t accurate, because previous tests had mostly been carried out at extremely low temperatures and then extrapolated to the higher temperatures of real-life reactor operations. “People weren’t necessarily aware that they were extrapolating, even though they were, completely,” Hirst says.

    “The fact is that our common-knowledge basis for how radiation damage evolves is based on extremely low-temperature electron radiation,” adds Short. “It just became the accepted model, and that’s what’s taught in all the books. It took us a while to realize that our general understanding was based on a very specific condition, designed to elucidate science, but generally not applicable to conditions in which we actually want to use these materials.”

    Now, the new method can be applied “to materials plucked from existing reactors, to learn more about how they are degrading with operation,” Hirst says.

    “The single biggest thing the world can do in order to get cheap, carbon-free power is to keep current reactors on the grid. They’re already paid for, they’re working,” Short adds.  But to make that possible, “the only way we can keep them on the grid is to have more certainty that they will continue to work well.” And that’s where this new way of assessing damage comes into play.

    While most nuclear power plants have been licensed for 40 to 60 years of operation, “we’re now talking about running those same assets out to 100 years, and that depends almost fully on the materials being able to withstand the most severe accidents,” Short says. Using this new method, “we can inspect them and take them out before something unexpected happens.”

    In practice, plant operators could remove a tiny sample of material from critical areas of the reactor, and analyze it to get a more complete picture of the condition of the overall reactor. Keeping existing reactors running is “the single biggest thing we can do to keep the share of carbon-free power high,” Short stresses. “This is one way we think we can do that.”

    Sergei Dudarev, a fellow at the United Kingdom Atomic Energy Authority who was not associated with this work, says this “is likely going to be impactful, as it confirms, in a nice systematic manner, supported both by experiment and simulations, the unexpectedly significant part played by the small invisible defects in microstructural evolution of materials exposed to irradiation.”

    The process is not just limited to the study of metals, nor is it limited to damage caused by radiation, the researchers say. In principle, the method could be used to measure other kinds of defects in materials, such as those caused by stresses or shockwaves, and it could be applied to materials such as ceramics or semiconductors as well.

    In fact, Short says, metals are the most difficult materials to measure with this method, and early on other researchers kept asking why this team was focused on damage to metals. That was partly because reactor components tend to be made of metal, and also because “it’s the hardest, so, if we crack this problem, we have a tool to crack them all!”

    Measuring defects in other kinds of materials can be up to 10,000 times easier than in metals, he says. “If we can do this with metals, we can make this extremely, ubiquitously applicable.” And all of it enabled by a small piece of junk that was sitting at the back of a lab.

    The research team included Fredric Granberg and Kai Nordlund at the University of Helsinki in Finland; Boopathy Kombaiah and Scott Middlemas at Idaho National Laboratory; and Penghui Cao at the University of California at Irvine. The work was supported by the U.S. National Science Foundation, an Idaho National Laboratory research grant, and a Euratom Research and Training program grant. More

  • in

    Structures considered key to gene expression are surprisingly fleeting

    In human chromosomes, DNA is coated by proteins to form an exceedingly long beaded string. This “string” is folded into numerous loops, which are believed to help cells control gene expression and facilitate DNA repair, among other functions. A new study from MIT suggests that these loops are very dynamic and shorter-lived than previously thought.

    In the new study, the researchers were able to monitor the movement of one stretch of the genome in a living cell for about two hours. They saw that this stretch was fully looped for only 3 to 6 percent of the time, with the loop lasting for only about 10 to 30 minutes. The findings suggest that scientists’ current understanding of how loops influence gene expression may need to be revised, the researchers say.

    “Many models in the field have been these pictures of static loops regulating these processes. What our new paper shows is that this picture is not really correct,” says Anders Sejr Hansen, the Underwood-Prescott Career Development Assistant Professor of Biological Engineering at MIT. “We suggest that the functional state of these domains is much more dynamic.”

    Hansen is one of the senior authors of the new study, along with Leonid Mirny, a professor in MIT’s Institute for Medical Engineering and Science and the Department of Physics, and Christoph Zechner, a group leader at the Max Planck Institute of Molecular Cell Biology and Genetics in Dresden, Germany, and the Center for Systems Biology Dresden. MIT postdoc Michele Gabriele, recent Harvard University PhD recipient Hugo Brandão, and MIT graduate student Simon Grosse-Holz are the lead authors of the paper, which appears today in Science.

    Out of the loop

    Using computer simulations and experimental data, scientists including Mirny’s group at MIT have shown that loops in the genome are formed by a process called extrusion, in which a molecular motor promotes the growth of progressively larger loops. The motor stops each time it encounters a “stop sign” on DNA. The motor that extrudes such loops is a protein complex called cohesin, while the DNA-bound protein CTCF serves as the stop sign. These cohesin-mediated loops between CTCF sites were seen in previous experiments.
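    The extrusion picture described above can be sketched as a toy simulation: a cohesin motor loaded at one point widens a loop one step per side until each side hits a CTCF stop site. This is a deliberately minimal caricature with assumed positions, not the study's polymer model.

    ```python
    # Deliberately minimal caricature of loop extrusion (assumed positions,
    # not the study's polymer model): cohesin loads at one site and widens
    # the loop one step per side until each side is blocked by a CTCF
    # "stop sign" or the end of the region.
    def extrude(load_pos, ctcf_sites, region_len):
        left = right = load_pos
        stops = set(ctcf_sites)
        while True:
            moved = False
            if left > 0 and left not in stops:
                left -= 1           # extrude leftward
                moved = True
            if right < region_len - 1 and right not in stops:
                right += 1          # extrude rightward
                moved = True
            if not moved:           # both sides blocked: loop fully formed
                return left, right

    # Cohesin loaded at position 50 between CTCF sites at 20 and 80.
    loop = extrude(load_pos=50, ctcf_sites=[20, 80], region_len=100)
    print(loop)  # -> (20, 80): the loop anchors settle at the CTCF sites
    ```

    In this caricature the fully looped state is the endpoint; the study's central point is that, in living cells, the system spends most of its time in the partially extruded states this loop passes through on the way.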

    However, those experiments only offered a snapshot of a moment in time, with no information on how the loops change over time. In their new study, the researchers developed techniques that allowed them to fluorescently label CTCF DNA sites so they could image the DNA loops over several hours. They also created a new computational method that can infer the looping events from the imaging data.

    “This method was crucial for us to distinguish signal from noise in our experimental data and quantify looping,” Zechner says. “We believe that such approaches will become increasingly important for biology as we continue to push the limits of detection with experiments.”

    The researchers used their method to image a stretch of the genome in mouse embryonic stem cells. “If we put our data in the context of one cell division cycle, which lasts about 12 hours, the fully formed loop only actually exists for about 20 to 45 minutes, or about 3 to 6 percent of the time,” Grosse-Holz says.
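    The arithmetic behind that estimate is direct: 3 to 6 percent of a 12-hour (720-minute) cycle is roughly 22 to 43 minutes.

    ```python
    # The arithmetic behind the quoted estimate: 3-6 percent of a
    # ~12-hour cell division cycle.
    cell_cycle_min = 12 * 60   # 720 minutes
    minutes = [frac * cell_cycle_min for frac in (0.03, 0.06)]
    for frac, m in zip((0.03, 0.06), minutes):
        print(f"{frac:.0%} of the cycle -> {m:.0f} minutes")
    ```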

    “If the loop is only present for such a tiny period of the cell cycle and very short-lived, we shouldn’t think of this fully looped state as being the primary regulator of gene expression,” Hansen says. “We think we need new models for how the 3D structure of the genome regulates gene expression, DNA repair, and other functional downstream processes.”

    While fully formed loops were rare, the researchers found that partially extruded loops were present about 92 percent of the time. These smaller loops have been difficult to observe with the previous methods of detecting loops in the genome.

    “In this study, by integrating our experimental data with polymer simulations, we have now been able to quantify the relative extents of the unlooped, partially extruded, and fully looped states,” Brandão says.

    “Since these interactions are very short, but very frequent, the previous methodologies were not able to fully capture their dynamics,” Gabriele adds. “With our new technique, we can start to resolve transitions between fully looped and unlooped states.”


    The researchers hypothesize that these partial loops may play more important roles in gene regulation than fully formed loops. Strands of DNA run along each other as loops begin to form and then fall apart, and these interactions may help regulatory elements such as enhancers and gene promoters find each other.

    “More than 90 percent of the time, there are some transient loops, and presumably what’s important is having those loops that are being perpetually extruded,” Mirny says. “The process of extrusion itself may be more important than the fully looped state that only occurs for a short period of time.”

    More loops to study

    Since most of the other loops in the genome are weaker than the one the researchers studied in this paper, they suspect that many other loops will also prove to be highly transient. They now plan to use their new technique to study some of those other loops, in a variety of cell types.

    “There are about 10,000 of these loops, and we’ve looked at one,” Hansen says. “We have a lot of indirect evidence to suggest that the results would be generalizable, but we haven’t demonstrated that. Using the technology platform we’ve set up, which combines new experimental and computational methods, we can begin to approach other loops in the genome.”

    The researchers also plan to investigate the role of specific loops in disease. Many diseases, including a neurodevelopmental disorder called FOXG1 syndrome, could be linked to faulty loop dynamics. The researchers are now studying how both the normal and mutated form of the FOXG1 gene, as well as the cancer-causing gene MYC, are affected by genome loop formation.

    The research was funded by the National Institutes of Health, the National Science Foundation, the Mathers Foundation, a Pew-Stewart Cancer Research Scholar grant, the Chaires d’excellence Internationale Blaise Pascal, an American-Italian Cancer Foundation research scholarship, and the Max Planck Institute for Molecular Cell Biology and Genetics. More