More stories

  • Keeping indoor humidity levels at a “sweet spot” may reduce spread of Covid-19

    We know proper indoor ventilation is key to reducing the spread of Covid-19. Now, a study by MIT researchers finds that indoor relative humidity may also influence transmission of the virus.

    Relative humidity is the amount of moisture in the air compared to the total moisture the air can hold at a given temperature before saturating and forming condensation.
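
    Relative humidity can be computed directly from the air temperature and the actual water-vapor pressure. The short sketch below only illustrates that definition, using the standard Magnus approximation for saturation vapor pressure; the numbers are hypothetical and are not taken from the study.

    ```python
    import math

    def saturation_vapor_pressure(temp_c: float) -> float:
        """Approximate saturation vapor pressure (hPa) via the Magnus formula."""
        return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

    def relative_humidity(vapor_pressure_hpa: float, temp_c: float) -> float:
        """Relative humidity (%) = actual vapor pressure / saturation vapor pressure."""
        return 100.0 * vapor_pressure_hpa / saturation_vapor_pressure(temp_c)

    # Hypothetical example: air at 22 degrees C holding 12 hPa of water vapor
    print(f"{relative_humidity(12.0, 22.0):.0f}% RH")  # roughly 45 percent
    ```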

    In a study appearing today in the Journal of the Royal Society Interface, the MIT team reports that maintaining an indoor relative humidity between 40 and 60 percent is associated with relatively lower rates of Covid-19 infections and deaths, while indoor conditions outside this range are associated with worse Covid-19 outcomes. To put this into perspective, most people are comfortable between 30 and 50 percent relative humidity, and an airplane cabin is at around 20 percent relative humidity.

    The findings are based on the team’s analysis of Covid-19 data combined with meteorological measurements from 121 countries, from January 2020 through August 2020. Their study suggests a strong connection between regional outbreaks and indoor relative humidity.

    In general, the researchers found that whenever a region experienced a rise in Covid-19 cases and deaths prevaccination, the estimated indoor relative humidity in that region, on average, was either lower than 40 percent or higher than 60 percent regardless of season. Nearly all regions in the study experienced fewer Covid-19 cases and deaths during periods when estimated indoor relative humidity was within a “sweet spot” between 40 and 60 percent.

    “There’s potentially a protective effect of this intermediate indoor relative humidity,” suggests lead author Connor Verheyen, a PhD student in medical engineering and medical physics in the Harvard-MIT Program in Health Sciences and Technology.

    “Indoor ventilation is still critical,” says co-author Lydia Bourouiba, director of the MIT Fluid Dynamics of Disease Transmission Laboratory and associate professor in the departments of Civil and Environmental Engineering and Mechanical Engineering, and at the Institute for Medical Engineering and Science at MIT. “However, we find that maintaining an indoor relative humidity in that sweet spot — of 40 to 60 percent — is associated with reduced Covid-19 cases and deaths.”

    Seasonal swing?

    Since the start of the Covid-19 pandemic, scientists have considered the possibility that the virus’ virulence swings with the seasons. Infections and associated deaths appear to rise in winter and ebb in summer. But studies looking to link the virus’ patterns to seasonal outdoor conditions have yielded mixed results.

    Verheyen and Bourouiba examined whether Covid-19 is influenced instead by indoor — rather than outdoor — conditions, and, specifically, relative humidity. After all, they note, people in most societies spend more than 90 percent of their time indoors, where the majority of viral transmission has been shown to occur. What’s more, indoor conditions can be quite different from outdoor conditions as a result of climate control systems, such as heaters that significantly dry out indoor air.

    Could indoor relative humidity have affected the spread and severity of Covid-19 around the world? And could it help explain the differences in health outcomes from region to region?

    Tracking humidity

    For answers, the team focused on the early period of the pandemic, when vaccines were not yet available, reasoning that vaccinated populations would obscure the influence of other factors such as indoor humidity. They gathered global Covid-19 data, including case counts and reported deaths, from January 2020 to August 2020, and identified countries with at least 50 deaths, indicating that at least one outbreak had occurred in those countries.

    In all, they focused on 121 countries where Covid-19 outbreaks occurred. For each country, they also tracked the local Covid-19 related policies, such as isolation, quarantine, and testing measures, and their statistical association with Covid-19 outcomes.

    For each day that Covid-19 data was available, they used meteorological data to calculate a country’s outdoor relative humidity. They then estimated the average indoor relative humidity, based on outdoor relative humidity and guidelines on temperature ranges for human comfort. For instance, guidelines report that humans are comfortable between 66 and 77 degrees Fahrenheit indoors. They also assumed that on average, most populations have the means to heat indoor spaces to comfortable temperatures. Finally, they also collected experimental data, which they used to validate their estimation approach.

    For every instance when outdoor temperatures were below the typical human comfort range, they assumed indoor spaces were heated to reach that comfort range. Based on the added heating, they calculated the associated drop in indoor relative humidity.
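
    As a minimal sketch of this kind of estimate (the study’s actual procedure may differ in its details), one can hold the outdoor moisture content fixed, assume the space is heated to a comfort temperature whenever the outdoors is colder, and recompute relative humidity at the warmer indoor temperature. The 20-degree-Celsius comfort setpoint below is an assumption for illustration.

    ```python
    import math

    def sat_vp(temp_c: float) -> float:
        """Saturation vapor pressure (hPa), Magnus approximation."""
        return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

    def indoor_rh_estimate(outdoor_temp_c: float, outdoor_rh: float,
                           comfort_temp_c: float = 20.0) -> float:
        """Estimate indoor RH by keeping the outdoor moisture content but heating
        the air to comfort_temp_c whenever outdoors is colder than that."""
        vapor_pressure = outdoor_rh / 100.0 * sat_vp(outdoor_temp_c)  # actual moisture content
        indoor_temp = max(outdoor_temp_c, comfort_temp_c)             # heat only, never cool
        return 100.0 * vapor_pressure / sat_vp(indoor_temp)

    # A cold day at 0 degrees C and 70% RH outdoors dries to roughly 18% RH once heated to 20 degrees C
    print(f"{indoor_rh_estimate(0.0, 70.0):.0f}% indoor RH")
    ```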

    In warmer times, outdoor and indoor relative humidity for each country were about the same, but the two quickly diverged in colder times. While outdoor humidity remained around 50 percent throughout the year, indoor relative humidity for countries in the Northern and Southern Hemispheres dropped below 40 percent in their respective colder periods, when Covid-19 cases and deaths also spiked in these regions.

    For countries in the tropics, relative humidity was about the same indoors and outdoors throughout the year, with a gradual rise indoors during the region’s summer season, when high outdoor humidity likely raised the indoor relative humidity over 60 percent. They found this rise mirrored the gradual increase in Covid-19 deaths in the tropics.

    “We saw more reported Covid-19 deaths on the low and high end of indoor relative humidity, and less in this sweet spot of 40 to 60 percent,” Verheyen says. “This intermediate relative humidity window is associated with a better outcome, meaning fewer deaths and a deceleration of the pandemic.”

    “We were very skeptical initially, especially as the Covid-19 data can be noisy and inconsistent,” Bourouiba says. “We thus were very thorough trying to poke holes in our own analysis, using a range of approaches to test the limits and robustness of the findings, including taking into account factors such as government intervention. Despite all our best efforts, we found that even when considering countries with very strong versus very weak Covid-19 mitigation policies, or wildly different outdoor conditions, indoor — rather than outdoor — relative humidity maintains an underlying strong and robust link with Covid-19 outcomes.”

    It’s still unclear how indoor relative humidity affects Covid-19 outcomes. The team’s follow-up studies suggest that pathogens may survive longer in respiratory droplets in both very dry and very humid conditions.

    “Our ongoing work shows that there are emerging hints of mechanistic links between these factors,” Bourouiba says. “For now however, we can say that indoor relative humidity emerges in a robust manner as another mitigation lever that organizations and individuals can monitor, adjust, and maintain in the optimal 40 to 60 percent range, in addition to proper ventilation.”

    This research was made possible, in part, by an MIT Alumni Class fund, the Richard and Susan Smith Family Foundation, the National Institutes of Health, and the National Science Foundation.

  • Ocean microbes get their diet through a surprising mix of sources, study finds

    One of the smallest and mightiest organisms on the planet is a plant-like bacterium known to marine biologists as Prochlorococcus. The green-tinted microbe measures less than a micron across, and its populations suffuse through the upper layers of the ocean, where a single teaspoon of seawater can hold millions of the tiny organisms.

    Prochlorococcus grows through photosynthesis, using sunlight to convert the atmosphere’s carbon dioxide into organic carbon molecules. The microbe is responsible for 5 percent of the world’s photosynthesizing activity, and scientists have assumed that photosynthesis is the microbe’s go-to strategy for acquiring the carbon it needs to grow.

    But a new MIT study appearing today in Nature Microbiology finds that Prochlorococcus relies on another carbon-feeding strategy more than previously thought.

    Organisms that use a mix of strategies to acquire carbon are known as mixotrophs. Most marine plankton are mixotrophs. And while Prochlorococcus is known to occasionally dabble in mixotrophy, scientists have assumed the microbe primarily lives a phototrophic lifestyle.

    The new MIT study shows that in fact, Prochlorococcus may be more of a mixotroph than it lets on. The microbe may get as much as one-third of its carbon through a second strategy: consuming the dissolved remains of other dead microbes.

    The new estimate may have implications for climate models, as the microbe is a significant force in capturing and “fixing” carbon in the Earth’s atmosphere and ocean.

    “If we wish to predict what will happen to carbon fixation in a different climate, or predict where Prochlorococcus will or will not live in the future, we probably won’t get it right if we’re missing a process that accounts for one-third of the population’s carbon supply,” says Mick Follows, a professor in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS), and its Department of Civil and Environmental Engineering.

    The study’s co-authors include first author and MIT postdoc Zhen Wu, along with collaborators from the University of Haifa, the Leibniz-Institute for Baltic Sea Research, the Leibniz-Institute of Freshwater Ecology and Inland Fisheries, and Potsdam University.

    Persistent plankton

    Since Prochlorococcus was first discovered in the Sargasso Sea in 1986, by MIT Institute Professor Sallie “Penny” Chisholm and others, the microbe has been observed throughout the world’s oceans, inhabiting the upper sunlit layers ranging from the surface down to about 160 meters. Within this range, light levels vary, and the microbe has evolved a number of ways to photosynthesize carbon in even low-lit regions.

    The organism has also evolved ways to consume organic compounds including glucose and certain amino acids, which could help the microbe survive for limited periods of time in dark ocean regions. But surviving on organic compounds alone is a bit like only eating junk food, and there is evidence that Prochlorococcus will die after a week in regions where photosynthesis is not an option.

    And yet, researchers including Daniel Sher of the University of Haifa, who is a co-author of the new study, have observed healthy populations of Prochlorococcus that persist deep in the sunlit zone, where the light intensity should be too low to maintain a population. This suggests that the microbes must be switching to a non-photosynthesizing, mixotrophic lifestyle in order to consume other organic sources of carbon.

    “It seems that at least some Prochlorococcus are using existing organic carbon in a mixotrophic way,” Follows says. “That stimulated the question: How much?”

    What light cannot explain

    In their new paper, Follows, Wu, Sher, and their colleagues looked to quantify the amount of carbon that Prochlorococcus is consuming through processes other than photosynthesis.

    The team looked first to measurements taken by Sher’s team, which previously took ocean samples at various depths in the Mediterranean Sea and measured the concentration of phytoplankton, including Prochlorococcus, along with the associated intensity of light and the concentration of nitrogen — an essential nutrient that is richly available in deeper layers of the ocean and that plankton can assimilate to make proteins.

    Wu and Follows used this data, and similar information from the Pacific Ocean, along with previous work from Chisholm’s lab, which established the rate of photosynthesis that Prochlorococcus could carry out in a given intensity of light.

    “We converted that light intensity profile into a potential growth rate — how fast the population of Prochlorococcus could grow if it was acquiring all its carbon by photosynthesis, and light is the limiting factor,” Follows explains.

    The team then compared this calculated rate to growth rates that were previously observed in the Pacific Ocean by several other research teams.

    “This data showed that, below a certain depth, there’s a lot of growth happening that photosynthesis simply cannot explain,” Follows says. “Some other process must be at work to make up the difference in carbon supply.”

    The researchers inferred that, in deeper, darker regions of the ocean, Prochlorococcus populations are able to survive and thrive by resorting to mixotrophy, including consuming organic carbon from detritus. Specifically, the microbe may be carrying out osmotrophy — a process by which an organism passively absorbs organic carbon molecules via osmosis.

    Judging by how fast the microbe is estimated to be growing below the sunlit zone, the team calculates that Prochlorococcus obtains up to one-third of its carbon diet through mixotrophic strategies.
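
    The logic of that estimate can be illustrated with a toy calculation: compute the maximum growth rate that a light-limited photosynthesis curve could support at a given depth, compare it with the observed growth rate, and attribute the shortfall to other carbon sources. The saturating light-response curve and every parameter value below are illustrative assumptions, not the authors’ model.

    ```python
    import math

    def light_limited_growth(irradiance: float, mu_max: float = 0.6, alpha: float = 0.02) -> float:
        """Potential growth rate (per day) if photosynthesis alone supplied carbon.
        Saturating-exponential light-response curve; mu_max and alpha are made-up values."""
        return mu_max * (1.0 - math.exp(-alpha * irradiance / mu_max))

    def unexplained_carbon_fraction(observed_mu: float, irradiance: float) -> float:
        """Fraction of the carbon supply that photosynthesis at this light level cannot explain."""
        photo_mu = light_limited_growth(irradiance)
        return max(0.0, (observed_mu - photo_mu) / observed_mu)

    # Deep in the sunlit zone: low light, yet observed growth outpaces what light allows
    print(f"{unexplained_carbon_fraction(observed_mu=0.25, irradiance=10.0):.0%}")  # roughly one-third
    ```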

    “It’s kind of like going from a specialist to a generalist lifestyle,” Follows says. “If I only eat pizza, then if I’m 20 miles from a pizza place, I’m in trouble, whereas if I eat burgers as well, I could go to the nearby McDonald’s. People had thought of Prochlorococcus as a specialist, where they do this one thing (photosynthesis) really well. But it turns out they may have more of a generalist lifestyle than we previously thought.”

    Chisholm, who has both literally and figuratively written the book on Prochlorococcus, says the group’s findings “expand the range of conditions under which their populations can not only survive, but also thrive. This study changes the way we think about the role of Prochlorococcus in the microbial food web.”

    This research was supported, in part, by the Israel Science Foundation, the U.S. National Science Foundation, and the Simons Foundation.

  • Ocean scientists measure sediment plume stirred up by deep-sea-mining vehicle

    What will be the impact on the ocean if humans mine the deep sea? It’s a question that’s gaining urgency as interest in marine minerals has grown.

    The ocean’s deep-sea bed is scattered with ancient, potato-sized rocks called “polymetallic nodules” that contain nickel and cobalt — minerals in high demand for manufacturing batteries that power electric vehicles and store renewable energy, a demand that is growing with factors such as increasing urbanization. The deep ocean contains vast quantities of mineral-laden nodules, but the impact of mining the ocean floor is both unknown and highly contested.

    Now MIT ocean scientists have shed some light on the topic, with a new study on the cloud of sediment that a collector vehicle would stir up as it picks up nodules from the seafloor.

    The study, appearing today in Science Advances, reports the results of a 2021 research cruise to a region of the Pacific Ocean known as the Clarion Clipperton Zone (CCZ), where polymetallic nodules abound. There, researchers equipped a pre-prototype collector vehicle with instruments to monitor sediment plume disturbances as the vehicle maneuvered across the seafloor, 4,500 meters below the ocean’s surface. Through a sequence of carefully conceived maneuvers, the MIT scientists used the vehicle to monitor its own sediment cloud and measure its properties.

    Their measurements showed that the vehicle created a dense plume of sediment in its wake, which spread under its own weight, in a phenomenon known in fluid dynamics as a “turbidity current.” As it gradually dispersed, the plume remained relatively low, staying within 2 meters of the seafloor, as opposed to immediately lofting higher into the water column as had been postulated.

    “It’s quite a different picture of what these plumes look like, compared to some of the conjecture,” says study co-author Thomas Peacock, professor of mechanical engineering at MIT. “Modeling efforts of deep-sea mining plumes will have to account for these processes that we identified, in order to assess their extent.”

    The study’s co-authors include lead author Carlos Muñoz-Royo, Raphael Ouillon, and Souha El Mousadik of MIT; and Matthew Alford of the Scripps Institution of Oceanography.

    Deep-sea maneuvers

    To collect polymetallic nodules, some mining companies are proposing to deploy tractor-sized vehicles to the bottom of the ocean. The vehicles would vacuum up the nodules along with some sediment along their path. The nodules and sediment would then be separated inside of the vehicle, with the nodules sent up through a riser pipe to a surface vessel, while most of the sediment would be discharged immediately behind the vehicle.

    Peacock and his group have previously studied the dynamics of the sediment plume that associated surface operation vessels may pump back into the ocean. In their current study, they focused on the opposite end of the operation, to measure the sediment cloud created by the collectors themselves.

    In April 2021, the team joined an expedition led by Global Sea Mineral Resources NV (GSR), a Belgian marine engineering contractor that is exploring the CCZ for ways to extract metal-rich nodules. A European-based science team, Mining Impacts 2, also conducted separate studies in parallel. The cruise was the first in over 40 years to test a “pre-prototype” collector vehicle in the CCZ. The machine, called Patania II, stands about 3 meters high, spans 4 meters wide, and is about one-third the size of what a commercial-scale vehicle is expected to be.

    While the contractor tested the vehicle’s nodule-collecting performance, the MIT scientists monitored the sediment cloud created in the vehicle’s wake. They did so using two maneuvers that the vehicle was programmed to take: a “selfie,” and a “drive-by.”

    Both maneuvers began in the same way, with the vehicle setting out in a straight line, all its suction systems turned on. The researchers let the vehicle drive along for 100 meters, collecting any nodules in its path. Then, in the “selfie” maneuver, they directed the vehicle to turn off its suction systems and double back around to drive through the cloud of sediment it had just created. The vehicle’s installed sensors measured the concentration of sediment during this “selfie” maneuver, allowing the scientists to monitor the cloud within minutes of the vehicle stirring it up.

    A movie of the Patania II pre-prototype collector vehicle entering, driving through, and leaving the low-lying turbidity current plume as part of a selfie operation. For scale, the instrumentation post attached to the front of the vehicle reaches about 3m above the seabed. The movie is sped up by a factor of 20. Credit: Global Sea Mineral Resources

    For the “drive-by” maneuver, the researchers placed a sensor-laden mooring 50 to 100 meters from the vehicle’s planned tracks. As the vehicle drove along collecting nodules, it created a plume that eventually spread past the mooring after an hour or two. This “drive-by” maneuver enabled the team to monitor the sediment cloud over a longer timescale of several hours, capturing the plume evolution.

    Out of steam

    Over multiple vehicle runs, Peacock and his team were able to measure and track the evolution of the sediment plume created by the deep-sea-mining vehicle.

    “We saw that the vehicle would be driving in clear water, seeing the nodules on the seabed,” Peacock says. “And then suddenly there’s this very sharp sediment cloud coming through when the vehicle enters the plume.”

    From the selfie views, the team observed a behavior that was predicted by some of their previous modeling studies: The vehicle stirred up a heavy amount of sediment that was dense enough that, even after some mixing with the surrounding water, it generated a plume that behaved almost as a separate fluid, spreading under its own weight in what’s known as a turbidity current.

    “The turbidity current spreads under its own weight for some time, tens of minutes, but as it does so, it’s depositing sediment on the seabed and eventually running out of steam,” Peacock says. “After that, the ocean currents get stronger than the natural spreading, and the sediment transitions to being carried by the ocean currents.”
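
    For readers unfamiliar with turbidity currents, the classic gravity-current scaling gives a feel for why a dense sediment cloud hugs the seabed and slows as it thins: its front advances at roughly the square root of the reduced gravity times the plume height. This is a generic textbook scaling, not the plume model used in the study, and the numbers below are hypothetical.

    ```python
    import math

    def front_speed(excess_density: float, height: float,
                    rho_water: float = 1025.0, froude: float = 1.0) -> float:
        """Approximate gravity-current front speed (m/s): u = Fr * sqrt(g' * h),
        with reduced gravity g' = g * (excess density) / (seawater density)."""
        g_reduced = 9.81 * excess_density / rho_water
        return froude * math.sqrt(g_reduced * height)

    # A hypothetical 2-meter-thick plume with 5 kg/m^3 of excess density
    print(f"{front_speed(5.0, 2.0):.2f} m/s")  # about 0.3 m/s, fading as the plume thins and dilutes
    ```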

    By the time the sediment drifted past the mooring, the researchers estimate that 92 to 98 percent of the sediment either settled back down or remained within 2 meters of the seafloor as a low-lying cloud. There is, however, no guarantee that the sediment always stays there rather than drifting further up in the water column. Recent and future studies by the research team are looking into this question, with the goal of consolidating understanding for deep-sea mining sediment plumes.

    “Our study clarifies the reality of what the initial sediment disturbance looks like when you have a certain type of nodule mining operation,” Peacock says. “The big takeaway is that there are complex processes like turbidity currents that take place when you do this kind of collection. So, any effort to model a deep-sea-mining operation’s impact will have to capture these processes.”

    “Sediment plumes produced by deep-seabed mining are a major concern with regards to environmental impact, as they will spread over potentially large areas beyond the actual site of mining and affect deep-sea life,” says Henko de Stigter, a marine geologist at the Royal Netherlands Institute for Sea Research, who was not involved in the research. “The current paper provides essential insight in the initial development of these plumes.”

    This research was supported, in part, by the National Science Foundation, ARPA-E, the 11th Hour Project, the Benioff Ocean Initiative, and Global Sea Mineral Resources. The funders had no role in any aspects of the research analysis, the research team states.

  • These neurons have food on the brain

    A gooey slice of pizza. A pile of crispy French fries. Ice cream dripping down a cone on a hot summer day. When you look at any of these foods, a specialized part of your visual cortex lights up, according to a new study from MIT neuroscientists.

    This newly discovered population of food-responsive neurons is located in the ventral visual stream, alongside populations that respond specifically to faces, bodies, places, and words. The unexpected finding may reflect the special significance of food in human culture, the researchers say. 

    “Food is central to human social interactions and cultural practices. It’s not just sustenance,” says Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience and a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds, and Machines. “Food is core to so many elements of our cultural identity, religious practice, and social interactions, and many other things that humans do.”

    The findings, based on an analysis of a large public database of human brain responses to a set of 10,000 images, raise many additional questions about how and why this neural population develops. In future studies, the researchers hope to explore how people’s responses to certain foods might differ depending on their likes and dislikes, or their familiarity with certain types of food.

    MIT postdoc Meenakshi Khosla is the lead author of the paper, along with MIT research scientist N. Apurva Ratan Murty. The study appears today in the journal Current Biology.

    Visual categories

    More than 20 years ago, while studying the ventral visual stream, the part of the brain that recognizes objects, Kanwisher discovered cortical regions that respond selectively to faces. Later, she and other scientists discovered other regions that respond selectively to places, bodies, or words. Most of those areas were discovered when researchers specifically set out to look for them. However, that hypothesis-driven approach can limit what you end up finding, Kanwisher says.

    “There could be other things that we might not think to look for,” she says. “And even when we find something, how do we know that that’s actually part of the basic dominant structure of that pathway, and not something we found just because we were looking for it?”

    To try to uncover the fundamental structure of the ventral visual stream, Kanwisher and Khosla decided to analyze a large, publicly available dataset of full-brain functional magnetic resonance imaging (fMRI) responses from eight human subjects as they viewed thousands of images.

    “We wanted to see when we apply a data-driven, hypothesis-free strategy, what kinds of selectivities pop up, and whether those are consistent with what had been discovered before. A second goal was to see if we could discover novel selectivities that either haven’t been hypothesized before, or that have remained hidden due to the lower spatial resolution of fMRI data,” Khosla says.

    To do that, the researchers applied a mathematical method that allows them to discover neural populations that can’t be identified from traditional fMRI data. An fMRI image is made up of many voxels — three-dimensional units that represent a cube of brain tissue. Each voxel contains hundreds of thousands of neurons, and if some of those neurons belong to smaller populations that respond to one type of visual input, their responses may be drowned out by other populations within the same voxel.

    The new analytical method, which Kanwisher’s lab has previously used on fMRI data from the auditory cortex, can tease out responses of neural populations within each voxel of fMRI data.
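
    The underlying idea can be illustrated with a generic matrix factorization: treat the data as an images-by-voxels response matrix and factor it into a small number of shared response profiles plus per-voxel weights, so that populations mixed within a voxel show up as separate components. The authors’ actual algorithm differs; the non-negative matrix factorization below, run on synthetic data standing in for fMRI responses, is only an illustrative sketch.

    ```python
    import numpy as np
    from sklearn.decomposition import NMF

    # Synthetic stand-in for fMRI data: 1,000 voxels responding to 500 images,
    # secretly generated from 5 underlying neural populations mixed within voxels.
    rng = np.random.default_rng(0)
    true_profiles = rng.random((500, 5))       # each population's response to each image
    mixing_weights = rng.random((5, 1000))     # each population's contribution to each voxel
    responses = true_profiles @ mixing_weights + 0.01 * rng.random((500, 1000))

    # Factor the images-by-voxels matrix into shared response profiles and voxel weights.
    model = NMF(n_components=5, init="nndsvda", max_iter=500, random_state=0)
    component_profiles = model.fit_transform(responses)   # (images x components)
    voxel_weights = model.components_                     # (components x voxels)
    print(component_profiles.shape, voxel_weights.shape)  # (500, 5) (5, 1000)
    ```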

    Using this approach, the researchers found four populations that corresponded to previously identified clusters that respond to faces, places, bodies, and words. “That tells us that this method works, and it tells us that the things that we found before are not just obscure properties of that pathway, but major, dominant properties,” Kanwisher says.

    Intriguingly, a fifth population also emerged, and this one appeared to be selective for images of food.

    “We were first quite puzzled by this because food is not a visually homogenous category,” Khosla says. “Things like apples and corn and pasta all look so unlike each other, yet we found a single population that responds similarly to all these diverse food items.”

    The food-specific population, which the researchers call the ventral food component (VFC), appears to be spread across two clusters of neurons, located on either side of the fusiform face area (FFA), the brain’s face-selective region. The fact that the food-specific populations are spread out between other category-specific populations may help explain why they have not been seen before, the researchers say.

    “We think that food selectivity had been harder to characterize before because the populations that are selective for food are intermingled with other nearby populations that have distinct responses to other stimulus attributes. The low spatial resolution of fMRI prevents us from seeing this selectivity because the responses of different neural populations get mixed in a voxel,” Khosla says.

    “The technique which the researchers used to identify category-sensitive cells or areas is impressive, and it recovered known category-sensitive systems, making the food category findings most impressive,” says Paul Rozin, a professor of psychology at the University of Pennsylvania, who was not involved in the study. “I can’t imagine a way for the brain to reliably identify the diversity of foods based on sensory features. That makes this all the more fascinating, and likely to clue us in about something really new.”

    Food vs non-food

    The researchers also used the data to train a computational model of the VFC, based on previous models Murty had developed for the brain’s face and place recognition areas. This allowed the researchers to run additional experiments and predict the responses of the VFC. In one experiment, they fed the model matched images of food and non-food items that looked very similar — for example, a banana and a yellow crescent moon.

    “Those matched stimuli have very similar visual properties, but the main attribute in which they differ is edible versus inedible,” Khosla says. “We could feed those arbitrary stimuli through the predictive model and see whether it would still respond more to food than non-food, without having to collect the fMRI data.”

    They could also use the computational model to analyze much larger datasets, consisting of millions of images. Those simulations helped to confirm that the VFC is highly selective for images of food.
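
    A generic version of such an image-computable encoding model: extract features from a pretrained vision network, fit a linear readout to the measured responses, and then query the fitted model with arbitrary new images (a banana versus a crescent moon, say) without collecting more fMRI data. The choice of network, feature layer, and regression below is an assumption for illustration and is not the authors’ model; the variable names in the usage comment are hypothetical.

    ```python
    import torch
    import torchvision.models as models
    from sklearn.linear_model import RidgeCV

    # Feature extractor: a pretrained CNN with its classification head removed.
    backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    backbone.fc = torch.nn.Identity()   # keep the 512-dimensional penultimate features
    backbone.eval()

    def features(images: torch.Tensor) -> torch.Tensor:
        """images: an (N, 3, 224, 224) tensor, already preprocessed for the network."""
        with torch.no_grad():
            return backbone(images)

    def fit_readout(train_images: torch.Tensor, measured_response):
        """Fit a ridge-regression readout from image features to a measured component response."""
        readout = RidgeCV(alphas=[0.1, 1.0, 10.0, 100.0])
        readout.fit(features(train_images).numpy(), measured_response)
        return readout

    # Hypothetical usage, given training images and a measured food-component response:
    # model = fit_readout(train_images, vfc_response)
    # predictions = model.predict(features(food_and_nonfood_images).numpy())
    ```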

    From their analysis of the human fMRI data, the researchers found that in some subjects, the VFC responded slightly more to processed foods such as pizza than unprocessed foods like apples. In the future they hope to explore how factors such as familiarity and like or dislike of a particular food might affect individuals’ responses to that food.

    They also hope to study when and how this region becomes specialized during early childhood, and what other parts of the brain it communicates with. Another question is whether this food-selective population will be seen in other animals such as monkeys, who do not attach the cultural significance to food that humans do.

    The research was funded by the National Institutes of Health, the National Eye Institute, and the National Science Foundation through the MIT Center for Brains, Minds, and Machines.

  • A better way to quantify radiation damage in materials

    It was just a piece of junk sitting in the back of a lab at the MIT Nuclear Reactor facility, ready to be disposed of. But it became the key to demonstrating a more comprehensive way of detecting atomic-level structural damage in materials — an approach that will aid the development of new materials, and could potentially support the ongoing operation of carbon-emission-free nuclear power plants, which would help alleviate global climate change.

    A tiny titanium nut that had been removed from inside the reactor was just the kind of material needed to prove that this new technique, developed at MIT and at other institutions, provides a way to probe defects created inside materials, including those that have been exposed to radiation, with five times greater sensitivity than existing methods.

    The new approach revealed that much of the damage that takes place inside reactors is at the atomic scale, and as a result is difficult to detect using existing methods. The technique provides a way to directly measure this damage through the way it changes with temperature. And it could be used to measure samples from the currently operating fleet of nuclear reactors, potentially enabling the continued safe operation of plants far beyond their presently licensed lifetimes.

    The findings are reported today in the journal Science Advances in a paper by MIT research specialist and recent graduate Charles Hirst PhD ’22; MIT professors Michael Short, Scott Kemp, and Ju Li; and five others at the University of Helsinki, the Idaho National Laboratory, and the University of California at Irvine.

    Rather than directly observing the physical structure of a material in question, the new approach looks at the amount of energy stored within that structure. Any disruption to the orderly structure of atoms within the material, such as that caused by radiation exposure or by mechanical stresses, actually imparts excess energy to the material. By observing and quantifying that energy difference, it’s possible to calculate the total amount of damage within the material — even if that damage is in the form of atomic-scale defects that are too small to be imaged with microscopes or other detection methods.

    The principle behind this method had been worked out in detail through calculations and simulations. But it was the actual tests on that one titanium nut from the MIT nuclear reactor that provided the proof — and thus opened the door to a new way of measuring damage in materials.

    The method they used is called differential scanning calorimetry. As Hirst explains, this is similar in principle to the calorimetry experiments many students carry out in high school chemistry classes, where they measure how much energy it takes to raise the temperature of a gram of water by one degree. The system the researchers used was “fundamentally the exact same thing, measuring energetic changes. … I like to call it just a fancy furnace with a thermocouple inside.”

    The scanning part has to do with gradually raising the temperature a bit at a time and seeing how the sample responds, and the differential part refers to the fact that two identical chambers are measured at once, one empty, and one containing the sample being studied. The difference between the two reveals details of the energy of the sample, Hirst explains.

    “We raise the temperature from room temperature up to 600 degrees Celsius, at a constant rate of 50 degrees per minute,” he says. Compared to the empty vessel, “your material will naturally lag behind because you need energy to heat your material. But if there are changes in the energy inside the material, that will change the temperature. In our case, there was an energy release when the defects recombine, and then it will get a little bit of a head start on the furnace … and that’s how we are measuring the energy in our sample.”
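
    In other words, the stored energy shows up as excess heat flow relative to the empty reference chamber, and integrating that excess over the temperature ramp gives the energy released as defects recombine. The sketch below, with made-up heat-flow data shaped as two release peaks, illustrates only that arithmetic, not the authors’ analysis pipeline.

    ```python
    import numpy as np

    # Temperature ramp: room temperature to 600 degrees C at 50 degrees C per minute.
    temperature_c = np.linspace(25.0, 600.0, 1000)
    heating_rate_c_per_s = 50.0 / 60.0
    time_s = (temperature_c - temperature_c[0]) / heating_rate_c_per_s

    def gaussian(t, center, width, height):
        return height * np.exp(-((t - center) / width) ** 2)

    # Hypothetical excess heat flow (W/g) of the sample relative to the empty reference:
    # two release peaks, mimicking two defect-recombination mechanisms.
    excess_heat_flow = (gaussian(temperature_c, 250.0, 30.0, 0.004)
                        + gaussian(temperature_c, 450.0, 40.0, 0.002))

    # Stored energy released (J/g) = integral of the excess heat flow over time.
    stored_energy = np.trapz(excess_heat_flow, time_s)
    print(f"Stored energy released: {stored_energy:.2f} J/g")
    ```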

    Hirst, who carried out the work over a five-year span as his doctoral thesis project, found that contrary to what had been believed, the irradiated material showed that there were two different mechanisms involved in the relaxation of defects in titanium at the studied temperatures, revealed by two separate peaks in calorimetry. “Instead of one process occurring, we clearly saw two, and each of them corresponds to a different reaction that’s happening in the material,” he says.

    They also found that textbook explanations of how radiation damage behaves with temperature weren’t accurate, because previous tests had mostly been carried out at extremely low temperatures and then extrapolated to the higher temperatures of real-life reactor operations. “People weren’t necessarily aware that they were extrapolating, even though they were, completely,” Hirst says.

    “The fact is that our common-knowledge basis for how radiation damage evolves is based on extremely low-temperature electron radiation,” adds Short. “It just became the accepted model, and that’s what’s taught in all the books. It took us a while to realize that our general understanding was based on a very specific condition, designed to elucidate science, but generally not applicable to conditions in which we actually want to use these materials.”

    Now, the new method can be applied “to materials plucked from existing reactors, to learn more about how they are degrading with operation,” Hirst says.

    “The single biggest thing the world can do in order to get cheap, carbon-free power is to keep current reactors on the grid. They’re already paid for, they’re working,” Short adds.  But to make that possible, “the only way we can keep them on the grid is to have more certainty that they will continue to work well.” And that’s where this new way of assessing damage comes into play.

    While most nuclear power plants have been licensed for 40 to 60 years of operation, “we’re now talking about running those same assets out to 100 years, and that depends almost fully on the materials being able to withstand the most severe accidents,” Short says. Using this new method, “we can inspect them and take them out before something unexpected happens.”

    In practice, plant operators could remove a tiny sample of material from critical areas of the reactor, and analyze it to get a more complete picture of the condition of the overall reactor. Keeping existing reactors running is “the single biggest thing we can do to keep the share of carbon-free power high,” Short stresses. “This is one way we think we can do that.”

    Sergei Dudarev, a fellow at the United Kingdom Atomic Energy Authority who was not associated with this work, says this “is likely going to be impactful, as it confirms, in a nice systematic manner, supported both by experiment and simulations, the unexpectedly significant part played by the small invisible defects in microstructural evolution of materials exposed to irradiation.”

    The process is not just limited to the study of metals, nor is it limited to damage caused by radiation, the researchers say. In principle, the method could be used to measure other kinds of defects in materials, such as those caused by stresses or shockwaves, and it could be applied to materials such as ceramics or semiconductors as well.

    In fact, Short says, metals are the most difficult materials to measure with this method, and early on other researchers kept asking why this team was focused on damage to metals. That was partly because reactor components tend to be made of metal, and also because “It’s the hardest, so, if we crack this problem, we have a tool to crack them all!”

    Measuring defects in other kinds of materials can be up to 10,000 times easier than in metals, he says. “If we can do this with metals, we can make this extremely, ubiquitously applicable.” And all of it enabled by a small piece of junk that was sitting at the back of a lab.

    The research team included Fredric Granberg and Kai Nordlund at the University of Helsinki in Finland; Boopathy Kombaiah and Scott Middlemas at Idaho National Laboratory; and Penghui Cao at the University of California at Irvine. The work was supported by the U.S. National Science Foundation, an Idaho National Laboratory research grant, and a Euratom Research and Training program grant.

  • Structures considered key to gene expression are surprisingly fleeting

    In human chromosomes, DNA is coated by proteins to form an exceedingly long beaded string. This “string” is folded into numerous loops, which are believed to help cells control gene expression and facilitate DNA repair, among other functions. A new study from MIT suggests that these loops are very dynamic and shorter-lived than previously thought.

    In the new study, the researchers were able to monitor the movement of one stretch of the genome in a living cell for about two hours. They saw that this stretch was fully looped for only 3 to 6 percent of the time, with the loop lasting for only about 10 to 30 minutes. The findings suggest that scientists’ current understanding of how loops influence gene expression may need to be revised, the researchers say.

    “Many models in the field have been these pictures of static loops regulating these processes. What our new paper shows is that this picture is not really correct,” says Anders Sejr Hansen, the Underwood-Prescott Career Development Assistant Professor of Biological Engineering at MIT. “We suggest that the functional state of these domains is much more dynamic.”

    Hansen is one of the senior authors of the new study, along with Leonid Mirny, a professor in MIT’s Institute for Medical Engineering and Science and the Department of Physics, and Christoph Zechner, a group leader at the Max Planck Institute of Molecular Cell Biology and Genetics in Dresden, Germany, and the Center for Systems Biology Dresden. MIT postdoc Michele Gabriele, recent Harvard University PhD recipient Hugo Brandão, and MIT graduate student Simon Grosse-Holz are the lead authors of the paper, which appears today in Science.

    Out of the loop

    Using computer simulations and experimental data, scientists including Mirny’s group at MIT have shown that loops in the genome are formed by a process called extrusion, in which a molecular motor promotes the growth of progressively larger loops. The motor stops each time it encounters a “stop sign” on DNA. The motor that extrudes such loops is a protein complex called cohesin, while the DNA-bound protein CTCF serves as the stop sign. These cohesin-mediated loops between CTCF sites were seen in previous experiments.
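
    The extrusion picture lends itself to a very simple cartoon simulation: a cohesin complex loads at a random position, its two legs walk outward step by step, and each leg halts when it reaches a CTCF stop sign. The one-dimensional model below, with arbitrary parameters, is that cartoon only, not the simulations used in the published work.

    ```python
    import random

    def extrude_loop(ctcf_sites=(200, 800), unload_prob: float = 0.001, seed: int = 1):
        """Grow one loop by symmetric extrusion until both legs reach CTCF sites,
        or return None if the extruder unloads first. Positions are lattice indices."""
        rng = random.Random(seed)
        left_stop, right_stop = min(ctcf_sites), max(ctcf_sites)
        left = right = rng.randrange(left_stop, right_stop)   # cohesin loads inside the domain
        while not (left == left_stop and right == right_stop):
            if rng.random() < unload_prob:
                return None                  # fell off before the loop fully formed
            if left > left_stop:
                left -= 1                    # left leg extrudes until it hits the left CTCF site
            if right < right_stop:
                right += 1                   # right leg extrudes until it hits the right CTCF site
        return (left, right)                 # fully looped state between the two CTCF anchors

    print(extrude_loop())
    ```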

    However, those experiments only offered a snapshot of a moment in time, with no information on how the loops change over time. In their new study, the researchers developed techniques that allowed them to fluorescently label CTCF DNA sites so they could image the DNA loops over several hours. They also created a new computational method that can infer the looping events from the imaging data.

    “This method was crucial for us to distinguish signal from noise in our experimental data and quantify looping,” Zechner says. “We believe that such approaches will become increasingly important for biology as we continue to push the limits of detection with experiments.”

    The researchers used their method to image a stretch of the genome in mouse embryonic stem cells. “If we put our data in the context of one cell division cycle, which lasts about 12 hours, the fully formed loop only actually exists for about 20 to 45 minutes, or about 3 to 6 percent of the time,” Grosse-Holz says.

    “If the loop is only present for such a tiny period of the cell cycle and very short-lived, we shouldn’t think of this fully looped state as being the primary regulator of gene expression,” Hansen says. “We think we need new models for how the 3D structure of the genome regulates gene expression, DNA repair, and other functional downstream processes.”

    While fully formed loops were rare, the researchers found that partially extruded loops were present about 92 percent of the time. These smaller loops have been difficult to observe with the previous methods of detecting loops in the genome.

    “In this study, by integrating our experimental data with polymer simulations, we have now been able to quantify the relative extents of the unlooped, partially extruded, and fully looped states,” Brandão says.

    “Since these interactions are very short, but very frequent, the previous methodologies were not able to fully capture their dynamics,” Gabriele adds. “With our new technique, we can start to resolve transitions between fully looped and unlooped states.”

    The researchers hypothesize that these partial loops may play more important roles in gene regulation than fully formed loops. Strands of DNA run along each other as loops begin to form and then fall apart, and these interactions may help regulatory elements such as enhancers and gene promoters find each other.

    “More than 90 percent of the time, there are some transient loops, and presumably what’s important is having those loops that are being perpetually extruded,” Mirny says. “The process of extrusion itself may be more important than the fully looped state that only occurs for a short period of time.”

    More loops to study

    Since most of the other loops in the genome are weaker than the one the researchers studied in this paper, they suspect that many other loops will also prove to be highly transient. They now plan to use their new technique to study some of those other loops, in a variety of cell types.

    “There are about 10,000 of these loops, and we’ve looked at one,” Hansen says. “We have a lot of indirect evidence to suggest that the results would be generalizable, but we haven’t demonstrated that. Using the technology platform we’ve set up, which combines new experimental and computational methods, we can begin to approach other loops in the genome.”

    The researchers also plan to investigate the role of specific loops in disease. Many diseases, including a neurodevelopmental disorder called FOXG1 syndrome, could be linked to faulty loop dynamics. The researchers are now studying how both the normal and mutated form of the FOXG1 gene, as well as the cancer-causing gene MYC, are affected by genome loop formation.

    The research was funded by the National Institutes of Health, the National Science Foundation, the Mathers Foundation, a Pew-Stewart Cancer Research Scholar grant, the Chaires d’excellence Internationale Blaise Pascal, an American-Italian Cancer Foundation research scholarship, and the Max Planck Institute for Molecular Cell Biology and Genetics.

  • Engineers enlist AI to help scale up advanced solar cell manufacturing

    Perovskites are a family of materials that are currently the leading contender to potentially replace today’s silicon-based solar photovoltaics. They hold the promise of panels that are far thinner and lighter, that could be made with ultra-high throughput at room temperature instead of at hundreds of degrees, and that are cheaper and easier to transport and install. But bringing these materials from controlled laboratory experiments into a product that can be manufactured competitively has been a long struggle.

    Manufacturing perovskite-based solar cells involves optimizing at least a dozen or so variables at once, even within one particular manufacturing approach among many possibilities. But a new system based on a novel approach to machine learning could speed up the development of optimized production methods and help make the next generation of solar power a reality.

    The system, developed by researchers at MIT and Stanford University over the last few years, makes it possible to integrate data from prior experiments, and information based on personal observations by experienced workers, into the machine learning process. This makes the outcomes more accurate and has already led to the manufacturing of perovskite cells with an energy conversion efficiency of 18.5 percent, a competitive level for today’s market.

    The research is reported today in the journal Joule, in a paper by MIT professor of mechanical engineering Tonio Buonassisi, Stanford professor of materials science and engineering Reinhold Dauskardt, recent MIT research assistant Zhe Liu, Stanford doctoral graduate Nicholas Rolston, and three others.

    Perovskites are a group of layered crystalline compounds defined by the configuration of the atoms in their crystal lattice. There are thousands of such possible compounds and many different ways of making them. While most lab-scale development of perovskite materials uses a spin-coating technique, that’s not practical for larger-scale manufacturing, so companies and labs around the world have been searching for ways of translating these lab materials into a practical, manufacturable product.

    “There’s always a big challenge when you’re trying to take a lab-scale process and then transfer it to something like a startup or a manufacturing line,” says Rolston, who is now an assistant professor at Arizona State University. The team looked at a process that they felt had the greatest potential, a method called rapid spray plasma processing, or RSPP.

    The manufacturing process would involve a moving roll-to-roll surface, or series of sheets, on which the precursor solutions for the perovskite compound would be sprayed or ink-jetted as the sheet rolled by. The material would then move on to a curing stage, providing a rapid and continuous output “with throughputs that are higher than for any other photovoltaic technology,” Rolston says.

    “The real breakthrough with this platform is that it would allow us to scale in a way that no other material has allowed us to do,” he adds. “Even materials like silicon require a much longer timeframe because of the processing that’s done. Whereas you can think of [this approach as more] like spray painting.”

    Within that process, at least a dozen variables may affect the outcome, some of them more controllable than others. These include the composition of the starting materials, the temperature, the humidity, the speed of the processing path, the distance of the nozzle used to spray the material onto a substrate, and the methods of curing the material. Many of these factors can interact with each other, and if the process is in open air, then humidity, for example, may be uncontrolled. Evaluating all possible combinations of these variables through experimentation is impossible, so machine learning was needed to help guide the experimental process.

    But while most machine-learning systems use raw data such as measurements of the electrical and other properties of test samples, they don’t typically incorporate human experience such as qualitative observations made by the experimenters of the visual and other properties of the test samples, or information from other experiments reported by other researchers. So, the team found a way to incorporate such outside information into the machine learning model, using a probability factor based on a mathematical technique called Bayesian Optimization.
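
    The team’s own code is available on GitHub; the sketch below is only a generic illustration of Bayesian optimization over a few process knobs using the off-the-shelf scikit-optimize library, with a made-up objective standing in for a measured cell efficiency. It does not include the incorporation of prior experiments or qualitative observations described above.

    ```python
    from skopt import gp_minimize
    from skopt.space import Real

    # Hypothetical process knobs for a spray-based perovskite deposition.
    space = [
        Real(80.0, 200.0, name="substrate_temp_c"),
        Real(5.0, 30.0, name="nozzle_distance_mm"),
        Real(10.0, 60.0, name="path_speed_mm_s"),
    ]

    def negative_efficiency(params):
        """Stand-in for running a deposition and measuring efficiency (%);
        a real workflow would perform the experiment and return -efficiency."""
        temp, distance, speed = params
        return -(18.5 - 0.001 * (temp - 150.0) ** 2
                      - 0.02 * (distance - 15.0) ** 2
                      - 0.005 * (speed - 30.0) ** 2)

    # Gaussian-process Bayesian optimization: propose settings, evaluate, update the model.
    result = gp_minimize(negative_efficiency, space, n_calls=30, random_state=0)
    print("Best efficiency (%):", -result.fun)
    print("Best settings:", result.x)
    ```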

    Using the system, he says, “having a model that comes from experimental data, we can find out trends that we weren’t able to see before.” For example, they initially had trouble adjusting for uncontrolled variations in humidity in their ambient setting. But the model showed them “that we could overcome our humidity challenges by changing the temperature, for instance, and by changing some of the other knobs.”

    The system now allows experimenters to much more rapidly guide their process in order to optimize it for a given set of conditions or required outcomes. In their experiments, the team focused on optimizing the power output, but the system could also be used to simultaneously incorporate other criteria, such as cost and durability — something members of the team are continuing to work on, Buonassisi says.

    The researchers were encouraged by the Department of Energy, which sponsored the work, to commercialize the technology, and they’re currently focusing on tech transfer to existing perovskite manufacturers. “We are reaching out to companies now,” Buonassisi says, and the code they developed has been made freely available through an open-source server. “It’s now on GitHub, anyone can download it, anyone can run it,” he says. “We’re happy to help companies get started in using our code.”

    Already, several companies are gearing up to produce perovskite-based solar panels, even though they are still working out the details of how to produce them, says Liu, who is now at the Northwestern Polytechnical University in Xi’an, China. He says companies there are not yet doing large-scale manufacturing, but instead starting with smaller, high-value applications such as building-integrated solar tiles where appearance is important. Three of these companies “are on track or are being pushed by investors to manufacture 1 meter by 2-meter rectangular modules [comparable to today’s most common solar panels], within two years,” he says.

    “The problem is, they don’t have a consensus on what manufacturing technology to use,” Liu says. The RSPP method, developed at Stanford, “still has a good chance” to be competitive, he says. And the machine learning system the team developed could prove to be important in guiding the optimization of whatever process ends up being used.

    “The primary goal was to accelerate the process, so it required less time, less experiments, and less human hours to develop something that is usable right away, for free, for industry,” he says.

    “Existing work on machine-learning-driven perovskite PV fabrication largely focuses on spin-coating, a lab-scale technique,” says Ted Sargent, University Professor at the University of Toronto, who was not associated with this work, which he says demonstrates “a workflow that is readily adapted to the deposition techniques that dominate the thin-film industry. Only a handful of groups have the simultaneous expertise in engineering and computation to drive such advances.” Sargent adds that this approach “could be an exciting advance for the manufacture of a broader family of materials” including LEDs, other PV technologies, and graphene, “in short, any industry that uses some form of vapor or vacuum deposition.” 

    The team also included Austin Flick and Thomas Colburn at Stanford and Zekun Ren at the Singapore-MIT Alliance for Science and Technology (SMART). In addition to the Department of Energy, the work was supported by a fellowship from the MIT Energy Initiative, the Graduate Research Fellowship Program from the National Science Foundation, and the SMART program.

  • A better way to separate gases

    Industrial processes for chemical separations, including natural gas purification and the production of oxygen and nitrogen for medical or industrial uses, are collectively responsible for about 15 percent of the world’s energy use. They also contribute a corresponding amount to the world’s greenhouse gas emissions. Now, researchers at MIT and Stanford University have developed a new kind of membrane for carrying out these separation processes with roughly 1/10 the energy use and emissions.

    Using membranes for separation of chemicals is known to be much more efficient than processes such as distillation or absorption, but there has always been a tradeoff between permeability — how fast gases can penetrate through the material — and selectivity — the ability to let the desired molecules pass through while blocking all others. The new family of membrane materials, based on “hydrocarbon ladder” polymers, overcomes that tradeoff, providing both high permeability and extremely good selectivity, the researchers say.

    The findings are reported today in the journal Science, in a paper by Yan Xia, an associate professor of chemistry at Stanford; Zachary Smith, an assistant professor of chemical engineering at MIT; Ingo Pinnau, a professor at King Abdullah University of Science and Technology; and five others.

    Gas separation is an important and widespread industrial process whose uses include removing impurities and undesired compounds from natural gas or biogas, separating oxygen and nitrogen from air for medical and industrial purposes, separating carbon dioxide from other gases for carbon capture, and producing hydrogen for use as a carbon-free transportation fuel. The new ladder polymer membranes show promise for drastically improving the performance of such separation processes. For example, separating carbon dioxide from methane, these new membranes have five times the selectivity and 100 times the permeability of existing cellulosic membranes for that purpose. Similarly, they are 100 times more permeable and three times as selective for separating hydrogen gas from methane.
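
    Two numbers summarize the tradeoff described above: the permeability of each gas through the membrane and the ideal selectivity, which is simply their ratio. The tiny sketch below shows that bookkeeping with hypothetical permeability values; the figures are not data from the paper.

    ```python
    def ideal_selectivity(perm_a: float, perm_b: float) -> float:
        """Ideal selectivity for gas A over gas B = P_A / P_B."""
        return perm_a / perm_b

    # Hypothetical permeabilities (in barrer) for a CO2/CH4 separation membrane
    co2_permeability = 500.0
    ch4_permeability = 10.0
    print(f"CO2/CH4 selectivity: {ideal_selectivity(co2_permeability, ch4_permeability):.0f}")

    # A membrane that is simultaneously far more permeable and more selective, as the
    # ladder polymers are reported to be, moves both numbers up instead of trading one off.
    ```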

    The new type of polymers, developed over the last several years by the Xia lab, are referred to as ladder polymers because they are formed from double strands connected by rung-like bonds, and these linkages provide a high degree of rigidity and stability to the polymer material. These ladder polymers are synthesized via an efficient and selective chemistry the Xia lab developed called CANAL, an acronym for catalytic arene-norbornene annulation, which stitches readily available chemicals into ladder structures with hundreds or even thousands of rungs. The polymers are synthesized in a solution, where they form rigid and kinked ribbon-like strands that can easily be made into a thin sheet with sub-nanometer-scale pores by using industrially available polymer casting processes. The sizes of the resulting pores can be tuned through the choice of the specific hydrocarbon starting compounds. “This chemistry and choice of chemical building blocks allowed us to make very rigid ladder polymers with different configurations,” Xia says.

    To apply the CANAL polymers as selective membranes, the collaboration made use of Xia’s expertise in polymers and Smith’s specialization in membrane research. Holden Lai, a former Stanford doctoral student, carried out much of the development and exploration of how their structures impact gas permeation properties. “It took us eight years from developing the new chemistry to finding the right polymer structures that bestow the high separation performance,” Xia says.

    The Xia lab spent the past several years varying the structures of CANAL polymers to understand how their structures affect their separation performance. Surprisingly, they found that adding additional kinks to their original CANAL polymers significantly improved the mechanical robustness of their membranes and boosted their selectivity  for molecules of similar sizes, such as oxygen and nitrogen gases, without losing permeability of the more permeable gas. The selectivity actually improves as the material ages. The combination of high selectivity and high permeability makes these materials outperform all other polymer materials in many gas separations, the researchers say.

    Today, 15 percent of global energy use goes into chemical separations, and these separation processes are “often based on century-old technologies,” Smith says. “They work well, but they have an enormous carbon footprint and consume massive amounts of energy. The key challenge today is trying to replace these nonsustainable processes.” Most of these processes require high temperatures for boiling and reboiling solutions, and these often are the hardest processes to electrify, he adds.

    For the separation of oxygen and nitrogen from air, the two molecules only differ in size by about 0.18 angstroms (ten-billionths of a meter), he says. To make a filter capable of separating them efficiently “is incredibly difficult to do without decreasing throughput.” But the new ladder polymers, when manufactured into membranes, produce tiny pores that achieve high selectivity, he says. In some cases, 10 oxygen molecules permeate for every nitrogen molecule, despite the razor-thin sieve needed to access this type of size selectivity. These new membrane materials have “the highest combination of permeability and selectivity of all known polymeric materials for many applications,” Smith says.

    “Because CANAL polymers are strong and ductile, and because they are soluble in certain solvents, they could be scaled for industrial deployment within a few years,” he adds. An MIT spinoff company called Osmoses, led by authors of this study, recently won the MIT $100K entrepreneurship competition and has been partly funded by The Engine to commercialize the technology.

    There are a variety of potential applications for these materials in the chemical processing industry, Smith says, including the separation of carbon dioxide from other gas mixtures as a form of emissions reduction. Another possibility is the purification of biogas fuel made from agricultural waste products in order to provide carbon-free transportation fuel. Hydrogen separation for producing a fuel or a chemical feedstock could also be carried out efficiently, helping with the transition to a hydrogen-based economy.

    The close-knit team of researchers is continuing to refine the process to facilitate the development from laboratory to industrial scale, and to better understand the details on how the macromolecular structures and packing result in the ultrahigh selectivity. Smith says he expects this platform technology to play a role in multiple decarbonization pathways, starting with hydrogen separation and carbon capture, because there is such a pressing need for these technologies in order to transition to a carbon-free economy.

    “These are impressive new structures that have outstanding gas separation performance,” says Ryan Lively, an associate professor of chemical and biomolecular engineering at Georgia Tech, who was not involved in this work. “Importantly, this performance is improved during membrane aging and when the membranes are challenged with concentrated gas mixtures. … If they can scale these materials and fabricate membrane modules, there is significant potential practical impact.”

    The research team also included Jun Myun Ahn and Ashley Robinson at Stanford, Francesco Benedetti at MIT, now the chief executive officer at Osmoses, and Yingge Wang at King Abdullah University of Science and Technology in Saudi Arabia. The work was supported by the Stanford Natural Gas Initiative, the Sloan Research Fellowship, the U.S. Department of Energy Office of Basic Energy Sciences, and the National Science Foundation.