More stories

  • River erosion can shape fish evolution, study suggests

    If we could rewind the tape of species evolution around the world and play it forward over hundreds of millions of years to the present day, we would see biodiversity clustering around regions of tectonic turmoil. Tectonically active regions such as the Himalayan and Andean mountains are especially rich in flora and fauna due to their shifting landscapes, which act to divide and diversify species over time.

    But biodiversity can also flourish in some geologically quieter regions, where tectonics hasn’t shaken up the land for millennia. The Appalachian Mountains are a prime example: The range has not seen much tectonic activity in hundreds of millions of years, and yet the region is a notable hotspot of freshwater biodiversity.

    Now, an MIT study identifies a geological process that may shape the diversity of species in tectonically inactive regions. In a paper appearing today in Science, the researchers report that river erosion can be a driver of biodiversity in these older, quieter environments.

    They make their case in the southern Appalachians, and specifically the Tennessee River Basin, a region known for its huge diversity of freshwater fishes. The team found that as rivers eroded through different rock types in the region, the changing landscape pushed a species of fish known as the greenfin darter into different tributaries of the river network. Over time, these separated populations developed into their own distinct lineages.

    The team speculates that erosion likely drove the greenfin darter to diversify. Although the separated populations appear outwardly similar, with the greenfin darter’s characteristic green-tinged fins, they differ substantially in their genetic makeup. For now, the separated populations are classified as a single species.

    “Give this process of erosion more time, and I think these separate lineages will become different species,” says Maya Stokes PhD ’21, who carried out part of the work as a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS).

    The greenfin darter may not be the only species to diversify as a consequence of river erosion. The researchers suspect that erosion may have driven many other species to diversify throughout the basin, and possibly other tectonically inactive regions around the world.

    “If we can understand the geologic factors that contribute to biodiversity, we can do a better job of conserving it,” says Taylor Perron, the Cecil and Ida Green Professor of Earth, Atmospheric, and Planetary Sciences at MIT.

    The study’s co-authors include collaborators at Yale University, Colorado State University, the University of Tennessee, the University of Massachusetts at Amherst, and the Tennessee Valley Authority (TVA). Stokes is currently an assistant professor at Florida State University.

    Fish in trees

    The new study grew out of Stokes’ PhD work at MIT, where she and Perron were exploring connections between geomorphology (the study of how landscapes evolve) and biology. They came across work at Yale by Thomas Near, who studies lineages of North American freshwater fishes. Near uses DNA sequence data collected from freshwater fishes across various regions of North America to show how and when certain species evolved and diverged in relation to each other.

    Near brought a curious observation to the team: a habitat distribution map of the greenfin darter showing that the fish was found in the Tennessee River Basin — but only in the southern half. What’s more, Near had mitochondrial DNA sequence data showing that the fish’s populations appeared to be different in their genetic makeup depending on the tributary in which they were found.

    To investigate the reasons for this pattern, Stokes gathered greenfin darter tissue samples from Near’s extensive collection at Yale, as well as from the field with help from TVA colleagues. She then analyzed DNA sequences from across the entire genome, and compared the genes of each individual fish to every other fish in the dataset. The team then created a phylogenetic tree of the greenfin darter, based on the genetic similarity between fish.
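
    The tree-building step can be sketched in a few lines: cluster individual fish by genetic distance and read relatedness off the resulting dendrogram. The snippet below uses simple average-linkage (UPGMA) clustering as a stand-in for the study’s actual phylogenetic methods; the fish labels and distance values are invented for illustration.

    ```python
    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.cluster.hierarchy import linkage, dendrogram
    from scipy.spatial.distance import squareform

    # Pairwise genetic distances between five hypothetical fish; values are
    # made up. Fish from the same tributary are most alike.
    fish = ["TribA_1", "TribA_2", "TribB_1", "TribB_2", "TribC_1"]
    D = np.array([
        [0.00, 0.02, 0.10, 0.11, 0.18],
        [0.02, 0.00, 0.11, 0.10, 0.19],
        [0.10, 0.11, 0.00, 0.03, 0.17],
        [0.11, 0.10, 0.03, 0.00, 0.18],
        [0.18, 0.19, 0.17, 0.18, 0.00],
    ])

    # UPGMA-style average-linkage clustering of the condensed distance matrix
    tree = linkage(squareform(D), method="average")
    dendrogram(tree, labels=fish)   # fish from the same tributary pair up first
    plt.show()
    ```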

    From this tree, they observed that fish within a tributary were more related to each other than to fish in other tributaries. What’s more, fish within neighboring tributaries were more similar to each other than fish from more distant tributaries.

    “Our question was, could there have been a geological mechanism that, over time, took this single species, and splintered it into different, genetically distinct groups?” Perron says.

    A changing landscape

    Stokes and Perron noticed a “tight correlation” between greenfin darter habitats and the underlying rock type. In particular, much of the southern half of the Tennessee River Basin, where the species abounds, is made of metamorphic rock, whereas the northern half, where the fish are absent, consists of sedimentary rock.

    They also observed that the rivers running through metamorphic rock are steeper and narrower, which generally creates more turbulence, a characteristic greenfin darters seem to prefer. The team wondered: Could the distribution of greenfin darter habitat have been shaped by a changing landscape of rock type, as rivers eroded into the land over time?

    To check this idea, the researchers developed a model to simulate how a landscape evolves as rivers erode through various rock types. They fed the model information about the rock types in the Tennessee River Basin today, then ran the simulation back to see how the same region may have looked millions of years ago, when more metamorphic rock was exposed.

    They then ran the model forward and observed how the exposure of metamorphic rock shrank over time. They took special note of where and when connections between tributaries crossed into non-metamorphic rock, blocking fish from passing between those tributaries. They drew up a simple timeline of these blocking events and compared this to the phylogenetic tree of diverging greenfin darters. The two were remarkably similar: The fish seemed to form separate lineages in the same order in which their respective tributaries became separated from the others.
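
    To make the logic of that simulation concrete, here is a minimal sketch of a detachment-limited stream-power model of the kind geomorphologists use, with a buried contact between softer and harder rock. Every parameter value, and the "tributary junction" bookkeeping, is an illustrative assumption rather than the study’s calibrated setup.

    ```python
    import numpy as np

    # Toy 1D stream-power model, dz/dt = -K(rock) * A^m * S^n. As the river
    # profile cuts down through a buried rock contact, record when the contact
    # is crossed at each tributary junction, a stand-in for the "blocking
    # events" compared against the phylogeny. All values are assumptions.
    nx, dx, dt, nsteps = 400, 250.0, 100.0, 20000     # grid spacing (m), step (yr)
    x = (np.arange(nx) + 1) * dx                      # distance downstream (m)
    A = 6.7 * x**1.8                                  # drainage area, Hack's law (assumed)
    m, n = 0.5, 1.0
    z = 400.0 * (1.0 - x / x[-1])                     # initial long profile; outlet at z = 0
    zc = z - 40.0                                     # rock contact 40 m below initial bed (assumed)
    K_upper, K_lower = 5e-6, 2e-6                     # erodibility above/below contact (assumed)
    junctions = {"Tributary A": 120, "Tributary B": 220, "Tributary C": 320}
    blocking_time = {}

    for step in range(1, nsteps + 1):
        S = np.zeros(nx)
        S[:-1] = np.maximum((z[:-1] - z[1:]) / dx, 0.0)   # downstream slope
        K = np.where(z > zc, K_upper, K_lower)            # rock type at the channel bed
        z[:-1] -= K[:-1] * A[:-1]**m * S[:-1]**n * dt     # erode; base level fixed
        for name, i in junctions.items():
            if name not in blocking_time and z[i] <= zc[i]:
                blocking_time[name] = step * dt

    print(blocking_time)   # order of "blocking events" to compare with the tree
    ```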

    “It means it’s plausible that erosion through different rock layers caused isolation between different populations of the greenfin darter and caused lineages to diversify,” Stokes says.

    “This study is highly compelling because it reveals a much more subtle but powerful mechanism for speciation in passive margins,” says Josh Roering, professor of Earth sciences at the University of Oregon, who was not involved in the study. “Stokes and Perron have revealed some of the intimate connections between aquatic species and geology that may be much more common than we realize.”

    This research was supported, in part, by the mTerra Catalyst Fund and the U.S. National Science Foundation through the AGeS Geochronology Program and the Graduate Research Fellowship Program. While at MIT, Stokes received support through the Martin Fellowship for Sustainability and the Hugh Hampton Young Fellowship.

  • Moving water and earth

    As a river cuts through a landscape, it can operate like a conveyor belt, moving truckloads of sediment over time. Knowing how quickly or slowly this sediment flows can help engineers plan for the downstream impact of restoring a river or removing a dam. But the models currently used to estimate sediment flow can be off by a wide margin.

    An MIT team has come up with a better formula to calculate how much sediment a fluid can push across a granular bed — a process known as bed load transport. The key to the new formula comes down to the shape of the sediment grains.

    It may seem intuitive: A smooth, round stone should skip across a river bed faster than an angular pebble. But flowing water also pushes harder on the angular pebble, which could erase the round stone’s advantage. Which effect wins? Existing sediment transport models surprisingly don’t offer an answer, mainly because the problem of measuring grain shape is too unwieldy: How do you quantify a pebble’s contours?

    The MIT researchers found that instead of considering a grain’s exact shape, they could boil the concept of shape down to two related properties: friction and drag. The ratio of a grain’s drag (its resistance to fluid flow) to its internal friction (its resistance to sliding past other grains) provides an easy way to gauge the effect of the grain’s shape.

    When they incorporated this new mathematical measure of grain shape into a standard model for bed load transport, the new formula made predictions that matched experiments that the team performed in the lab.

    “Sediment transport is a part of life on Earth’s surface, from the impact of storms on beaches to the gravel nests in mountain streams where salmon lay their eggs,” the team writes of their new study, appearing today in Nature. “Damming and sea level rise have already impacted many such terrains and pose ongoing threats. A good understanding of bed load transport is crucial to our ability to maintain these landscapes or restore them to their natural states.”

    The study’s authors are Eric Deal, Santiago Benavides, Qiong Zhang, Ken Kamrin, and Taylor Perron of MIT, and Jeremy Venditti and Ryan Bradley of Simon Fraser University in Canada.

    Figuring flow

    Video of glass spheres (top) and natural river gravel (bottom) undergoing bed load transport in a laboratory flume, slowed down 17x relative to real time. Average grain diameter is about 5 mm. This video shows how rolling and tumbling natural grains interact with one another in a way that is not possible for spheres. What can’t be seen so easily is that natural grains also experience higher drag forces from the flowing water than spheres do.

    Credit: Courtesy of the researchers

    Bed load transport is the process by which a fluid such as air or water drags grains across a bed of sediment, causing the grains to hop, skip, and roll along the surface as the fluid flows over the bed. This movement of sediment in a current is what drives rocks to migrate down a river and sand grains to skip across a desert.

    Being able to estimate bed load transport can help scientists prepare for situations such as urban flooding and coastal erosion. Since the 1930s, one formula has been the go-to model for calculating bed load transport; it’s based on a quantity known as the Shields parameter, after the American engineer who originally derived it. This formula sets a relationship between the force of a fluid pushing on a bed of sediment, and how fast the sediment moves in response. Albert Shields incorporated certain variables into this formula, including the average size and density of a sediment’s grains — but not their shape.

    “People may have backed away from accounting for shape because it’s one of these very scary degrees of freedom,” says Kamrin, a professor of mechanical engineering at MIT. “Shape is not a single number.”

    And yet, the existing model has been known to be off by a factor of 10 in its predictions of sediment flow. The team wondered whether grain shape could be a missing ingredient, and if so, how the nebulous property could be mathematically represented.

    “The trick was to focus on characterizing the effect that shape has on sediment transport dynamics, rather than on characterizing the shape itself,” says Deal.

    “It took some thinking to figure that out,” says Perron, a professor of geology in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “But we went back to derive the Shields parameter, and when you do the math, this ratio of drag to friction falls out.”

    Drag and drop

    Their work showed that the Shields parameter — which predicts how much sediment is transported — can be modified to include not just size and density, but also grain shape, and furthermore, that a grain’s shape can be simply represented by a measure of the grain’s drag and its internal friction. The math seemed to make sense. But could the new formula predict how sediment actually flows?
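
    The article does not spell out the new formula, but its moving parts can be sketched. Below, a classical Shields number feeds a Meyer-Peter and Müller-style transport law, and an assumed shape correction rescales the critical Shields value by the friction-to-drag ratio, normalized so spheres are unchanged; the exact functional form in the Nature paper may differ.

    ```python
    def shields_number(tau_b, d, rho_s=2650.0, rho=1000.0, g=9.81):
        """Dimensionless bed shear stress on grains of diameter d (m)."""
        return tau_b / ((rho_s - rho) * g * d)

    def bedload_flux(theta, theta_c):
        """Meyer-Peter & Mueller-style dimensionless transport law."""
        return 8.0 * max(theta - theta_c, 0.0) ** 1.5

    def theta_crit_shaped(theta_c_sphere, Cd, mu, Cd_sphere=0.45, mu_sphere=0.60):
        """Assumed shape correction: critical stress scales with friction/drag."""
        return theta_c_sphere * (mu / mu_sphere) / (Cd / Cd_sphere)

    theta = shields_number(tau_b=5.0, d=0.005)   # 5 Pa on 5 mm grains (illustrative)
    for label, Cd, mu in [("glass spheres", 0.45, 0.60), ("river gravel", 0.90, 1.00)]:
        qb = bedload_flux(theta, theta_crit_shaped(0.047, Cd, mu))
        print(f"{label}: q* = {qb:.3f}")
    ```

    With these made-up inputs, the gravel’s higher drag outweighs its higher friction, but the point of the construction is precisely that the two shape effects compete.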

    To answer this, the researchers ran a series of flume experiments, in which they pumped a current of water through an inclined tank with a floor covered in sediment. They ran tests with sediment of various grain shapes, including beds of round glass beads, smooth glass chips, rectangular prisms, and natural gravel. They measured the amount of sediment that was transported through the tank in a fixed amount of time. They then determined the effect of each sediment type’s grain shape by measuring the grains’ drag and friction.

    For drag, the researchers simply dropped individual grains down through a tank of water and gathered statistics for the time it took the grains of each sediment type to reach the bottom. For instance, a flatter grain type takes a longer time on average, and therefore has greater drag, than a round grain type of the same size and density.

    To measure friction, the team poured grains through a funnel and onto a circular tray, then measured the resulting pile’s angle, or slope — an indication of the grains’ friction, or ability to grip onto each other.
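
    Both measurements reduce to textbook relations, sketched below with made-up numbers: at terminal velocity a grain’s submerged weight balances drag, which yields a drag coefficient, and the angle of repose of the poured pile gives an internal friction coefficient.

    ```python
    import math

    def drag_coefficient(v_t, d, rho_s=2650.0, rho=1000.0, g=9.81):
        """Cd from terminal settling velocity v_t (m/s) of a grain of diameter d (m):
        submerged weight (pi/6) d^3 (rho_s - rho) g balances drag (1/2) rho v^2 Cd (pi/4) d^2."""
        return 4.0 * (rho_s / rho - 1.0) * g * d / (3.0 * v_t ** 2)

    def friction_coefficient(angle_of_repose_deg):
        """Internal friction mu = tan(phi) from the slope of a poured pile."""
        return math.tan(math.radians(angle_of_repose_deg))

    # Illustrative values: flatter grains settle slower, so they show higher drag;
    # angular grains pile steeper, so they show higher friction.
    print(f"Cd = {drag_coefficient(v_t=0.35, d=0.005):.2f}")
    print(f"mu = {friction_coefficient(32.0):.2f}")
    ```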

    For each sediment type, they then worked the corresponding shape’s drag and friction into the new formula, and found that it could indeed predict the bed load transport, or the amount of moving sediment that the researchers measured in their experiments.

    The team says the new model more accurately represents sediment flow. Going forward, scientists and engineers can use the model to better gauge how a river bed will respond to scenarios such as sudden flooding from severe weather or the removal of a dam.

    “If you were trying to make a prediction of how fast all that sediment will get evacuated after taking a dam out, and you’re wrong by a factor of three or five, that’s pretty bad,” Perron says. “Now we can do a lot better.”

    This research was supported, in part, by the U.S. Army Research Laboratory.

  • Keeping indoor humidity levels at a “sweet spot” may reduce spread of Covid-19

    We know proper indoor ventilation is key to reducing the spread of Covid-19. Now, a study by MIT researchers finds that indoor relative humidity may also influence transmission of the virus.

    Relative humidity is the amount of moisture in the air compared to the total moisture the air can hold at a given temperature before saturating and forming condensation.

    In a study appearing today in the Journal of the Royal Society Interface, the MIT team reports that maintaining an indoor relative humidity between 40 and 60 percent is associated with relatively lower rates of Covid-19 infections and deaths, while indoor conditions outside this range are associated with worse Covid-19 outcomes. To put this into perspective, most people are comfortable between 30 and 50 percent relative humidity, and an airplane cabin is at around 20 percent relative humidity.

    The findings are based on the team’s analysis of Covid-19 data combined with meteorological measurements from 121 countries, from January 2020 through August 2020. Their study suggests a strong connection between regional outbreaks and indoor relative humidity.

    In general, the researchers found that whenever a region experienced a rise in Covid-19 cases and deaths pre-vaccination, the estimated indoor relative humidity in that region, on average, was either lower than 40 percent or higher than 60 percent regardless of season. Nearly all regions in the study experienced fewer Covid-19 cases and deaths during periods when estimated indoor relative humidity was within a “sweet spot” between 40 and 60 percent.

    “There’s potentially a protective effect of this intermediate indoor relative humidity,” suggests lead author Connor Verheyen, a PhD student in medical engineering and medical physics in the Harvard-MIT Program in Health Sciences and Technology.

    “Indoor ventilation is still critical,” says co-author Lydia Bourouiba, director of the MIT Fluid Dynamics of Disease Transmission Laboratory and associate professor in the departments of Civil and Environmental Engineering and Mechanical Engineering, and at the Institute for Medical Engineering and Science at MIT. “However, we find that maintaining an indoor relative humidity in that sweet spot — of 40 to 60 percent — is associated with reduced Covid-19 cases and deaths.”

    Seasonal swing?

    Since the start of the Covid-19 pandemic, scientists have considered the possibility that the virus’ virulence swings with the seasons. Infections and associated deaths appear to rise in winter and ebb in summer. But studies looking to link the virus’ patterns to seasonal outdoor conditions have yielded mixed results.

    Verheyen and Bourouiba examined whether Covid-19 is influenced instead by indoor — rather than outdoor — conditions, and, specifically, relative humidity. After all, they note that people in most societies spend more than 90 percent of their time indoors, where the majority of viral transmission has been shown to occur. What’s more, indoor conditions can be quite different from outdoor conditions as a result of climate control systems, such as heaters that significantly dry out indoor air.

    Could indoor relative humidity have affected the spread and severity of Covid-19 around the world? And could it help explain the differences in health outcomes from region to region?

    Tracking humidity

    For answers, the team focused on the early period of the pandemic when vaccines were not yet available, reasoning that vaccinated populations would obscure the influence of any other factor such as indoor humidity. They gathered global Covid-19 data, including case counts and reported deaths, from January 2020 to August 2020, and identified countries with at least 50 deaths, indicating at least one outbreak had occurred in those countries.

    In all, they focused on 121 countries where Covid-19 outbreaks occurred. For each country, they also tracked the local Covid-19 related policies, such as isolation, quarantine, and testing measures, and their statistical association with Covid-19 outcomes.

    For each day that Covid-19 data was available, they used meteorological data to calculate a country’s outdoor relative humidity. They then estimated the average indoor relative humidity, based on outdoor relative humidity and guidelines on temperature ranges for human comfort. For instance, guidelines report that humans are comfortable between 66 and 77 degrees Fahrenheit indoors. They also assumed that on average, most populations have the means to heat indoor spaces to comfortable temperatures. Finally, they also collected experimental data, which they used to validate their estimation approach.

    For every instance when outdoor temperatures were below the typical human comfort range, they assumed indoor spaces were heated to reach that comfort range. Based on the added heating, they calculated the associated drop in indoor relative humidity.
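
    That estimate is essentially a psychrometric calculation: heating air without adding moisture leaves its vapor pressure unchanged while raising the saturation pressure, so relative humidity falls. A minimal sketch, using the standard Magnus approximation and an assumed comfort temperature of 20 °C:

    ```python
    import math

    def saturation_vapor_pressure(t_c):
        """Saturation vapor pressure (hPa) at t_c (deg C), Magnus approximation."""
        return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

    def indoor_rh(rh_out, t_out_c, t_in_c=20.0):
        """Indoor RH (%) after heating outdoor air from t_out_c to t_in_c
        with its moisture content unchanged (t_in_c = 20 C is an assumption)."""
        e = (rh_out / 100.0) * saturation_vapor_pressure(t_out_c)  # actual vapor pressure
        return 100.0 * e / saturation_vapor_pressure(t_in_c)

    # A humid freezing day outdoors becomes very dry once heated indoors:
    print(f"{indoor_rh(rh_out=80.0, t_out_c=0.0):.0f}% indoor RH")   # ~21 percent
    ```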

    In warmer times, both outdoor and indoor relative humidity for each country were about the same, but they quickly diverged in colder times. While outdoor humidity remained around 50 percent throughout the year, indoor relative humidity for countries in the Northern and Southern Hemispheres dropped below 40 percent in their respective colder periods, when Covid-19 cases and deaths also spiked in these regions.

    For countries in the tropics, relative humidity was about the same indoors and outdoors throughout the year, with a gradual rise indoors during the region’s summer season, when high outdoor humidity likely raised the indoor relative humidity over 60 percent. They found this rise mirrored the gradual increase in Covid-19 deaths in the tropics.

    “We saw more reported Covid-19 deaths on the low and high end of indoor relative humidity, and less in this sweet spot of 40 to 60 percent,” Verheyen says. “This intermediate relative humidity window is associated with a better outcome, meaning fewer deaths and a deceleration of the pandemic.”

    “We were very skeptical initially, especially as the Covid-19 data can be noisy and inconsistent,” Bourouiba says. “We thus were very thorough trying to poke holes in our own analysis, using a range of approaches to test the limits and robustness of the findings, including taking into account factors such as government intervention. Despite all our best efforts, we found that even when considering countries with very strong versus very weak Covid-19 mitigation policies, or wildly different outdoor conditions, indoor — rather than outdoor — relative humidity maintains an underlying strong and robust link with Covid-19 outcomes.”

    It’s still unclear how indoor relative humidity affects Covid-19 outcomes. The team’s follow-up studies suggest that pathogens may survive longer in respiratory droplets in both very dry and very humid conditions.

    “Our ongoing work shows that there are emerging hints of mechanistic links between these factors,” Bourouiba says. “For now however, we can say that indoor relative humidity emerges in a robust manner as another mitigation lever that organizations and individuals can monitor, adjust, and maintain in the optimal 40 to 60 percent range, in addition to proper ventilation.”

    This research was made possible, in part, by an MIT Alumni Class fund, the Richard and Susan Smith Family Foundation, the National Institutes of Health, and the National Science Foundation.

  • Small eddies play a big role in feeding ocean microbes

    Subtropical gyres are enormous rotating ocean currents that generate sustained circulations in the Earth’s subtropical regions just to the north and south of the equator. These gyres are slow-moving whirlpools that circulate within massive basins around the world, gathering up nutrients, organisms, and sometimes trash, as the currents rotate from coast to coast.

    For years, oceanographers have puzzled over conflicting observations within subtropical gyres. At the surface, these massive currents appear to host healthy populations of phytoplankton — microbes that feed the rest of the ocean food chain and are responsible for sucking up a significant portion of the atmosphere’s carbon dioxide.

    But judging from what scientists know about the dynamics of gyres, they estimated the currents themselves wouldn’t be able to maintain enough nutrients to sustain the phytoplankton they were seeing. How, then, were the microbes able to thrive?

    Now, MIT researchers have found that phytoplankton may receive deliveries of nutrients from outside the gyres, and that the delivery vehicles are eddies — much smaller currents that swirl at the edges of a gyre. These eddies pull nutrients in from high-nutrient equatorial regions and push them into the center of a gyre, where the nutrients are then taken up by other currents and pumped to the surface to feed phytoplankton.

    Ocean eddies, the team found, appear to be an important source of nutrients in subtropical gyres. Their replenishing effect, which the researchers call a “nutrient relay,” helps maintain populations of phytoplankton, which play a central role in the ocean’s ability to sequester carbon from the atmosphere. While climate models tend to project a decline in the ocean’s ability to sequester carbon over the coming decades, this “nutrient relay” could help sustain carbon storage over the subtropical oceans.

    “There’s a lot of uncertainty about how the carbon cycle of the ocean will evolve as climate continues to change,” says Mukund Gupta, a postdoc at Caltech who led the study as a graduate student at MIT. “As our paper shows, getting the carbon distribution right is not straightforward, and depends on understanding the role of eddies and other fine-scale motions in the ocean.”

    Gupta and his colleagues report their findings this week in the Proceedings of the National Academy of Sciences. The study’s co-authors are Jonathan Lauderdale, Oliver Jahn, Christopher Hill, Stephanie Dutkiewicz, and Michael Follows at MIT, and Richard Williams at the University of Liverpool.

    A snowy puzzle

    A cross-section of an ocean gyre resembles a stack of nesting bowls that is stratified by density: Warmer, lighter layers lie at the surface, while colder, denser waters make up deeper layers. Phytoplankton live within the ocean’s top sunlit layers, where the microbes require sunlight, warm temperatures, and nutrients to grow.

    When phytoplankton die, they sink through the ocean’s layers as “marine snow.” Some of this snow releases nutrients back into the current, where they are pumped back up to feed new microbes. The rest of the snow sinks out of the gyre, down to the deepest layers of the ocean. The deeper the snow sinks, the more difficult it is for it to be pumped back to the surface. The snow is then trapped, or sequestered, along with any unreleased carbon and nutrients.

    Oceanographers thought that the main source of nutrients in subtropical gyres came from recirculating marine snow. But as a portion of this snow inevitably sinks to the bottom, there must be another source of nutrients to explain the healthy populations of phytoplankton at the surface. Exactly what that source is “has left the oceanography community a little puzzled for some time,” Gupta says.

    Swirls at the edge

    In their new study, the team sought to simulate a subtropical gyre to see what other dynamics may be at work. They focused on the North Pacific gyre, one of the Earth’s five major gyres, which circulates over most of the North Pacific Ocean, and spans more than 20 million square kilometers. 

    The team started with the MITgcm, a general circulation model that simulates the physical circulation patterns in the atmosphere and oceans. To reproduce the North Pacific gyre’s dynamics as realistically as possible, the team used an MITgcm algorithm, previously developed at NASA and MIT, which tunes the model to match actual observations of the ocean, such as ocean currents recorded by satellites, and temperature and salinity measurements taken by ships and drifters.  

    “We use a simulation of the physical ocean that is as realistic as we can get, given the machinery of the model and the available observations,” Lauderdale says.

    An animation of the North Pacific Ocean shows phosphate nutrient concentrations at 500 meters below the ocean surface. The swirls represent small eddies transporting phosphate from the nutrient-rich equator (lighter colors), northward toward the nutrient-depleted subtropics (darker colors). This nutrient relay mechanism helps sustain biological activity and carbon sequestration in the subtropical ocean. Credit: Oliver Jahn

    The realistic model captured finer details, at a resolution of less than 20 kilometers per pixel, compared to other models that have a more limited resolution. The team combined the simulation of the ocean’s physical behavior with the Darwin model — a simulation of microbe communities such as phytoplankton, and how they grow and evolve with ocean conditions.

    The team ran the combined simulation of the North Pacific gyre over a decade, and created animations to visualize the pattern of currents and the nutrients they carried, in and around the gyre. What emerged were small eddies that ran along the edges of the enormous gyre and appeared to be rich in nutrients.
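
    The full MITgcm/Darwin setup is far beyond a snippet, but the stirring mechanism itself can be caricatured: a passive nutrient tracer advected by a streamfunction built from basin-scale gyres plus smaller eddies, on a doubly periodic grid. Everything below (the flow field, the tracer, the numbers) is invented for illustration.

    ```python
    import numpy as np

    n = 128
    x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    X, Y = np.meshgrid(x, x)                 # axis 0 is y, axis 1 is x
    dx = x[1] - x[0]

    # Streamfunction: basin-scale gyres plus shorter-wavelength eddies
    psi = np.sin(X) * np.sin(Y) + 0.15 * np.sin(6 * X) * np.sin(6 * Y)
    u = np.gradient(psi, dx, axis=0)         # u = d(psi)/dy
    v = -np.gradient(psi, dx, axis=1)        # v = -d(psi)/dx

    c = np.exp(-((Y - 0.3) / 0.5) ** 2)      # nutrient-rich band near the "equator"
    y_mean0 = float((Y * c).sum() / c.sum())

    dt = 0.002
    for _ in range(4000):                    # first-order upwind advection
        dcdx = np.where(u > 0, (c - np.roll(c, 1, axis=1)) / dx,
                        (np.roll(c, -1, axis=1) - c) / dx)
        dcdy = np.where(v > 0, (c - np.roll(c, 1, axis=0)) / dx,
                        (np.roll(c, -1, axis=0) - c) / dx)
        c -= dt * (u * dcdx + v * dcdy)

    y_mean = float((Y * c).sum() / c.sum())
    print(f"tracer center of mass moved from y = {y_mean0:.2f} to y = {y_mean:.2f}")
    ```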

    “We were picking up on little eddy motions, basically like weather systems in the ocean,” Lauderdale says. “These eddies were carrying packets of high-nutrient waters, from the equator, north into the center of the gyre and downwards along the sides of the bowls. We wondered if these eddy transfers made an important delivery mechanism.”

    Surprisingly, the nutrients first move deeper, away from the sunlight, before being returned upwards where the phytoplankton live. The team found that ocean eddies could supply up to 50 percent of the nutrients in subtropical gyres.

    “That is very significant,” Gupta says. “The vertical process that recycles nutrients from marine snow is only half the story. The other half is the replenishing effect of these eddies. As subtropical gyres contribute a significant part of the world’s oceans, we think this nutrient relay is of global importance.”

    This research was supported, in part, by the Simons Foundation and NASA.

  • Ocean scientists measure sediment plume stirred up by deep-sea-mining vehicle

    What will be the impact on the ocean if humans are to mine the deep sea? It’s a question that’s gaining urgency as interest in marine minerals has grown.

    The ocean’s deep-sea bed is scattered with ancient, potato-sized rocks called “polymetallic nodules” that contain nickel and cobalt — minerals in high demand for manufacturing batteries, such as those that power electric vehicles and store renewable energy, with demand driven further by factors such as increasing urbanization. The deep ocean contains vast quantities of mineral-laden nodules, but the impact of mining the ocean floor is both unknown and highly contested.

    Now MIT ocean scientists have shed some light on the topic, with a new study on the cloud of sediment that a collector vehicle would stir up as it picks up nodules from the seafloor.

    The study, appearing today in Science Advances, reports the results of a 2021 research cruise to a region of the Pacific Ocean known as the Clarion Clipperton Zone (CCZ), where polymetallic nodules abound. There, researchers equipped a pre-prototype collector vehicle with instruments to monitor sediment plume disturbances as the vehicle maneuvered across the seafloor, 4,500 meters below the ocean’s surface. Through a sequence of carefully conceived maneuvers, the MIT scientists used the vehicle to monitor its own sediment cloud and measure its properties.

    Their measurements showed that the vehicle created a dense plume of sediment in its wake, which spread under its own weight, in a phenomenon known in fluid dynamics as a “turbidity current.” As it gradually dispersed, the plume remained relatively low, staying within 2 meters of the seafloor, as opposed to immediately lofting higher into the water column as had been postulated.

    “It’s quite a different picture of what these plumes look like, compared to some of the conjecture,” says study co-author Thomas Peacock, professor of mechanical engineering at MIT. “Modeling efforts of deep-sea mining plumes will have to account for these processes that we identified, in order to assess their extent.”

    The study’s co-authors include lead author Carlos Muñoz-Royo, Raphael Ouillon, and Souha El Mousadik of MIT; and Matthew Alford of the Scripps Institution of Oceanography.

    Deep-sea maneuvers

    To collect polymetallic nodules, some mining companies are proposing to deploy tractor-sized vehicles to the bottom of the ocean. The vehicles would vacuum up the nodules along with some sediment along their path. The nodules and sediment would then be separated inside of the vehicle, with the nodules sent up through a riser pipe to a surface vessel, while most of the sediment would be discharged immediately behind the vehicle.

    Peacock and his group have previously studied the dynamics of the sediment plume that associated surface operation vessels may pump back into the ocean. In their current study, they focused on the opposite end of the operation, to measure the sediment cloud created by the collectors themselves.

    In April 2021, the team joined an expedition led by Global Sea Mineral Resources NV (GSR), a Belgian marine engineering contractor that is exploring the CCZ for ways to extract metal-rich nodules. A European-based science team, Mining Impacts 2, also conducted separate studies in parallel. The cruise was the first in over 40 years to test a “pre-prototype” collector vehicle in the CCZ. The machine, called Patania II, stands about 3 meters high, spans 4 meters wide, and is about one-third the size of what a commercial-scale vehicle is expected to be.

    While the contractor tested the vehicle’s nodule-collecting performance, the MIT scientists monitored the sediment cloud created in the vehicle’s wake. They did so using two maneuvers that the vehicle was programmed to take: a “selfie,” and a “drive-by.”

    Both maneuvers began in the same way, with the vehicle setting out in a straight line, all its suction systems turned on. The researchers let the vehicle drive along for 100 meters, collecting any nodules in its path. Then, in the “selfie” maneuver, they directed the vehicle to turn off its suction systems and double back around to drive through the cloud of sediment it had just created. The vehicle’s installed sensors measured the concentration of sediment during this “selfie” maneuver, allowing the scientists to monitor the cloud within minutes of the vehicle stirring it up.

    A movie of the Patania II pre-prototype collector vehicle entering, driving through, and leaving the low-lying turbidity current plume as part of a selfie operation. For scale, the instrumentation post attached to the front of the vehicle reaches about 3 m above the seabed. The movie is sped up by a factor of 20. Credit: Global Sea Mineral Resources

    For the “drive-by” maneuver, the researchers placed a sensor-laden mooring 50 to 100 meters from the vehicle’s planned tracks. As the vehicle drove along collecting nodules, it created a plume that eventually spread past the mooring after an hour or two. This “drive-by” maneuver enabled the team to monitor the sediment cloud over a longer timescale of several hours, capturing the plume evolution.

    Out of steam

    Over multiple vehicle runs, Peacock and his team were able to measure and track the evolution of the sediment plume created by the deep-sea-mining vehicle.

    “We saw that the vehicle would be driving in clear water, seeing the nodules on the seabed,” Peacock says. “And then suddenly there’s this very sharp sediment cloud coming through when the vehicle enters the plume.”

    From the selfie views, the team observed a behavior that was predicted by some of their previous modeling studies: The vehicle stirred up a heavy amount of sediment that was dense enough that, even after some mixing with the surrounding water, it generated a plume that behaved almost as a separate fluid, spreading under its own weight in what’s known as a turbidity current.

    “The turbidity current spreads under its own weight for some time, tens of minutes, but as it does so, it’s depositing sediment on the seabed and eventually running out of steam,” Peacock says. “After that, the ocean currents get stronger than the natural spreading, and the sediment transitions to being carried by the ocean currents.”
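
    That two-stage behavior is the signature of a gravity current, which can be caricatured with a classic "box model": a fixed volume of dense fluid spreads at a Froude-number-limited front speed while thinning. The numbers below are invented and sediment deposition is ignored, so this is a cartoon of the dynamics rather than the team’s field measurements.

    ```python
    import math

    g_prime = 0.1   # reduced gravity g' = g * (rho_plume - rho_sea) / rho_sea (assumed)
    Fr = 1.2        # front Froude number, a typical laboratory value
    V = 20.0        # plume volume per unit width, m^2 (assumed)
    L, t, dt = 10.0, 0.0, 1.0

    while t < 1800.0:                       # evolve for half an hour
        h = V / L                           # layer thins as the front advances
        L += Fr * math.sqrt(g_prime * h) * dt
        t += dt

    print(f"front at {L:.0f} m, thickness {V / L:.2f} m after {t / 60:.0f} min")
    ```

    Even in this crude sketch the layer thins to centimeters as it spreads, consistent with a plume that hugs the seafloor rather than lofting into the water column.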

    By the time the sediment drifted past the mooring, the researchers estimate that 92 to 98 percent of the sediment either settled back down or remained within 2 meters of the seafloor as a low-lying cloud. There is, however, no guarantee that the sediment always stays there rather than drifting further up in the water column. Recent and future studies by the research team are looking into this question, with the goal of consolidating understanding for deep-sea mining sediment plumes.

    “Our study clarifies the reality of what the initial sediment disturbance looks like when you have a certain type of nodule mining operation,” Peacock says. “The big takeaway is that there are complex processes like turbidity currents that take place when you do this kind of collection. So, any effort to model a deep-sea-mining operation’s impact will have to capture these processes.”

    “Sediment plumes produced by deep-seabed mining are a major concern with regards to environmental impact, as they will spread over potentially large areas beyond the actual site of mining and affect deep-sea life,” says Henko de Stigter, a marine geologist at the Royal Netherlands Institute for Sea Research, who was not involved in the research. “The current paper provides essential insight in the initial development of these plumes.”

    This research was supported, in part, by the National Science Foundation, ARPA-E, the 11th Hour Project, the Benioff Ocean Initiative, and Global Sea Mineral Resources. The funders had no role in any aspects of the research analysis, the research team states.

  • Engineers use artificial intelligence to capture the complexity of breaking waves

    Waves break once they swell to a critical height, before cresting and crashing into a spray of droplets and bubbles. These waves can be as large as a surfer’s point break and as small as a gentle ripple rolling to shore. For decades, the dynamics of how and when a wave breaks have been too complex to predict.

    Now, MIT engineers have found a new way to model how waves break. The team used machine learning along with data from wave-tank experiments to tweak equations that have traditionally been used to predict wave behavior. Engineers typically rely on such equations to help them design resilient offshore platforms and structures. But until now, the equations have not been able to capture the complexity of breaking waves.

    The updated model made more accurate predictions of how and when waves break, the researchers found. For instance, the model estimated a wave’s steepness just before breaking, and its energy and frequency after breaking, more accurately than the conventional wave equations.

    Their results, published today in the journal Nature Communications, will help scientists understand how a breaking wave affects the water around it. Knowing precisely how these waves interact can help hone the design of offshore structures. It can also improve predictions for how the ocean interacts with the atmosphere. Having better estimates of how waves break can help scientists predict, for instance, how much carbon dioxide and other atmospheric gases the ocean can absorb.

    “Wave breaking is what puts air into the ocean,” says study author Themis Sapsis, an associate professor of mechanical and ocean engineering and an affiliate of the Institute for Data, Systems, and Society at MIT. “It may sound like a detail, but if you multiply its effect over the area of the entire ocean, wave breaking starts becoming fundamentally important to climate prediction.”

    The study’s co-authors include lead author and MIT postdoc Debbie Eeltink, Hubert Branger and Christopher Luneau of Aix-Marseille University, Amin Chabchoub of Kyoto University, Jerome Kasparian of the University of Geneva, and T.S. van den Bremer of Delft University of Technology.

    Learning tank

    To predict the dynamics of a breaking wave, scientists typically take one of two approaches: They either attempt to precisely simulate the wave at the scale of individual molecules of water and air, or they run experiments to try and characterize waves with actual measurements. The first approach is computationally expensive and difficult to simulate even over a small area; the second requires a huge amount of time to run enough experiments to yield statistically significant results.

    The MIT team instead borrowed pieces from both approaches to develop a more efficient and accurate model using machine learning. The researchers started with a set of equations that is considered the standard description of wave behavior. They aimed to improve the model by “training” it on data of breaking waves from actual experiments.

    “We had a simple model that doesn’t capture wave breaking, and then we had the truth, meaning experiments that involve wave breaking,” Eeltink explains. “Then we wanted to use machine learning to learn the difference between the two.”
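
    In spirit, that is residual learning: train a network on the gap between the simple model and measurements, then add the learned correction to the model’s prediction. A minimal sketch with synthetic "experiments" standing in for the wave-tank data (the study’s actual base model and inputs differ):

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    steepness = rng.uniform(0.10, 0.45, size=(300, 1))   # wave steepness inputs

    def simple_model(s):
        return 0.8 * s   # stand-in for the non-breaking wave theory (assumed form)

    def experiment(s):
        # synthetic "measurements": the simple model plus a breaking correction + noise
        return (0.8 * s - 0.5 * np.maximum(s - 0.3, 0.0) ** 2
                + 0.005 * rng.standard_normal(s.shape))

    residual = experiment(steepness) - simple_model(steepness)
    net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
    net.fit(steepness, residual.ravel())      # learn the model-vs-truth gap

    s_new = np.array([[0.40]])
    corrected = simple_model(s_new).ravel() + net.predict(s_new)
    print(f"corrected prediction: {corrected[0]:.4f}")
    ```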

    The researchers obtained wave breaking data by running experiments in a 40-meter-long tank. The tank was fitted at one end with a paddle which the team used to initiate each wave. The team set the paddle to produce a breaking wave in the middle of the tank. Gauges along the length of the tank measured the water’s height as waves propagated down the tank.

    “It takes a lot of time to run these experiments,” Eeltink says. “Between each experiment you have to wait for the water to completely calm down before you launch the next experiment, otherwise they influence each other.”

    Safe harbor

    In all, the team ran about 250 experiments, the data from which they used to train a type of machine-learning algorithm known as a neural network. Specifically, the algorithm is trained to compare the real waves in experiments with the predicted waves in the simple model, and based on any differences between the two, the algorithm tunes the model to fit reality.

    After training the algorithm on their experimental data, the team introduced the model to entirely new data — in this case, measurements from two independent experiments, each run in a separate wave tank with different dimensions. In these tests, they found the updated model made more accurate predictions than the simple, untrained model, for instance making better estimates of a breaking wave’s steepness.

    The new model also captured an essential property of breaking waves known as the “downshift,” in which the frequency of a wave is shifted to a lower value. The speed of a wave depends on its frequency. For ocean waves, lower frequencies move faster than higher frequencies. Therefore, after the downshift, the wave will move faster. The new model predicts the change in frequency, before and after each breaking wave, which could be especially relevant in preparing for coastal storms.
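
    The timing consequence follows from deep-water dispersion, where phase speed is inversely proportional to frequency, c = g/(2πf); a quick check with illustrative frequencies shows how a post-breaking downshift speeds up the swell.

    ```python
    import math

    g = 9.81
    for f in (0.10, 0.08):   # peak frequency before and after breaking (Hz, assumed)
        c = g / (2.0 * math.pi * f)   # deep-water phase speed (m/s)
        print(f"f = {f:.2f} Hz  ->  c = {c:.1f} m/s")
    ```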

    “When you want to forecast when high waves of a swell would reach a harbor, and you want to leave the harbor before those waves arrive, then if you get the wave frequency wrong, then the speed at which the waves are approaching is wrong,” Eeltink says.

    The team’s updated wave model is in the form of an open-source code that others could potentially use, for instance in climate simulations of the ocean’s potential to absorb carbon dioxide and other atmospheric gases. The code can also be worked into simulated tests of offshore platforms and coastal structures.

    “The number one purpose of this model is to predict what a wave will do,” Sapsis says. “If you don’t model wave breaking right, it would have tremendous implications for how structures behave. With this, you could simulate waves to help design structures better, more efficiently, and without huge safety factors.”

    This research is supported, in part, by the Swiss National Science Foundation, and by the U.S. Office of Naval Research.

  • MIT engineers introduce the Oreometer

    When you twist open an Oreo cookie to get to the creamy center, you’re mimicking a standard test in rheology — the study of how a non-Newtonian material flows when twisted, pressed, or otherwise stressed. MIT engineers have now subjected the sandwich cookie to rigorous materials tests to get to the center of a tantalizing question: Why does the cookie’s cream stick to just one wafer when twisted apart?

    “There’s the fascinating problem of trying to get the cream to distribute evenly between the two wafers, which turns out to be really hard,” says Max Fan, an undergraduate in MIT’s Department of Mechanical Engineering.

    In pursuit of an answer, the team subjected cookies to standard rheology tests in the lab and found that no matter the flavor or amount of stuffing, the cream at the center of an Oreo almost always sticks to one wafer when twisted open. Only for older boxes of cookies does the cream sometimes separate more evenly between both wafers.

    The researchers also measured the torque required to twist open an Oreo, and found it to be similar to the torque required to turn a doorknob and about 1/10th what’s needed to twist open a bottlecap. The cream’s failure stress — i.e., the force per area required to get the cream to flow, or deform — is twice that of cream cheese and peanut butter, and about the same magnitude as mozzarella cheese. Judging from the cream’s response to stress, the team classifies its texture as “mushy,” rather than brittle, tough, or rubbery.
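
    For a yield-stress material twisted between parallel plates, the failure stress can be backed out of the measured torque by assuming the stress is uniform across the plate at yielding, giving tau_y = 3T/(2πR³). The torque and radius below are rough, assumed values, not the paper’s exact measurements.

    ```python
    import math

    def failure_stress(torque_nm, radius_m):
        """Yield stress from torque at yielding, assuming uniform stress across
        the plate: T = (2/3) * pi * R^3 * tau_y, so tau_y = 3T / (2 pi R^3)."""
        return 3.0 * torque_nm / (2.0 * math.pi * radius_m ** 3)

    T = 0.1      # twisting torque at failure, N*m (roughly doorknob scale, assumed)
    R = 0.0225   # radius of the cream disk, m (assumed)
    print(f"failure stress ~ {failure_stress(T, R) / 1000:.1f} kPa")
    ```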

    So, why does the cookie’s cream glom to one side rather than splitting evenly between both? The manufacturing process may be to blame.

    “Videos of the manufacturing process show that they put the first wafer down, then dispense a ball of cream onto that wafer before putting the second wafer on top,” says Crystal Owens, an MIT mechanical engineering PhD candidate who studies the properties of complex fluids. “Apparently that little time delay may make the cream stick better to the first wafer.”

    The team’s study isn’t simply a sweet diversion from bread-and-butter research; it’s also an opportunity to make the science of rheology accessible to others. To that end, the researchers have designed a 3D-printable “Oreometer” — a simple device that firmly grasps an Oreo cookie and uses pennies and rubber bands to control the twisting force that progressively twists the cookie open. Instructions for the tabletop device are available online.

    The new study, “On Oreology, the fracture and flow of ‘milk’s favorite cookie,’” appears today in Kitchen Flows, a special issue of the journal Physics of Fluids. It was conceived of early in the Covid-19 pandemic, when many scientists’ labs were closed or difficult to access. In addition to Owens and Fan, co-authors are mechanical engineering professors Gareth McKinley and A. John Hart.

    Confection connection

    A standard test in rheology places a fluid, slurry, or other flowable material onto the base of an instrument known as a rheometer. A parallel plate above the base can be lowered onto the test material. The plate is then twisted as sensors track the applied rotation and torque.

    Owens, who regularly uses a laboratory rheometer to test fluid materials such as 3D-printable inks, couldn’t help noting a similarity with sandwich cookies. As she writes in the new study:

    “Scientifically, sandwich cookies present a paradigmatic model of parallel plate rheometry in which a fluid sample, the cream, is held between two parallel plates, the wafers. When the wafers are counter-rotated, the cream deforms, flows, and ultimately fractures, leading to separation of the cookie into two pieces.”

    While Oreo cream may not appear to possess fluid-like properties, it is considered a “yield stress fluid” — a soft solid when unperturbed that can start to flow under enough stress, the way toothpaste, frosting, certain cosmetics, and concrete do.

    Curious as to whether others had explored the connection between Oreos and rheology, Owens found mention of a 2016 Princeton University study in which physicists first reported that indeed, when twisting Oreos by hand, the cream almost always came off on one wafer.

    “We wanted to build on this to see what actually causes this effect and if we could control it if we mounted the Oreos carefully onto our rheometer,” she says.

    Cookie twist

    In an experiment that they would repeat for multiple cookies of various fillings and flavors, the researchers glued an Oreo to both the top and bottom plates of a rheometer and applied varying degrees of torque and angular rotation, noting the values that successfully twisted each cookie apart. They plugged the measurements into equations to calculate the cream’s viscoelasticity, or flowability. For each experiment, they also noted the cream’s “post-mortem distribution,” or where the cream ended up after twisting open.

    In all, the team went through about 20 boxes of Oreos, including regular, Double Stuf, and Mega Stuf levels of filling, and regular, dark chocolate, and “golden” wafer flavors. Surprisingly, they found that no matter the amount of cream filling or flavor, the cream almost always separated onto one wafer.

    “We had expected an effect based on size,” Owens says. “If there was more cream between layers, it should be easier to deform. But that’s not actually the case.”

    Curiously, when they mapped each cookie’s result to its original position in the box, they noticed the cream tended to stick to the inward-facing wafer: Cookies on the left side of the box twisted such that the cream ended up on the right wafer, whereas cookies on the right side separated with cream mostly on the left wafer. They suspect this box distribution may be a result of post-manufacturing environmental effects, such as heating or jostling that may cause cream to peel slightly away from the outer wafers, even before twisting.

    The understanding gained from the properties of Oreo cream could potentially be applied to the design of other complex fluid materials.

    “My 3D printing fluids are in the same class of materials as Oreo cream,” she says. “So, this new understanding can help me better design ink when I’m trying to print flexible electronics from a slurry of carbon nanotubes, because they deform in almost exactly the same way.”

    As for the cookie itself, she suggests that if the inside of Oreo wafers were more textured, the cream might grip better onto both sides and split more evenly when twisted.

    “As they are now, we found there’s no trick to twisting that would split the cream evenly,” Owens concludes.

    This research was supported, in part, by the MIT UROP program and by the National Defense Science and Engineering Graduate Fellowship Program.

  • Climate modeling confirms historical records showing rise in hurricane activity

    When forecasting how storms may change in the future, it helps to know something about their past. Judging from historical records dating back to the 1850s, hurricanes in the North Atlantic have become more frequent over the last 150 years.

    However, scientists have questioned whether this upward trend is a reflection of reality, or simply an artifact of lopsided record-keeping. If 19th-century storm trackers had access to 21st-century technology, would they have recorded more storms? This inherent uncertainty has kept scientists from relying on storm records, and the patterns within them, for clues to how climate influences storms.

    A new MIT study published today in Nature Communications has used climate modeling, rather than storm records, to reconstruct the history of hurricanes and tropical cyclones around the world. The study finds that North Atlantic hurricanes have indeed increased in frequency over the last 150 years, similar to what historical records have shown.

    In particular, major hurricanes, and hurricanes in general, are more frequent today than in the past. And those that make landfall appear to have grown more powerful, carrying more destructive potential.

    Curiously, while the North Atlantic has seen an overall increase in storm activity, the same trend was not observed in the rest of the world. The study found that the frequency of tropical cyclones globally has not changed significantly in the last 150 years.

    “The evidence does point, as the original historical record did, to long-term increases in North Atlantic hurricane activity, but no significant changes in global hurricane activity,” says study author Kerry Emanuel, the Cecil and Ida Green Professor of Atmospheric Science in MIT’s Department of Earth, Atmospheric, and Planetary Sciences. “It certainly will change the interpretation of climate’s effects on hurricanes — that it’s really the regionality of the climate, and that something happened to the North Atlantic that’s different from the rest of the globe. It may have been caused by global warming, which is not necessarily globally uniform.”

    Chance encounters

    The most comprehensive record of tropical cyclones is compiled in a database known as the International Best Track Archive for Climate Stewardship (IBTrACS). This historical record includes modern measurements from satellites and aircraft that date back to the 1940s. The database’s older records are based on reports from ships and islands that happened to be in a storm’s path. These earlier records date back to 1851, and overall the database shows an increase in North Atlantic storm activity over the last 150 years.

    “Nobody disagrees that that’s what the historical record shows,” Emanuel says. “On the other hand, most sensible people don’t really trust the historical record that far back in time.”

    Recently, scientists have used a statistical approach to identify storms that the historical record may have missed. To do so, they consulted all the digitally reconstructed shipping routes in the Atlantic over the last 150 years and mapped these routes over modern-day hurricane tracks. They then estimated the chance that a ship would encounter or entirely miss a hurricane’s presence. This analysis found a significant number of early storms were likely missed in the historical record. Accounting for these missed storms, they concluded that there was a chance that storm activity had not changed over the last 150 years.
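
    The gist of that statistical approach can be sketched as a Monte Carlo experiment: scatter synthetic storm tracks over a map of ship positions and count how many storms pass within sighting range of at least one ship. All positions below are invented; the real analyses used digitized historical shipping routes and recorded hurricane tracks.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    ships = rng.uniform(0, 1000, size=(200, 2))      # ship positions on an arbitrary km grid

    def storm_detected(track, ships, radius_km=50.0):
        """True if any point of the storm track passes within sighting range of a ship."""
        d = np.linalg.norm(ships[None, :, :] - track[:, None, :], axis=2)
        return bool((d < radius_km).any())

    detected, n_storms = 0, 500
    for _ in range(n_storms):
        start = rng.uniform(0, 1000, size=2)
        steps = rng.normal(20, 5, size=(30, 2))      # a 30-point random-walk storm track
        track = start + np.cumsum(steps, axis=0)
        detected += storm_detected(track, ships)

    print(f"{detected / n_storms:.0%} of storms pass within range of a ship")
    ```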

    But Emanuel points out that hurricane paths in the 19th century may have looked different from today’s tracks. What’s more, the scientists may have missed key shipping routes in their analysis, as older routes have not yet been digitized.

    “All we know is, if there had been a change (in storm activity), it would not have been detectable, using digitized ship records,” Emanuel says. “So I thought, there’s an opportunity to do better, by not using historical data at all.”

    Seeding storms

    Instead, he estimated past hurricane activity using dynamical downscaling — a technique that his group developed and has applied over the last 15 years to study climate’s effect on hurricanes. The technique starts with a coarse global climate simulation and embeds within this model a finer-resolution model that simulates features as small as hurricanes. The combined models are then fed with real-world measurements of atmospheric and ocean conditions. Emanuel then scatters the realistic simulation with hurricane “seeds” and runs the simulation forward in time to see which seeds bloom into full-blown storms.

    For the new study, Emanuel embedded a hurricane model into a climate “reanalysis” — a type of climate model that combines observations from the past with climate simulations to generate accurate reconstructions of past weather patterns and climate conditions. He used a particular subset of climate reanalyses that only accounts for observations collected from the surface — for instance from ships, which have recorded weather conditions and sea surface temperatures consistently since the 1850s, as opposed to from satellites, which only began systematic monitoring in the 1970s.

    “We chose to use this approach to avoid any artificial trends brought about by the introduction of progressively different observations,” Emanuel explains.

    He ran an embedded hurricane model on three different climate reanalyses, simulating tropical cyclones around the world over the past 150 years. Across all three models, he observed “unequivocal increases” in North Atlantic hurricane activity.

    “There’s been this quite large increase in activity in the Atlantic since the mid-19th century, which I didn’t expect to see,” Emanuel says.

    Within this overall rise in storm activity, he also observed a “hurricane drought” — a period during the 1970s and 80s when the number of yearly hurricanes momentarily dropped. This pause in storm activity can also be seen in historical records, and Emanuel’s group proposes a cause: sulfate aerosols, which were byproducts of fossil fuel combustion, likely set off a cascade of climate effects that cooled the North Atlantic and temporarily suppressed hurricane formation.

    “The general trend over the last 150 years was increasing storm activity, interrupted by this hurricane drought,” Emanuel notes. “And at this point, we’re more confident of why there was a hurricane drought than why there is an ongoing, long-term increase in activity that began in the 19th century. That is still a mystery, and it bears on the question of how global warming might affect future Atlantic hurricanes.”

    This research was supported, in part, by the National Science Foundation.