More stories

  • Technology developed by MIT engineers makes pesticides stick to plant leaves

    Reducing the amount of agricultural sprays used by farmers — including fertilizers, pesticides and herbicides — could cut down the amount of polluting runoff that ends up in the environment while at the same time reducing farmers’ costs and perhaps even enhancing their productivity. A classic win-win-win.

    A team of researchers at MIT and a spinoff company they launched has developed a system to do just that. Their technology adds a thin coating around droplets as they are being sprayed onto a field, greatly reducing their tendency to bounce off leaves and end up wasted on the ground. Instead, the coated droplets stick to the leaves as intended.

    The research is described today in the journal Soft Matter, in a paper by recent MIT alumni Vishnu Jayaprakash PhD ’22 and Sreedath Panat PhD ’23, graduate student Simon Rufer, and MIT professor of mechanical engineering Kripa Varanasi.

    A recent study found that if farmers didn’t use pesticides, they would lose 78 percent of fruit, 54 percent of vegetable, and 32 percent of cereal production. Despite their importance, a lack of technology that monitors and optimizes sprays has forced farmers to rely on personal experience and rules of thumb to decide how to apply these chemicals. As a result, these chemicals tend to be over-sprayed, leading to runoff and chemicals ending up in waterways or building up in the soil.

    Pesticides take a significant toll on global health and the environment, the researchers point out. A recent study found that 31 percent of agricultural soils around the world were at high risk from pesticide pollution. And agricultural chemicals are a major expense for farmers: In the U.S., they spend $16 billion a year just on pesticides.

    Making spraying more efficient is one of the best ways to make food production more sustainable and economical. Agricultural spraying essentially boils down to mixing chemicals into water and spraying water droplets onto plant leaves, which are often inherently water-repellent.
    “Over more than a decade of research in my lab at MIT, we have developed fundamental understandings of spraying and the interaction between droplets and plants — studying when they bounce and all the ways we have to make them stick better and enhance coverage,” Varanasi says.

    The team had previously found a way to reduce the amount of sprayed liquid that bounces away from the leaves it strikes, which involved using two spray nozzles instead of one and spraying mixtures with opposite electrical charges. But they found that farmers were reluctant to take on the expense and effort of converting their spraying equipment to a two-nozzle system. So, the team looked for a simpler alternative.

    They discovered they could achieve the same improvement in droplet retention using a single-nozzle system that can be easily adapted to existing sprayers. Instead of giving the droplets of pesticide an electric charge, they coat each droplet with a vanishingly thin layer of an oily material.

    In their new study, they conducted lab experiments with high-speed cameras. When they sprayed droplets with no special treatment onto a water-repelling (hydrophobic) surface similar to that of many plant leaves, the droplets initially spread out into a pancake-like disk, then rebounded back into a ball and bounced away. But when the researchers coated the surface of the droplets with a tiny amount of oil — making up less than 1 percent of the droplet’s liquid — the droplets spread out and then stayed put. The treatment improved the droplets’ “stickiness” by as much as a hundredfold.

    “When these droplets are hitting the surface and as they expand, they form this oil ring that essentially pins the droplet to the surface,” Rufer says.
    The researchers tried a wide variety of conditions, he says, explaining that they conducted hundreds of experiments, “with different impact velocities, different droplet sizes, different angles of inclination, all the things that fully characterize this phenomenon.” Though different oils varied in their effectiveness, all of them were effective. “Regardless of the impact velocity and the oils, we saw that the rebound height was significantly lower,” he says.

    The effect works with remarkably small amounts of oil. In their initial tests they used 1 percent oil relative to the water, then tried 0.1 percent, and even 0.01 percent. The improvement in droplets sticking to the surface persisted at 0.1 percent, but began to break down below that concentration. “Basically, this oil film acts as a way to trap that droplet on the surface, because oil is very attracted to the surface and sort of holds the water in place,” Rufer says.

    In the researchers’ initial tests they used soybean oil for the coating, figuring this would be a familiar material for the farmers they were working with, many of whom were growing soybeans. But it turned out that though they were producing the beans, the oil was not part of their usual supply chain for use on the farm. In further tests, the researchers found that several chemicals farmers were already routinely using in their spraying, called surfactants and adjuvants, could be used instead, and that some of these provided the same benefits in keeping the droplets stuck on the leaves.

    “That way,” Varanasi says, “we’re not introducing a new chemical or changed chemistries into their field, but they’re using things they’ve known for a long time.”

    Varanasi and Jayaprakash formed a company called AgZen to commercialize the system. In order to prove how much their coating system improves the amount of spray that stays on the plant, they first had to develop a system to monitor spraying in real time.
    That system, which they call RealCoverage, has been deployed on farms ranging in size from a few dozen acres to hundreds of thousands of acres, and on many different crop types, and has saved farmers 30 to 50 percent on their pesticide expenditures, just by improving the controls on the existing sprays. The system is being deployed to 920,000 acres of crops in 2025, the company says, including some in California, Texas, the Midwest, France, and Italy. Adding the cloaking system using new nozzles, the researchers say, should yield at least another doubling of efficiency.

    “You could give back a billion dollars to U.S. growers if you just saved 6 percent of their pesticide budget,” says Jayaprakash, lead author of the research paper and CEO of AgZen. “In the lab we got 300 percent of extra product on the plant. So that means we could get orders of magnitude reductions in the amount of pesticides that farmers are spraying.”

    Farmers had already been using these surfactant and adjuvant chemicals to enhance spraying effectiveness, but they were mixing them into the water solution. For the chemicals to have any effect, farmers had to use much more of these materials, risking burns to the plants. The new coating system reduces the amount of these materials needed, while improving their effectiveness.

    In field tests conducted by AgZen, “we doubled the amount of product on kale and soybeans just by changing where the adjuvant was,” from mixed in to being a coating, Jayaprakash says. It’s convenient for farmers because “all they’re doing is changing their nozzle. They’re getting all their existing chemicals to work better, and they’re getting more product on the plant.”

    And it’s not just for pesticides. “The really cool thing is this is useful for every chemistry that’s going on the leaf, be it an insecticide, a herbicide, a fungicide, or foliar nutrition,” Varanasi says.
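    As a sanity check on the savings quoted above, the arithmetic is simple: 6 percent of the roughly $16 billion U.S. farmers spend on pesticides each year comes to about a billion dollars. The figures are from the article; the calculation sketch is ours.

```python
# Figures from the article: U.S. farmers spend about $16 billion a year on
# pesticides, and Jayaprakash cites a 6 percent saving.
us_pesticide_spend = 16e9  # dollars per year
savings_fraction = 0.06

savings = us_pesticide_spend * savings_fraction
print(f"Estimated annual savings: ${savings / 1e9:.2f} billion")  # → $0.96 billion
```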
    This year, they plan to introduce the new spray system on about 30,000 acres of cropland.

    Varanasi says that with projected world population growth, “the amount of food production has got to double, and we are limited in so many resources, for example we cannot double the arable land. … This means that every acre we currently farm must become more efficient and able to do more with less.” This combination of improved spraying technologies, for both monitoring the spraying and coating the droplets, Varanasi says, “I think is fundamentally changing agriculture.”

    AgZen has recently raised $10 million in venture financing to support rapid commercial deployment of these technologies that can improve the control of chemical inputs into agriculture. “The knowledge we are gathering from every leaf, combined with our expertise in interfacial science and fluid mechanics, is giving us unparalleled insights into how chemicals are used and developed — and it’s clear that we can deliver value across the entire agrochemical supply chain,” Varanasi says. “Our mission is to use these technologies to deliver improved outcomes and reduced costs for the ag industry.”

  • Study: Climate change will reduce the number of satellites that can safely orbit in space

    MIT aerospace engineers have found that greenhouse gas emissions are changing the environment of near-Earth space in ways that, over time, will reduce the number of satellites that can sustainably operate there.

    In a study appearing today in Nature Sustainability, the researchers report that carbon dioxide and other greenhouse gases can cause the upper atmosphere to shrink. An atmospheric layer of special interest is the thermosphere, where the International Space Station and most satellites orbit today. When the thermosphere contracts, the decreasing density reduces atmospheric drag — a force that pulls old satellites and other debris down to altitudes where they will encounter air molecules and burn up.

    Less drag therefore means extended lifetimes for space junk, which will litter sought-after regions for decades and increase the potential for collisions in orbit.

    The team carried out simulations of how carbon emissions affect the upper atmosphere and orbital dynamics, in order to estimate the “satellite carrying capacity” of low Earth orbit. These simulations predict that by the year 2100, the carrying capacity of the most popular regions could be reduced by 50 to 66 percent because of the effects of greenhouse gases.

    “Our behavior with greenhouse gases here on Earth over the past 100 years is having an effect on how we operate satellites over the next 100 years,” says study author Richard Linares, associate professor in MIT’s Department of Aeronautics and Astronautics (AeroAstro).

    “The upper atmosphere is in a fragile state as climate change disrupts the status quo,” adds lead author William Parker, a graduate student in AeroAstro. “At the same time, there’s been a massive increase in the number of satellites launched, especially for delivering broadband internet from space.
    If we don’t manage this activity carefully and work to reduce our emissions, space could become too crowded, leading to more collisions and debris.”

    The study includes co-author Matthew Brown of the University of Birmingham.

    Sky fall

    The thermosphere naturally contracts and expands every 11 years in response to the sun’s regular activity cycle. When the sun’s activity is low, the Earth receives less radiation, and its outermost atmosphere temporarily cools and contracts before expanding again during solar maximum.

    In the 1990s, scientists wondered what response the thermosphere might have to greenhouse gases. Their preliminary modeling showed that, while the gases trap heat in the lower atmosphere, where we experience global warming and weather, the same gases radiate heat at much higher altitudes, effectively cooling the thermosphere. With this cooling, the researchers predicted that the thermosphere should shrink, reducing atmospheric density at high altitudes.

    In the last decade, scientists have been able to measure changes in drag on satellites, which has provided some evidence that the thermosphere is contracting in response to something more than the sun’s natural, 11-year cycle.

    “The sky is quite literally falling — just at a rate that’s on the scale of decades,” Parker says. “And we can see this by how the drag on our satellites is changing.”

    The MIT team wondered how that response will affect the number of satellites that can safely operate in Earth’s orbit. Today, there are over 10,000 satellites drifting through low Earth orbit, the region of space up to 1,200 miles (2,000 kilometers) from Earth’s surface. These satellites deliver essential services, including internet, communications, navigation, weather forecasting, and banking. The satellite population has ballooned in recent years, requiring operators to perform regular collision-avoidance maneuvers to keep safe.
    Any collisions that do occur can generate debris that remains in orbit for decades or centuries, increasing the chance for follow-on collisions with satellites, both old and new.

    “More satellites have been launched in the last five years than in the preceding 60 years combined,” Parker says. “One of the key things we’re trying to understand is whether the path we’re on today is sustainable.”

    Crowded shells

    In their new study, the researchers simulated different greenhouse gas emissions scenarios over the next century to investigate impacts on atmospheric density and drag. For each “shell,” or altitude range of interest, they then modeled the orbital dynamics and the risk of satellite collisions based on the number of objects within the shell. They used this approach to identify each shell’s “carrying capacity” — a term typically used in ecology to describe the number of individuals that an ecosystem can support.

    “We’re taking that carrying capacity idea and translating it to this space sustainability problem, to understand how many satellites low Earth orbit can sustain,” Parker explains.

    The team compared several scenarios: one in which greenhouse gas concentrations remain at their level from the year 2000, and others in which emissions change according to the Intergovernmental Panel on Climate Change (IPCC) Shared Socioeconomic Pathways (SSPs). They found that scenarios with continuing increases in emissions would lead to a significantly reduced carrying capacity throughout low Earth orbit.

    In particular, the team estimates that by the end of this century, the number of satellites safely accommodated between the altitudes of 200 and 1,000 kilometers could be reduced by 50 to 66 percent compared with a scenario in which emissions remain at year-2000 levels.
    If satellite capacity is exceeded, even in a local region, the researchers predict that the region will experience a “runaway instability,” or a cascade of collisions that would create so much debris that satellites could no longer safely operate there.

    Their predictions forecast out to the year 2100, but the team says that certain shells in the atmosphere today are already crowding up with satellites, particularly from recent “megaconstellations” such as SpaceX’s Starlink, which comprises fleets of thousands of small internet satellites.

    “The megaconstellation is a new trend, and we’re showing that because of climate change, we’re going to have a reduced capacity in orbit,” Linares says. “And in local regions, we’re close to approaching this capacity value today.”

    “We rely on the atmosphere to clean up our debris. If the atmosphere is changing, then the debris environment will change too,” Parker adds. “We show the long-term outlook on orbital debris is critically dependent on curbing our greenhouse gas emissions.”

    This research is supported, in part, by the U.S. National Science Foundation, the U.S. Air Force, and the U.K. Natural Environment Research Council.
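    The logic of a drag-limited carrying capacity can be illustrated with a toy balance (our sketch, not the study's actual simulation): collisions create fragments at a rate that grows with the square of the object count in a shell, while drag removes objects at a rate proportional to atmospheric density, so the maximum stable population scales with that density.

```python
# Toy Kessler-style balance (illustrative only; not the MIT team's model).
# Fragments are created by collisions at rate ~ c * n**2 and removed by drag
# at rate ~ k * rho * n, where n is the object count in a shell and rho is
# the local thermospheric density. The shell stays stable while removal beats
# creation, i.e. n < (k / c) * rho, so capacity scales linearly with density.

def carrying_capacity(rho, k_over_c=1000.0):
    """Maximum stable object count for density rho (arbitrary units).
    k_over_c is a made-up constant chosen purely for illustration."""
    return k_over_c * rho

baseline = carrying_capacity(1.0)
# Hypothetical density declines loosely echoing the 50-66 percent capacity
# reductions the study projects under high-emission scenarios.
for frac in (1.0, 0.5, 0.34):
    cap = carrying_capacity(frac)
    print(f"density {frac:.0%} of today -> capacity {cap / baseline:.0%} of today")
```

In this caricature, halving the density halves the shell's capacity; the real study couples full orbital dynamics to IPCC emissions scenarios rather than assuming a linear scaling.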

  • Study: The ozone hole is healing, thanks to global reduction of CFCs

    A new MIT-led study confirms that the Antarctic ozone layer is healing, as a direct result of global efforts to reduce ozone-depleting substances.

    Scientists including the MIT team have observed signs of ozone recovery in the past. But the new study is the first to show, with high statistical confidence, that this recovery is due primarily to the reduction of ozone-depleting substances, versus other influences such as natural weather variability or increased greenhouse gas emissions to the stratosphere.

    “There’s been a lot of qualitative evidence showing that the Antarctic ozone hole is getting better. This is really the first study that has quantified confidence in the recovery of the ozone hole,” says study author Susan Solomon, the Lee and Geraldine Martin Professor of Environmental Studies and Chemistry. “The conclusion is, with 95 percent confidence, it is recovering. Which is awesome. And it shows we can actually solve environmental problems.”

    The new study appears today in the journal Nature. Graduate student Peidong Wang from the Solomon group in the Department of Earth, Atmospheric and Planetary Sciences (EAPS) is the lead author. His co-authors include Solomon and EAPS Research Scientist Kane Stone, along with collaborators from multiple other institutions.

    Roots of ozone recovery

    Within the Earth’s stratosphere, ozone is a naturally occurring gas that acts as a sort of sunscreen, protecting the planet from the sun’s harmful ultraviolet radiation. In 1985, scientists discovered a “hole” in the ozone layer over Antarctica that opened up during the austral spring, between September and December.
    This seasonal ozone depletion was suddenly allowing UV rays to filter down to the surface, leading to skin cancer and other adverse health effects.

    In 1986, Solomon, who was then working at the National Oceanic and Atmospheric Administration (NOAA), led expeditions to the Antarctic, where she and her colleagues gathered evidence that quickly confirmed the ozone hole’s cause: chlorofluorocarbons, or CFCs — chemicals that were then used in refrigeration, air conditioning, insulation, and aerosol propellants. When CFCs drift up into the stratosphere, they can break down ozone under certain seasonal conditions.

    The following year, those revelations led to the drafting of the Montreal Protocol — an international treaty that aimed to phase out the production of CFCs and other ozone-depleting substances, in hopes of healing the ozone hole.

    In 2016, Solomon led a study reporting key signs of ozone recovery. The ozone hole seemed to be shrinking with each year, especially in September, the time of year when it opens up. Still, these observations were qualitative. The study showed large uncertainties regarding how much of this recovery was due to concerted efforts to reduce ozone-depleting substances, or whether the shrinking ozone hole was a result of other “forcings,” such as year-to-year weather variability from El Niño, La Niña, and the polar vortex.

    “While detecting a statistically significant increase in ozone is relatively straightforward, attributing these changes to specific forcings is more challenging,” says Wang.

    Anthropogenic healing

    In their new study, the MIT team took a quantitative approach to identify the cause of Antarctic ozone recovery. The researchers borrowed a method from the climate change community, known as “fingerprinting,” which was pioneered by Klaus Hasselmann, who was awarded the Nobel Prize in Physics in 2021 for the technique.
    In the context of climate, fingerprinting refers to a method that isolates the influence of specific climate factors, apart from natural, meteorological noise. Hasselmann applied fingerprinting to identify, confirm, and quantify the anthropogenic fingerprint of climate change.

    Solomon and Wang looked to apply the fingerprinting method to identify another anthropogenic signal: the effect of human reductions in ozone-depleting substances on the recovery of the ozone hole.

    “The atmosphere has really chaotic variability within it,” Solomon says. “What we’re trying to detect is the emerging signal of ozone recovery against that kind of variability, which also occurs in the stratosphere.”

    The researchers started with simulations of the Earth’s atmosphere and generated multiple “parallel worlds,” or simulations of the same global atmosphere, under different starting conditions. For instance, they ran simulations under conditions that assumed no increase in greenhouse gases or ozone-depleting substances. Under these conditions, any changes in ozone should be the result of natural weather variability. They also ran simulations with only increasing greenhouse gases, as well as only decreasing ozone-depleting substances.

    They compared these simulations to observe how ozone in the Antarctic stratosphere changed, both with season and across different altitudes, in response to different starting conditions. From these simulations, they mapped out the times and altitudes where ozone recovered from month to month, over several decades, and identified a key “fingerprint,” or pattern, of ozone recovery that was specifically due to conditions of declining ozone-depleting substances.

    The team then looked for this fingerprint in actual satellite observations of the Antarctic ozone hole from 2005 to the present day. They found that, over time, the fingerprint that they identified in simulations became clearer and clearer in observations.
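    The core of a fingerprinting calculation can be sketched in a few lines (our illustration with synthetic data, not the study's code): project each year's observed field onto the model-derived fingerprint pattern, then compare the trend in that projection against the spread of trends produced by unforced natural-variability runs.

```python
# Minimal fingerprint-detection sketch with synthetic data (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n_space, n_years = 50, 20

# Hypothetical spatial pattern of forced ozone recovery, from "model" runs.
fingerprint = rng.standard_normal(n_space)
fingerprint /= np.linalg.norm(fingerprint)

# Synthetic "observations": the fingerprint emerging linearly from noise.
signal_amplitude = np.linspace(0.0, 3.0, n_years)
obs = signal_amplitude[:, None] * fingerprint \
    + 0.5 * rng.standard_normal((n_years, n_space))

projection = obs @ fingerprint  # detection index, one value per year
trend = np.polyfit(np.arange(n_years), projection, 1)[0]

# Null distribution: trends from projecting pure natural variability.
null_trends = [
    np.polyfit(np.arange(n_years),
               (0.5 * rng.standard_normal((n_years, n_space))) @ fingerprint,
               1)[0]
    for _ in range(1000)
]
z = (trend - np.mean(null_trends)) / np.std(null_trends)
print(f"signal-to-noise of recovery trend: {z:.1f}")
```

A large signal-to-noise ratio means the observed trend in the projection is very unlikely to arise from variability alone, which is the sense in which the study attaches 95 percent confidence to the recovery.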
    In 2018, the fingerprint was at its strongest, and the team could say with 95 percent confidence that ozone recovery was due mainly to reductions in ozone-depleting substances.

    “After 15 years of observational records, we see this signal to noise with 95 percent confidence, suggesting there’s only a very small chance that the observed pattern similarity can be explained by variability noise,” Wang says. “This gives us confidence in the fingerprint. It also gives us confidence that we can solve environmental problems. What we can learn from ozone studies is how different countries can swiftly follow these treaties to decrease emissions.”

    If the trend continues, and the fingerprint of ozone recovery grows stronger, Solomon anticipates that soon there will be a year, here and there, when the ozone layer stays entirely intact. And eventually, the ozone hole should stay shut for good.

    “By something like 2035, we might see a year when there’s no ozone hole depletion at all in the Antarctic. And that will be very exciting for me,” she says. “And some of you will see the ozone hole go away completely in your lifetimes. And people did that.”

    This research was supported, in part, by the National Science Foundation and NASA.

  • Will neutrons compromise the operation of superconducting magnets in a fusion plant?

    High-temperature superconducting magnets made from REBCO, an acronym for rare earth barium copper oxide, make it possible to create an intense magnetic field that can confine the extremely hot plasma needed for fusion reactions, which combine two hydrogen atoms to form an atom of helium, releasing a neutron in the process.

    But some early tests suggested that neutron irradiation inside a fusion power plant might instantaneously suppress the superconducting magnets’ ability to carry current without resistance (a limit known as the critical current), potentially causing a reduction in the fusion power output.

    Now, a series of experiments has clearly demonstrated that this instantaneous effect of neutron bombardment, known as the “beam on effect,” should not be an issue during reactor operation, thus clearing the path for projects such as the ARC fusion system being developed by MIT spinoff company Commonwealth Fusion Systems.

    The findings were reported in the journal Superconductor Science and Technology, in a paper by MIT graduate student Alexis Devitre and professors Michael Short, Dennis Whyte, and Zachary Hartwig, along with six others.

    “Nobody really knew if it would be a concern,” Short explains. He recalls looking at these early findings: “Our group thought, man, somebody should really look into this. But now, luckily, the result of the paper is: It’s conclusively not a concern.”

    The possible issue first arose during some initial tests of the REBCO tapes planned for use in the ARC system. “I can remember the night when we first tried the experiment,” Devitre recalls. “We were all down in the accelerator lab, in the basement.
    It was a big shocker because suddenly the measurement we were looking at, the critical current, just went down by 30 percent” when it was measured under radiation conditions (approximating those of the fusion system), as opposed to when it was measured only after irradiation.

    Before that, researchers had irradiated the REBCO tapes and then tested them afterward, Short says. “We had the idea to measure while irradiating, the way it would be when the reactor’s really on,” he says. “And then we observed this giant difference, and we thought, oh, this is a big deal. It’s a margin you’d want to know about if you’re designing a reactor.”

    After a series of carefully calibrated tests, it turned out the drop in critical current was not caused by the irradiation at all, but was just an effect of temperature changes brought on by the proton beam used for the irradiation experiments. This is something that would not be a factor in an actual fusion plant, Short says.

    “We repeated experiments ‘oh so many times’ and collected about a thousand data points,” Devitre says. The team then carried out a detailed statistical analysis, which showed that the effects were exactly the same under conditions where the material was only heated as when it was both heated and irradiated.

    This excluded the possibility that the instantaneous suppression of the critical current had anything to do with the “beam on effect,” at least within the sensitivity of their tests. “Our experiments are quite sensitive,” Short says. “We can never say there’s no effect, but we can say that there’s no important effect.”

    Carrying out these tests required building a special facility for the purpose; only a few such facilities exist in the world. “They’re all custom builds, and without this, we wouldn’t have been able to find out the answer,” he says.

    The finding that this specific issue is not a concern for the design of fusion plants “illustrates the power of negative results.
    If you can conclusively prove that something doesn’t happen, you can stop scientists from wasting their time hunting for something that doesn’t exist.” And in this case, Short says, “You can tell the fusion companies: ‘You might have thought this effect would be real, but we’ve proven that it’s not, and you can ignore it in your designs.’ So that’s one more risk retired.”

    That could be a relief not only to Commonwealth Fusion Systems but also to several other companies pursuing fusion plant designs, Devitre says. “There’s a bunch. And it’s not just fusion companies,” he adds. There remains the important issue of longer-term degradation of the REBCO that would occur over years or decades, which the group is presently investigating. Others are pursuing the use of these magnets for satellite thrusters and particle accelerators to study subatomic physics, where the effect could also have been a concern. For all these uses, “this is now one less thing to be concerned about,” Devitre says.

    The research team also included David Fischer, Kevin Woller, Maxwell Rae, Lauryn Kortman, and Zoe Fisher at MIT, and N. Riva at Proxima Fusion in Germany. This research was supported by Eni S.p.A. through the MIT Energy Initiative.
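    The shape of the statistical check described above can be sketched as follows (our illustration with made-up numbers, not the team's thousand-point dataset): compare the critical-current drops seen with heating alone against heating plus irradiation, and ask whether the difference exceeds the measurement scatter.

```python
import math

# Hypothetical fractional drops in critical current (0.30 = a 30 percent drop),
# invented for illustration; the real analysis used about 1,000 data points.
heated_only  = [0.29, 0.31, 0.30, 0.28, 0.32, 0.30]
heated_irrad = [0.30, 0.29, 0.31, 0.30, 0.28, 0.32]

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):  # sample variance
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Welch's t-statistic: difference of means over its standard error.
t = (mean(heated_only) - mean(heated_irrad)) / math.sqrt(
    var(heated_only) / len(heated_only)
    + var(heated_irrad) / len(heated_irrad))
print(f"Welch t-statistic: {t:.2f}")  # near zero: no detectable beam-on effect
```

A t-statistic near zero says the heated-only and heated-plus-irradiated measurements are statistically indistinguishable, which is the sense in which the beam-on effect was excluded within the experiments' sensitivity.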

  • High-speed videos show what happens when a droplet splashes into a pool

    Rain can fall at speeds of up to 25 miles per hour. If the droplets land in a puddle or pond, they can form a crown-like splash that, with enough force, can dislodge surface particles and launch them into the air.

    Now MIT scientists have taken high-speed videos of droplets splashing into a deep pool, to track how the fluid evolves, above and below the water line, frame by millisecond frame. Their work could help to predict how splashing droplets, such as those from rainstorms and irrigation systems, may impact watery surfaces and aerosolize surface particles, such as pollen on puddles or pesticides in agricultural runoff.

    The team carried out experiments in which they dispensed water droplets of various sizes and from various heights into a pool of water. Using high-speed imaging, they measured how the liquid pool deformed as the impacting droplet hit the pool’s surface.

    Across all their experiments, they observed a common splash evolution: As a droplet hit the pool, it pushed down below the surface to form a “crater,” or cavity. At nearly the same time, a wall of liquid rose above the surface, forming a crown. Interestingly, the team observed that small, secondary droplets were ejected from the crown before the crown reached its maximum height. This entire evolution happens in a fraction of a second.


    Scientists have caught snapshots of droplet splashes in the past, such as the famous “Milk Drop Coronet” — a photo of a drop of milk in mid-splash, taken by the late MIT professor Harold “Doc” Edgerton, who invented a photographic technique to capture quickly moving objects.

    The new work represents the first time scientists have used such high-speed images to model the entire splash dynamics of a droplet in a deep pool, combining what happens both above and below the surface. The team used the imaging to gather new data central to building a mathematical model that predicts how a droplet’s shape will morph and merge as it hits a pool’s surface. They plan to use the model as a baseline to explore to what extent a splashing droplet might drag up and launch particles from the water pool.

    “Impacts of drops on liquid layers are ubiquitous,” says study author Lydia Bourouiba, a professor in the MIT departments of Civil and Environmental Engineering and Mechanical Engineering, and a core member of the Institute for Medical Engineering and Science (IMES). “Such impacts can produce myriads of secondary droplets that could act as carriers for pathogens, particles, or microbes that are on the surface of impacted pools or contaminated water bodies. This work is key in enabling prediction of droplet size distributions, and potentially also what such drops can carry with them.”

    Bourouiba and her mentees have published their results in the Journal of Fluid Mechanics. MIT co-authors include former graduate student Raj Dandekar PhD ’22, postdoc (Eric) Naijian Shen, and student mentee Boris Naar.

    Above and below

    At MIT, Bourouiba heads up the Fluid Dynamics of Disease Transmission Laboratory, part of the Fluids and Health Network, where she and her team explore the fundamental physics of fluids and droplets in a range of environmental, energy, and health contexts, including disease transmission.
    For their new study, the team looked to better understand how droplets impact a deep pool — a seemingly simple phenomenon that nevertheless has been tricky to precisely capture and characterize.

    Bourouiba notes that there have been recent breakthroughs in modeling the evolution of a splashing droplet below a pool’s surface. As a droplet hits a pool of water, it breaks through the surface and drags air down through the pool to create a short-lived crater. Until now, scientists have focused on the evolution of this underwater cavity, mainly for applications in energy harvesting. What happens above the water, and how a droplet’s crown-like shape evolves with the cavity below, remained less understood.

    “The descriptions and understanding of what happens below the surface, and above, have remained very much divorced,” says Bourouiba, who believes such an understanding can help to predict how droplets launch and spread chemicals, particles, and microbes into the air.

    Splash in 3D

    To study the coupled dynamics between a droplet’s cavity and crown, the team set up an experiment to dispense water droplets into a deep pool. For the purposes of their study, the researchers considered a deep pool to be a body of water deep enough that a splashing droplet would remain far away from the pool’s bottom. In these terms, they found that a pool with a depth of at least 20 centimeters was sufficient for their experiments.

    They varied each droplet’s size, with an average diameter of about 5 millimeters. They also dispensed droplets from various heights, causing the droplets to hit the pool’s surface at different speeds, on average about 5 meters per second. The overall dynamics, Bourouiba says, should be similar to what occurs on the surface of a puddle or pond during an average rainstorm.

    “This is capturing the speed at which raindrops fall,” she says. “These wouldn’t be very small, misty drops.
    This would be rainstorm drops for which one needs an umbrella.”

    Using high-speed imaging techniques inspired by Edgerton’s pioneering photography, the team captured videos of pool-splashing droplets, at rates of up to 12,500 frames per second. They then applied in-house image-processing methods to extract key measurements from the image sequences, such as the changing width and depth of the underwater cavity, and the evolving diameter and height of the rising crown. The researchers also captured especially tricky measurements of the crown’s wall thickness profile and inner flow — the cylinder that rises out of the pool, just before it forms a rim and points that are characteristic of a crown.

    “This cylinder-like wall of rising liquid, and how it evolves in time and space, is at the heart of everything,” Bourouiba says. “It’s what connects the fluid from the pool to what will go into the rim and then be ejected into the air through smaller, secondary droplets.”

    The researchers worked the image data into a set of “evolution equations,” or a mathematical model that relates the various properties of an impacting droplet, such as the width of its cavity and the thickness and speed profiles of its crown wall, and how these properties change over time, given a droplet’s starting size and impact speed.

    “We now have a closed-form mathematical expression that people can use to see how all these quantities of a splashing droplet change over space and time,” says co-author Shen, who plans, with Bourouiba, to apply the new model to the behavior of secondary droplets and to understanding how a splash ends up dispersing particles such as pathogens and pesticides.
    “This opens up the possibility to study all these problems of splash in 3D, with self-contained closed-form equations, which was not possible before.”

    This research was supported, in part, by the Department of Agriculture-National Institute of Food and Agriculture Specialty Crop Research Initiative; the Richard and Susan Smith Family Foundation; the National Science Foundation; the Centers for Disease Control and Prevention-National Institute for Occupational Safety and Health; Inditex; and the National Institute of Allergy and Infectious Diseases of the National Institutes of Health.
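    The impact parameters quoted above (drops of roughly 5 millimeters across hitting the pool at about 5 meters per second) place the experiments in a well-defined splashing regime. As a rough, illustrative calculation (not part of the paper), the standard dimensionless groups of drop-impact physics can be computed from textbook water properties:

```python
# Illustrative calculation (not from the paper): dimensionless groups
# for a water drop impacting a deep pool, using the parameters quoted
# in the article (diameter ~5 mm, impact speed ~5 m/s) and textbook
# properties of water at room temperature.

RHO = 1000.0    # density of water, kg/m^3
SIGMA = 0.072   # surface tension of water, N/m
MU = 0.001      # dynamic viscosity of water, Pa*s
G = 9.81        # gravitational acceleration, m/s^2

def weber(v, d):
    """Weber number: inertia vs. surface tension."""
    return RHO * v**2 * d / SIGMA

def reynolds(v, d):
    """Reynolds number: inertia vs. viscosity."""
    return RHO * v * d / MU

def froude(v, d):
    """Froude number: inertia vs. gravity."""
    return v**2 / (G * d)

v, d = 5.0, 0.005  # impact speed (m/s) and drop diameter (m) from the article
print(f"We = {weber(v, d):.0f}")     # ~1700: well into the crown-splash regime
print(f"Re = {reynolds(v, d):.0f}")  # 25000
print(f"Fr = {froude(v, d):.0f}")    # ~510
```

    Weber numbers in the thousands correspond to the crown-forming splashes the article describes, consistent with the team's focus on the rising crown wall and its secondary droplets.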

  • in

    3 Questions: Exploring the limits of carbon sequestration

    As part of a multi-pronged approach toward curbing the effects of greenhouse gas emissions, scientists seek to better understand the impact of rising carbon dioxide (CO2) levels on terrestrial ecosystems, particularly tropical forests. To that end, climate scientist César Terrer, the Class of 1958 Career Development Assistant Professor of Civil and Environmental Engineering (CEE) at MIT, and colleague Josh Fisher of Chapman University are bringing their scientific minds to bear on a unique setting — an active volcano in Costa Rica — as a way to study carbon dioxide emissions and their influence.

    Elevated CO2 levels can lead to a phenomenon known as the CO2 fertilization effect, where plants grow more and absorb greater amounts of carbon, providing a cooling effect. While this effect has the potential to be a natural climate change mitigator, the extent of how much carbon plants can continue to absorb remains uncertain. There are growing concerns from scientists that plants may eventually reach a saturation point, losing their ability to offset increasing atmospheric CO2. Understanding these dynamics is crucial for accurate climate predictions and developing strategies to manage carbon sequestration.

    Here, Terrer discusses his innovative approach, his motivations for joining the project, and the importance of advancing this research.

    Q: Why did you get involved in this line of research, and what makes it unique?

    A: Josh Fisher, a climate scientist and long-time collaborator, had the brilliant idea to take advantage of naturally high CO2 levels near active volcanoes to study the fertilization effect in real-world conditions. Conducting such research in dense tropical forests like the Amazon — where the largest uncertainties about CO2 fertilization exist — is challenging.
    It would require large-scale CO2 tanks and extensive infrastructure to evenly distribute the gas throughout the towering trees and intricate canopy layers — a task that is not only logistically complex, but also highly costly. Our approach allows us to circumvent those obstacles and gather critical data in a way that hasn’t been done before.

    Josh was looking for an expert in the field of carbon ecology to co-lead and advance this research with him. My expertise in the dynamics that regulate carbon storage in terrestrial ecosystems under climate change made for a natural fit. This field has been central to my research, and was the focus of my PhD thesis.

    Our experiments inside the Rincon de la Vieja National Park are particularly exciting because CO2 concentrations in the areas near the volcano are four times higher than the global average. This gives us a rare opportunity to observe how elevated CO2 affects plant biomass in a natural setting — something that has never been attempted at this scale.

    Q: How are you measuring CO2 concentrations at the volcano?

    A: We have installed a network of 50 sensors in the forest canopy surrounding the volcano. These sensors continuously monitor CO2 levels, allowing us to compare areas with naturally high CO2 emissions from the volcano to control areas with typical atmospheric CO2 concentrations. The sensors are Bluetooth-enabled, requiring us to be in close proximity to retrieve the data. They will remain in place for a full year, capturing a continuous dataset on CO2 fluctuations. Our next data collection trip is scheduled for March, with another planned a year after the initial deployment.

    Q: What are the long-term goals of this research?

    A: Our primary objective is to determine whether the CO2 fertilization effect can be sustained, or if plants will eventually reach a saturation point, limiting their ability to absorb additional carbon.
    Understanding this threshold is crucial for improving climate models and carbon mitigation strategies.

    To expand the scope of our measurements, we are exploring the use of airborne technologies — such as drones or airplane-mounted sensors — to assess carbon storage across larger areas. This would provide a more comprehensive view of carbon sequestration potential in tropical ecosystems.

    Ultimately, this research could offer critical insights into the future role of forests in mitigating climate change, helping scientists and policymakers develop more accurate carbon budgets and climate projections. If successful, our approach could pave the way for similar studies in other ecosystems, deepening our understanding of how nature responds to rising CO2 levels.
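    The sensor comparison described above, plots near the volcano versus control plots, boils down to contrasting CO2 time series between two groups of sensors. A minimal sketch of that bookkeeping, with sensor names and readings invented purely for illustration (the article reports only that near-volcano concentrations run about four times the global average):

```python
# Toy sketch of the treatment-vs-control comparison described in the
# interview. Sensor names and readings are invented for illustration;
# the real study uses 50 Bluetooth-enabled canopy sensors logging for
# a full year.
from statistics import mean

# ppm readings; ~420 ppm is a typical global-average baseline
readings = {
    "control_01": [418, 422, 425],
    "control_02": [415, 419, 421],
    "volcano_01": [1650, 1700, 1720],  # near-vent plots run ~4x ambient
    "volcano_02": [1600, 1680, 1710],
}

def group_mean(prefix):
    """Average all readings from sensors whose name starts with prefix."""
    vals = [v for name, series in readings.items()
            if name.startswith(prefix) for v in series]
    return mean(vals)

control = group_mean("control")
volcano = group_mean("volcano")
print(f"control ~{control:.0f} ppm, volcano ~{volcano:.0f} ppm, "
      f"ratio {volcano / control:.1f}x")
```

    Pairing each CO2 group with measured plant biomass in the same plots is what would then reveal whether the fertilization effect persists at elevated concentrations.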

  • in

    Chip-based system for terahertz waves could enable more efficient, sensitive electronics

    The use of terahertz waves, which have shorter wavelengths and higher frequencies than radio waves, could enable faster data transmission, more precise medical imaging, and higher-resolution radar.

    But effectively generating terahertz waves using a semiconductor chip, which is essential for incorporation into electronic devices, is notoriously difficult.

    Many current techniques can’t generate waves with enough radiating power for useful applications unless they utilize bulky and expensive silicon lenses. Higher radiating power allows terahertz signals to travel farther. Such lenses, which are often larger than the chip itself, make it hard to integrate the terahertz source into an electronic device.

    To overcome these limitations, MIT researchers developed a terahertz amplifier-multiplier system that achieves higher radiating power than existing devices without the need for silicon lenses.

    By affixing a thin, patterned sheet of material to the back of the chip and utilizing higher-power Intel transistors, the researchers produced a more efficient, yet scalable, chip-based terahertz wave generator.

    This compact chip could be used to make terahertz arrays for applications like improved security scanners for detecting hidden objects or environmental monitors for pinpointing airborne pollutants.

    “To take full advantage of a terahertz wave source, we need it to be scalable. A terahertz array might have hundreds of chips, and there is no place to put silicon lenses because the chips are combined with such high density. We need a different package, and here we’ve demonstrated a promising approach that can be used for scalable, low-cost terahertz arrays,” says Jinchen Wang, a graduate student in the Department of Electrical Engineering and Computer Science (EECS) and lead author of a paper on the terahertz radiator.

    He is joined on the paper by EECS graduate students Daniel Sheen and Xibi Chen; Steven F. Nagel, managing director of the T.J.
    Rodgers RLE Laboratory; and senior author Ruonan Han, an associate professor in EECS, who leads the Terahertz Integrated Electronics Group. The research will be presented at the IEEE International Solid-State Circuits Conference.

    Making waves

    Terahertz waves sit on the electromagnetic spectrum between radio waves and infrared light. Their higher frequencies enable them to carry more information per second than radio waves, while they can safely penetrate a wider range of materials than infrared light.

    One way to generate terahertz waves is with a CMOS chip-based amplifier-multiplier chain that increases the frequency of radio waves until they reach the terahertz range. To achieve the best performance, waves go through the silicon chip and are eventually emitted out the back into the open air.

    But a property known as the dielectric constant gets in the way of a smooth transmission.

    The dielectric constant influences how electromagnetic waves interact with a material. It affects the amount of radiation that is absorbed, reflected, or transmitted. Because the dielectric constant of silicon is much higher than that of air, most terahertz waves are reflected at the silicon-air boundary rather than being cleanly transmitted out the back.

    Since most signal strength is lost at this boundary, current approaches often use silicon lenses to boost the power of the remaining signal. The MIT researchers approached this problem differently.

    They drew on an electromagnetic concept known as matching. With matching, the goal is to bridge the gap between the dielectric constants of silicon and air, which minimizes the amount of signal that is reflected at the boundary.

    They accomplish this by sticking a thin sheet of material whose dielectric constant lies between those of silicon and air to the back of the chip.
    With this matching sheet in place, most waves will be transmitted out the back rather than being reflected.

    A scalable approach

    They chose a low-cost, commercially available substrate material with a dielectric constant very close to what they needed for matching. To improve performance, they used a laser cutter to punch tiny holes into the sheet until its dielectric constant was exactly right.

    “Since the dielectric constant of air is 1, if you just cut some subwavelength holes in the sheet, it is equivalent to injecting some air, which lowers the overall dielectric constant of the matching sheet,” Wang explains.

    In addition, they designed their chip with special transistors developed by Intel that have a higher maximum frequency and breakdown voltage than traditional CMOS transistors.

    “These two things taken together, the more powerful transistors and the dielectric sheet, plus a few other small innovations, enabled us to outperform several other devices,” he says.

    Their chip generated terahertz signals with a peak radiation power of 11.1 decibel-milliwatts, the best among state-of-the-art techniques.
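    The reflection problem and the matching fix described above can be estimated with textbook formulas. The sketch below is illustrative and assumes a standard silicon dielectric constant of about 11.7; the article does not give the sheet's actual properties or the mixing rule the team used, so the volume-averaged hole model here is an assumption:

```python
# Textbook estimate (not from the paper) of why a bare silicon-air
# interface reflects strongly, and what an ideal matching layer looks like.
import math

EPS_SI = 11.7   # relative dielectric constant of silicon (standard value)
EPS_AIR = 1.0

def fresnel_power_reflectance(eps1, eps2):
    """Fraction of power reflected at normal incidence between two media."""
    n1, n2 = math.sqrt(eps1), math.sqrt(eps2)
    return ((n1 - n2) / (n1 + n2)) ** 2

# Bare interface: a large index step means a large reflection
# (~30% at normal incidence; worse at oblique angles).
r_bare = fresnel_power_reflectance(EPS_SI, EPS_AIR)

# Ideal single matching layer: its dielectric constant is the
# geometric mean of the two media it bridges.
eps_match = math.sqrt(EPS_SI * EPS_AIR)   # ~3.42

# Crude volume-averaged mixing rule (an assumption, not the team's
# model): cutting an air fraction f of holes into a sheet of
# dielectric constant eps_sheet lowers its effective value toward 1.
def effective_eps(eps_sheet, air_fraction):
    return air_fraction * EPS_AIR + (1 - air_fraction) * eps_sheet

print(f"bare Si-air reflectance: {r_bare:.2f}")
print(f"ideal matching-layer dielectric constant: {eps_match:.2f}")
print(f"sheet eps 4.0 with 20% holes -> {effective_eps(4.0, 0.2):.2f}")
```

    This is why laser-cut subwavelength holes work as Wang describes: mixing in air tunes a commercial substrate down toward the geometric-mean value that minimizes reflection.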
    Moreover, since the low-cost chip can be fabricated at scale, it could be integrated into real-world electronic devices more readily.

    One of the biggest challenges of developing a scalable chip was determining how to manage the power and temperature when generating terahertz waves.

    “Because the frequency and the power are so high, many of the standard ways to design a CMOS chip are not applicable here,” Wang says.

    The researchers also needed to devise a technique for installing the matching sheet that could be scaled up in a manufacturing facility.

    Moving forward, they want to demonstrate this scalability by fabricating a phased array of CMOS terahertz sources, enabling them to steer and focus a powerful terahertz beam with a low-cost, compact device.

    This research is supported, in part, by NASA’s Jet Propulsion Laboratory and Strategic University Research Partnerships Program, as well as the MIT Center for Integrated Circuits and Systems. The chip was fabricated through the Intel University Shuttle Program.

  • in

    Rooftop panels, EV chargers, and smart thermostats could chip in to boost power grid resilience

    There’s a lot of untapped potential in our homes and vehicles that could be harnessed to reinforce local power grids and make them more resilient to unforeseen outages, a new study shows.

    In response to a cyber attack or natural disaster, a backup network of decentralized devices — such as residential solar panels, batteries, electric vehicles, heat pumps, and water heaters — could restore electricity or relieve stress on the grid, MIT engineers say.

    Such devices are “grid-edge” resources found close to the consumer rather than near central power plants, substations, or transmission lines. Grid-edge devices can independently generate, store, or tune their consumption of power. In their study, the research team shows how such devices could one day be called upon to either pump power into the grid, or rebalance it by dialing down or delaying their power use.

    In a paper appearing this week in the Proceedings of the National Academy of Sciences, the engineers present a blueprint for how grid-edge devices could reinforce the power grid through a “local electricity market.” Owners of grid-edge devices could subscribe to a regional market and essentially loan out their device to be part of a microgrid or a local network of on-call energy resources.

    In the event that the main power grid is compromised, an algorithm developed by the researchers would kick in for each local electricity market, to quickly determine which devices in the network are trustworthy. The algorithm would then identify the combination of trustworthy devices that would most effectively mitigate the power failure, by either pumping power into the grid or reducing the power they draw from it, by an amount that the algorithm would calculate and communicate to the relevant subscribers.
    The subscribers could then be compensated through the market, depending on their participation.

    The team illustrated this new framework through a number of grid attack scenarios, in which they considered failures at different levels of a power grid, from various sources such as a cyber attack or a natural disaster. Applying their algorithm, they showed that various networks of grid-edge devices were able to neutralize the attacks.

    The results demonstrate that grid-edge devices such as rooftop solar panels, EV chargers, batteries, and smart thermostats (for HVAC devices or heat pumps) could be tapped to stabilize the power grid in the event of an attack.

    “All these small devices can do their little bit in terms of adjusting their consumption,” says study co-author Anu Annaswamy, a research scientist in MIT’s Department of Mechanical Engineering. “If we can harness our smart dishwashers, rooftop panels, and EVs, and put our combined shoulders to the wheel, we can really have a resilient grid.”

    The study’s MIT co-authors include lead author Vineet Nair and John Williams, along with collaborators from multiple institutions including the Indian Institute of Technology, the National Renewable Energy Laboratory, and elsewhere.

    Power boost

    The team’s study is an extension of their broader work in adaptive control theory and designing systems to automatically adapt to changing conditions. Annaswamy, who leads the Active-Adaptive Control Laboratory at MIT, explores ways to boost the reliability of renewable energy sources such as solar power.

    “These renewables come with a strong temporal signature, in that we know for sure the sun will set every day, so the solar power will go away,” Annaswamy says.
    “How do you make up for the shortfall?”

    The researchers found the answer could lie in the many grid-edge devices that consumers are increasingly installing in their own homes.

    “There are lots of distributed energy resources that are coming up now, closer to the customer rather than near large power plants, and it’s mainly because of individual efforts to decarbonize,” Nair says. “So you have all this capability at the grid edge. Surely we should be able to put them to good use.”

    While considering ways to deal with drops in energy from the normal operation of renewable sources, the team also began to look into other causes of power dips, such as from cyber attacks. They wondered, in these malicious instances, whether and how the same grid-edge devices could step in to stabilize the grid following an unforeseen, targeted attack.

    Attack mode

    In their new work, Annaswamy, Nair, and their colleagues developed a framework for incorporating grid-edge devices, and in particular, internet-of-things (IoT) devices, in a way that would support the larger grid in the event of an attack or disruption. IoT devices are physical objects that contain sensors and software that connect to the internet.

    For their new framework, named EUREICA (Efficient, Ultra-REsilient, IoT-Coordinated Assets), the researchers start with the assumption that one day, most grid-edge devices will also be IoT devices, enabling rooftop panels, EV chargers, and smart thermostats to wirelessly connect to a larger network of similarly independent and distributed devices. The team envisions that for a given region, such as a community of 1,000 homes, there exists a certain number of IoT devices that could potentially be enlisted in the region’s local network, or microgrid.
    Such a network would be managed by an operator, who would be able to communicate with operators of other nearby microgrids.

    If the main power grid is compromised or attacked, operators would run the researchers’ decision-making algorithm to determine trustworthy devices within the network that can pitch in to help mitigate the attack.

    The team tested the algorithm on a number of scenarios, such as a cyber attack in which all smart thermostats made by a certain manufacturer are hacked to raise their setpoints simultaneously to a degree that dramatically alters a region’s energy load and destabilizes the grid. The researchers also considered attacks and weather events that would shut off the transmission of energy at various levels and nodes throughout a power grid.

    “In our attacks we consider between 5 and 40 percent of the power being lost. We assume some nodes are attacked, and some are still available and have some IoT resources, whether a battery with energy available or an EV or HVAC device that’s controllable,” Nair explains. “So, our algorithm decides which of those houses can step in to either provide extra power generation to inject into the grid or reduce their demand to meet the shortfall.”

    In every scenario that they tested, the team found that the algorithm was able to successfully restabilize the grid and mitigate the attack or power failure. They acknowledge that to put in place such a network of grid-edge devices will require buy-in from customers, policymakers, and local officials, as well as innovations such as advanced power inverters that enable EVs to inject power back into the grid.

    “This is just the first of many steps that have to happen in quick succession for this idea of local electricity markets to be implemented and expanded upon,” Annaswamy says. “But we believe it’s a good start.”

    This work was supported, in part, by the U.S. Department of Energy and the MIT Energy Initiative.
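    The selection step Nair describes, choosing which trusted devices inject power or shed load to cover a shortfall, can be illustrated with a toy greedy allocator. This is not the EUREICA algorithm itself (the article does not detail it, or how trust is established); the device names and capacities below are invented:

```python
# Toy illustration (not the EUREICA algorithm) of covering a power
# shortfall with trustworthy grid-edge devices. Each device offers
# some flexible capacity in kW, either by injecting power (battery,
# rooftop solar) or by reducing demand (HVAC, EV charger).
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    capacity_kw: float   # flexible capacity this device can contribute
    trusted: bool        # flagged by a separate (out-of-scope) trust check

def cover_shortfall(devices, shortfall_kw):
    """Greedily enlist trusted devices, largest capacity first,
    until the shortfall is met. Returns (chosen devices, total kW)."""
    chosen, total = [], 0.0
    for dev in sorted((d for d in devices if d.trusted),
                      key=lambda d: d.capacity_kw, reverse=True):
        if total >= shortfall_kw:
            break
        chosen.append(dev)
        total += dev.capacity_kw
    return chosen, total

fleet = [
    Device("battery_A", 5.0, trusted=True),
    Device("ev_charger_B", 7.0, trusted=True),
    Device("thermostat_C", 1.5, trusted=False),  # e.g. a hacked model, excluded
    Device("heat_pump_D", 3.0, trusted=True),
]

chosen, total = cover_shortfall(fleet, shortfall_kw=10.0)
print([d.name for d in chosen], total)  # enlists the two largest trusted devices
```

    A real market operator would also weigh compensation, device availability windows, and grid topology, which is what makes the paper's optimization harder than this sketch.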