More stories

  • MIT researchers remotely map crops, field by field

    Crop maps help scientists and policymakers track global food supplies and estimate how they might shift with climate change and growing populations. But getting accurate maps of the types of crops that are grown from farm to farm often requires on-the-ground surveys that only a handful of countries have the resources to maintain.

    Now, MIT engineers have developed a method to quickly and accurately label and map crop types without requiring in-person assessments of every single farm. The team’s method uses a combination of Google Street View images, machine learning, and satellite data to automatically determine the crops grown throughout a region, from one fraction of an acre to the next. 

    The researchers used the technique to automatically generate the first nationwide crop map of Thailand — a smallholder country where small, independent farms are the predominant form of agriculture. The team created a border-to-border map of Thailand’s four major crops — rice, cassava, sugarcane, and maize — determining which of the four was grown at every 10-meter interval, without gaps, across the entire country. The resulting map achieved an accuracy of 93 percent, which the researchers say is comparable to on-the-ground mapping efforts in high-income, big-farm countries.

    The team is applying its mapping technique to other countries such as India, where small farms sustain most of the population but the types of crops grown from farm to farm have historically been poorly recorded.

    “It’s a longstanding gap in knowledge about what is grown around the world,” says Sherrie Wang, the d’Arbeloff Career Development Assistant Professor in MIT’s Department of Mechanical Engineering, and the Institute for Data, Systems, and Society (IDSS). “The final goal is to understand agricultural outcomes like yield, and how to farm more sustainably. One of the key preliminary steps is to map what is even being grown — the more granularly you can map, the more questions you can answer.”

    Wang, along with MIT graduate student Jordi Laguarta Soler and Thomas Friedel of the agtech company PEAT GmbH, will present a paper detailing their mapping method later this month at the AAAI Conference on Artificial Intelligence.

    Ground truth

    Smallholder farms are often run by a single family or farmer who subsists on the crops and livestock they raise. It’s estimated that smallholder farms support two-thirds of the world’s rural population and produce 80 percent of the world’s food. Keeping tabs on what is grown and where is essential to tracking and forecasting food supplies around the world. But the majority of these small farms are in low- to middle-income countries, where few resources are devoted to keeping track of individual farms’ crop types and yields.

    Crop mapping efforts are mainly carried out in high-income regions such as the United States and Europe, where government agricultural agencies oversee crop surveys and send assessors to farms to label crops from field to field. These “ground truth” labels are then fed into machine-learning models that make connections between the ground labels of actual crops and satellite signals of the same fields. They then label and map wider swaths of farmland that assessors don’t cover but that satellites automatically do.

    “What’s lacking in low- and middle-income countries is this ground label that we can associate with satellite signals,” Laguarta Soler says. “Getting these ground truths to train a model in the first place has been limited in most of the world.”

    The team realized that, while many developing countries do not have the resources to maintain crop surveys, they could potentially use another source of ground data: roadside imagery, captured by services such as Google Street View and Mapillary, which send cars throughout a region to take continuous 360-degree images with dashcams and rooftop cameras.

    In recent years, such services have expanded into low- and middle-income countries. While the goal of these services is not specifically to capture images of crops, the MIT team saw that they could search the roadside images to identify crops.

    Cropped image

    In their new study, the researchers worked with Google Street View (GSV) images taken throughout Thailand — a country that the service has recently imaged fairly thoroughly, and which consists predominantly of smallholder farms.

    Starting with over 200,000 GSV images randomly sampled across Thailand, the team filtered out images that depicted buildings, trees, and general vegetation, leaving about 81,000 crop-related images. They set aside 2,000 of these, which they sent to an agronomist, who identified and labeled each crop type by eye. They then trained a convolutional neural network to automatically generate crop labels for the other 79,000 images, drawing on various training resources, including iNaturalist, a web-based crowdsourced biodiversity database, and GPT-4V, a “multimodal large language model” that enables a user to input an image and ask the model to identify what the image is depicting. For each of the 81,000 images, the model generated a label of one of four crops that the image was likely depicting — rice, maize, sugarcane, or cassava.
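
    To make this image-labeling stage concrete, here is a minimal sketch of how a pretrained network might be fine-tuned to assign one of the four crop labels to a roadside photo. The backbone, preprocessing, and hyperparameters below are illustrative assumptions, not the paper’s actual setup.

    ```python
    # Sketch of the roadside-image labeling stage. The model choice and
    # preprocessing are assumptions for illustration only.
    import torch
    import torch.nn as nn
    from torchvision import models, transforms

    CROPS = ["rice", "maize", "sugarcane", "cassava"]

    # Fine-tune a pretrained backbone on the ~2,000 expert-labeled images,
    # replacing its final layer with a four-way crop classifier.
    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, len(CROPS))
    model.eval()  # inference mode after fine-tuning

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    def label_image(img):
        """Return a crop label for one street-view image (a PIL.Image)."""
        x = preprocess(img).unsqueeze(0)  # shape: (1, 3, 224, 224)
        with torch.no_grad():
            logits = model(x)
        return CROPS[int(logits.argmax(dim=1))]
    ```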

    The researchers then paired each labeled image with the corresponding satellite data taken of the same location throughout a single growing season. These satellite data include measurements across multiple wavelengths, such as a location’s greenness and its reflectivity (which can be a sign of water). 

    “Each type of crop has a certain signature across these different bands, which changes throughout a growing season,” Laguarta Soler notes.

    The team trained a second model to make associations between a location’s satellite data and its corresponding crop label. They then used this model to process satellite data taken over the rest of the country, where crop labels were not generated or available. From the associations the model learned, it assigned crop labels across Thailand, generating a country-wide map of crop types at a resolution of 10 meters.
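
    A minimal sketch of this second stage appears below, with a random-forest classifier standing in for the study’s model (which this article does not specify). Each row of `X` is a hypothetical pixel’s band measurements stacked over a growing season, and `y` holds the street-view-derived labels; the data here are random placeholders.

    ```python
    # Sketch of the satellite-to-label stage. Classifier choice, feature
    # layout, and all data below are illustrative assumptions.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    n_pixels, n_timesteps, n_bands = 1000, 12, 4  # illustrative sizes

    # Placeholder reflectance time series and image-derived crop labels.
    X = rng.random((n_pixels, n_timesteps * n_bands))
    y = rng.integers(0, 4, n_pixels)  # 0..3 -> rice, maize, sugarcane, cassava

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X, y)

    # Applied to every 10-meter pixel, this yields the wall-to-wall map.
    unlabeled_pixels = rng.random((5, n_timesteps * n_bands))
    print(clf.predict(unlabeled_pixels))
    ```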

    This first-of-its-kind crop map included locations corresponding to the 2,000 GSV images that the researchers originally set aside and that the agronomist had labeled by eye. These human-labeled images were used to validate the map’s labels: when the team checked whether the map’s labels matched the expert “gold standard” labels, they agreed 93 percent of the time.

    “In the U.S., we’re also looking at over 90 percent accuracy, whereas with previous work in India, we’ve only seen 75 percent because ground labels are limited,” Wang says. “Now we can create these labels in a cheap and automated way.”

    The researchers are moving to map crops across India, where roadside images via Google Street View and other services have recently become available.

    “There are over 150 million smallholder farmers in India,” Wang says. “India is covered in agriculture, almost wall-to-wall farms, but very small farms, and historically it’s been very difficult to create maps of India because there are very sparse ground labels.”

    The team is working to generate crop maps in India, which could be used to inform policies having to do with assessing and bolstering yields, as global temperatures and populations rise.

    “What would be interesting would be to create these maps over time,” Wang says. “Then you could start to see trends, and we can try to relate those things to anything like changes in climate and policies.”

  • Researchers release open-source space debris model

    MIT’s Astrodynamics, Space Robotics, and Controls Laboratory (ARCLab) announced the public beta release of the MIT Orbital Capacity Assessment Tool (MOCAT) during the 2023 Organization for Economic Cooperation and Development (OECD) Space Forum Workshop on Dec. 14. MOCAT enables users to model the long-term future space environment to understand growth in space debris and assess the effectiveness of debris-prevention mechanisms.

    With the escalating congestion in low Earth orbit, driven by a surge in satellite deployments, the risk of collisions and space debris proliferation is a pressing concern. Conducting thorough space environment studies is critical for developing effective strategies for fostering responsible and sustainable use of space resources. 

    MOCAT stands out among orbital modeling tools for its capability to model individual objects, diverse parameters, orbital characteristics, fragmentation scenarios, and collision probabilities. With the ability to differentiate between object categories, generalize parameters, and offer multi-fidelity computations, MOCAT emerges as a versatile and powerful tool for comprehensive space environment analysis and management.

    MOCAT is intended to provide an open-source tool to empower stakeholders including satellite operators, regulators, and members of the public to make data-driven decisions. The ARCLab team has been developing these models for the last several years, recognizing that the lack of open-source implementation of evolutionary modeling tools limits stakeholders’ ability to develop consensus on actions to help improve space sustainability. This beta release is intended to allow users to experiment with the tool and provide feedback to help guide further development.

    Richard Linares, the principal investigator for MOCAT and an MIT associate professor of aeronautics and astronautics, expresses excitement about the tool’s potential impact: “MOCAT represents a significant leap forward in orbital capacity assessment. By making it open-source and publicly available, we hope to engage the global community in advancing our understanding of satellite orbits and contributing to the sustainable use of space.”

    MOCAT consists of two main components. MOCAT-MC evaluates space environment evolution with individual trajectory simulation and Monte Carlo parameter analysis, providing both a high-level view of the environment and a high-fidelity analysis of how individual space objects evolve. The MOCAT Source-Sink Evolutionary Model (MOCAT-SSEM), meanwhile, uses a lower-fidelity modeling approach that can run on personal computers within seconds to minutes. MOCAT-MC and MOCAT-SSEM can be accessed separately via GitHub.
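
    To give a feel for what a source-sink evolutionary model does, the toy sketch below evolves satellite and debris counts under launches, decay, and collision-generated fragments. It is not MOCAT’s actual formulation; every rate constant is invented for illustration.

    ```python
    # Toy source-sink debris model in the spirit of an evolutionary tool
    # like MOCAT-SSEM. All parameters are invented for illustration.
    years = 50
    launch_rate = 1200.0           # new satellites per year (assumed)
    decay_rate = 0.05              # fraction deorbiting per year (assumed)
    collision_coeff = 1e-8         # collisions per object pair per year (assumed)
    fragments_per_collision = 100  # debris created per collision (assumed)

    sats, debris = 8000.0, 30000.0
    for _ in range(years):
        collisions = collision_coeff * sats * debris
        sats += launch_rate - decay_rate * sats - collisions
        debris += fragments_per_collision * collisions - decay_rate * debris

    print(f"after {years} years: ~{sats:.0f} satellites, ~{debris:.0f} debris")
    ```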

    MOCAT’s initial development has been supported by the Defense Advanced Research Projects Agency (DARPA) and NASA’s Office of Technology and Strategy.

    “We are thrilled to support this groundbreaking orbital debris modeling work and the new knowledge it created,” says Charity Weeden, associate administrator for the Office of Technology, Policy, and Strategy at NASA headquarters in Washington. “This open-source modeling tool is a public good that will advance space sustainability, improve evidence-based policy analysis, and help all users of space make better decisions.”

  • Study: The ocean’s color is changing as a consequence of climate change

    The ocean’s color has changed significantly over the last 20 years, and the global trend is likely a consequence of human-induced climate change, report scientists at MIT, the National Oceanography Center in the U.K., and elsewhere.  

    In a study appearing today in Nature, the team writes that they have detected changes in ocean color over the past two decades that cannot be explained by natural, year-to-year variability alone. These color shifts, though subtle to the human eye, have occurred over 56 percent of the world’s oceans — an expanse that is larger than the total land area on Earth.

    In particular, the researchers found that tropical ocean regions near the equator have become steadily greener over time. The shift in ocean color indicates that ecosystems within the surface ocean must also be changing, as the color of the ocean is a literal reflection of the organisms and materials in its waters.

    At this point, the researchers cannot say how exactly marine ecosystems are changing to reflect the shifting color. But they are pretty sure of one thing: Human-induced climate change is likely the driver.

    “I’ve been running simulations that have been telling me for years that these changes in ocean color are going to happen,” says study co-author Stephanie Dutkiewicz, senior research scientist in MIT’s Department of Earth, Atmospheric and Planetary Sciences and the Center for Global Change Science. “To actually see it happening for real is not surprising, but frightening. And these changes are consistent with man-induced changes to our climate.”

    “This gives additional evidence of how human activities are affecting life on Earth over a huge spatial extent,” adds lead author B. B. Cael PhD ’19 of the National Oceanography Center in Southampton, U.K. “It’s another way that humans are affecting the biosphere.”

    The study’s co-authors also include Stephanie Henson of the National Oceanography Center, Kelsey Bisson at Oregon State University, and Emmanuel Boss of the University of Maine.

    Above the noise

    The ocean’s color is a visual product of whatever lies within its upper layers. Generally, waters that are deep blue reflect very little life, whereas greener waters indicate the presence of ecosystems, mainly phytoplankton — plant-like microbes that are abundant in the upper ocean and that contain the green pigment chlorophyll. The pigment helps plankton harvest sunlight, which they use to capture carbon dioxide from the atmosphere and convert it into sugars.

    Phytoplankton are the foundation of the marine food web that sustains progressively more complex organisms, on up to krill, fish, seabirds, and marine mammals. Phytoplankton are also a powerful muscle in the ocean’s ability to capture and store carbon dioxide. Scientists are therefore keen to monitor phytoplankton across the surface oceans and to see how these essential communities might respond to climate change. To do so, scientists have tracked changes in chlorophyll, based on the ratio of how much blue versus green light is reflected from the ocean surface, which can be monitored from space.
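
    As a rough illustration of that blue-versus-green approach, the sketch below turns two reflectance values into a chlorophyll estimate with a polynomial-in-log band ratio. The coefficients are placeholders, not the operational ocean-color algorithm values.

    ```python
    # Band-ratio chlorophyll sketch: greener water (lower blue/green
    # ratio) maps to higher chlorophyll. Coefficients are placeholders.
    import numpy as np

    def chlorophyll_from_ratio(r_blue, r_green, coeffs=(0.3, -2.7, 1.5)):
        """Estimate chlorophyll (mg per cubic meter) from reflectances."""
        x = np.log10(r_blue / r_green)
        a0, a1, a2 = coeffs
        return 10.0 ** (a0 + a1 * x + a2 * x**2)

    print(chlorophyll_from_ratio(0.008, 0.010))  # greener water, higher value
    ```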

    But around a decade ago, Henson, a co-author of the current study, published a paper with others showing that, if scientists were tracking chlorophyll alone, it would take at least 30 years of continuous monitoring to detect any trend driven specifically by climate change. The reason, the team argued, was that the large, natural variations in chlorophyll from year to year would overwhelm any anthropogenic influence on chlorophyll concentrations. It would therefore take several decades to pick out a meaningful, climate-change-driven signal amid the normal noise.

    In 2019, Dutkiewicz and her colleagues published a separate paper, showing through a new model that the natural variation in other ocean colors is much smaller compared to that of chlorophyll. Therefore, any signal of climate-change-driven changes should be easier to detect over the smaller, normal variations of other ocean colors. They predicted that such changes should be apparent within 20, rather than 30 years of monitoring.

    “So I thought, doesn’t it make sense to look for a trend in all these other colors, rather than in chlorophyll alone?” Cael says. “It’s worth looking at the whole spectrum, rather than just trying to estimate one number from bits of the spectrum.”

    The power of seven

    In the current study, Cael and the team analyzed measurements of ocean color taken by the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard the Aqua satellite, which has been monitoring ocean color for 21 years. MODIS takes measurements in seven visible wavelengths, including the two colors researchers traditionally use to estimate chlorophyll.

    The differences in color that the satellite picks up are too subtle for human eyes to differentiate. Much of the ocean appears blue to our eye, whereas the true color may contain a mix of subtler wavelengths, from blue to green and even red.

    Cael carried out a statistical analysis using all seven ocean colors measured by the satellite from 2002 to 2022 together. He first looked at how much the seven colors changed from region to region during a given year, which gave him an idea of their natural variations. He then zoomed out to see how these annual variations in ocean color changed over a longer stretch of two decades. This analysis turned up a clear trend, above the normal year-to-year variability.
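
    The flavor of that analysis can be sketched as follows: for each band, fit a linear trend to the 21 annual values and compare the total change it implies to the year-to-year scatter. The data below are synthetic, with a small drift injected so there is a trend to find.

    ```python
    # Trend-versus-noise sketch for one ocean pixel across seven bands.
    # Synthetic data; magnitudes are illustrative.
    import numpy as np

    rng = np.random.default_rng(1)
    years = np.arange(2002, 2023)
    n_bands = 7

    reflectance = rng.normal(0.01, 0.001, (len(years), n_bands))
    reflectance += 2e-4 * (years - years[0])[:, None]  # injected drift

    for band in range(n_bands):
        coeffs = np.polyfit(years, reflectance[:, band], 1)
        resid = reflectance[:, band] - np.polyval(coeffs, years)
        # Total change over the record, relative to interannual scatter.
        ratio = abs(coeffs[0]) * (years[-1] - years[0]) / resid.std()
        print(f"band {band}: trend-to-noise ratio = {ratio:.1f}")
    ```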

    To see whether this trend is related to climate change, he then looked to Dutkiewicz’s model from 2019. This model simulated the Earth’s oceans under two scenarios: one with the addition of greenhouse gases, and the other without them. The greenhouse-gas model predicted that a significant trend should show up within 20 years and that this trend should cause changes to ocean color in about 50 percent of the world’s surface oceans — almost exactly what Cael found in his analysis of real-world satellite data.

    “This suggests that the trends we observe are not a random variation in the Earth system,” Cael says. “This is consistent with anthropogenic climate change.”

    The team’s results show that monitoring ocean colors beyond chlorophyll could give scientists a clearer, faster way to detect climate-change-driven changes to marine ecosystems.

    “The color of the oceans has changed,” Dutkiewicz says. “And we can’t say how. But we can say that changes in color reflect changes in plankton communities that will impact everything that feeds on plankton. It will also change how much the ocean will take up carbon, because different types of plankton have different abilities to do that. So, we hope people take this seriously. It’s not only models that are predicting these changes will happen. We can now see it happening, and the ocean is changing.”

    This research was supported, in part, by NASA.

  • Studying rivers from worlds away

    Rivers have flowed on two other worlds in the solar system besides Earth: Mars, where dry tracks and craters are all that’s left of ancient rivers and lakes, and Titan, Saturn’s largest moon, where rivers of liquid methane still flow today.

    A new technique developed by MIT geologists allows scientists to see how intensely rivers used to flow on Mars, and how they currently flow on Titan. The method uses satellite observations to estimate the rate at which rivers move fluid and sediment downstream.

    Applying their new technique, the MIT team calculated how fast and deep rivers were in certain regions on Mars more than 1 billion years ago. They also made similar estimates for currently active rivers on Titan, even though the moon’s thick atmosphere and distance from Earth make it harder to explore, with far fewer available images of its surface than those of Mars.

    “What’s exciting about Titan is that it’s active. With this technique, we have a method to make real predictions for a place where we won’t get more data for a long time,” says Taylor Perron, the Cecil and Ida Green Professor in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “And on Mars, it gives us a time machine, to take the rivers that are dead now and get a sense of what they were like when they were actively flowing.”

    Perron and his colleagues have published their results today in the Proceedings of the National Academy of Sciences. Perron’s MIT co-authors are first author Samuel Birch, Paul Corlies, and Jason Soderblom, with Rose Palermo and Andrew Ashton of the Woods Hole Oceanographic Institution (WHOI), Gary Parker of the University of Illinois at Urbana-Champaign, and collaborators from the University of California at Los Angeles, Yale University, and Cornell University.

    River math

    The team’s study grew out of Perron and Birch’s puzzlement over Titan’s rivers. Images taken by NASA’s Cassini spacecraft have shown a curious lack of fan-shaped deltas at the mouths of most of the moon’s rivers, unlike many rivers on Earth. Could it be that Titan’s rivers don’t carry enough flow or sediment to build deltas?

    The group built on the work of co-author Gary Parker, who in the 2000s developed a series of mathematical equations to describe river flow on Earth. Parker had studied measurements of rivers taken directly in the field by others. From these data, he found there were certain universal relationships between a river’s physical dimensions — its width, depth, and slope — and the rate at which it flowed. He drew up equations to describe these relationships mathematically, accounting for other variables such as the gravitational field acting on the river, and the size and density of the sediment being pushed along a river’s bed.

    “This means that rivers with different gravity and materials should follow similar relationships,” Perron says. “That opened up a possibility to apply this to other planets too.”

    Getting a glimpse

    On Earth, geologists can make field measurements of a river’s width, slope, and average sediment size, all of which can be fed into Parker’s equations to accurately predict a river’s flow rate, or how much water and sediment it can move downstream. But for rivers on other planets, measurements are more limited, and largely based on images and elevation measurements collected by remote satellites. For Mars, multiple orbiters have taken high-resolution images of the planet. For Titan, views are few and far between.

    Birch realized that any estimate of river flow on Mars or Titan would have to be based on the few characteristics that can be measured from remote images and topography — namely, a river’s width and slope. With some algebraic tinkering, he adapted Parker’s equations to work only with width and slope inputs. He then assembled data from 491 rivers on Earth, tested the modified equations on these rivers, and found that the predictions based solely on each river’s width and slope were accurate.
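
    To give a feel for such width-and-slope relations, here is a toy power-law discharge estimate. The constants and exponents are invented for illustration and are not the equations of Parker or the new study; gravity enters explicitly so the same form could be evaluated for Earth, Mars, or Titan.

    ```python
    # Toy hydraulic-geometry relation: discharge from width and slope.
    # All constants and exponents below are made up for illustration.
    def discharge_from_width_slope(width_m, slope, gravity=9.81,
                                   k=0.1, a=1.5, b=-0.3):
        """Rough discharge estimate (cubic meters per second)."""
        return k * gravity**0.5 * width_m**a * slope**b

    # Earth example: a 200-meter-wide river on a gentle 1e-4 slope.
    print(f"{discharge_from_width_slope(200.0, 1e-4):.0f} m^3/s")

    # Titan example: same channel geometry under Titan's weaker gravity.
    print(f"{discharge_from_width_slope(200.0, 1e-4, gravity=1.35):.0f} m^3/s")
    ```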

    Then, he applied the equations to Mars, and specifically, to the ancient rivers leading into Gale and Jezero Craters, both of which are thought to have been water-filled lakes billions of years ago. To predict the flow rate of each river, he plugged into the equations Mars’ gravity, and estimates of each river’s width and slope, based on images and elevation measurements taken by orbiting satellites.

    From their predictions of flow rate, the team found that rivers likely flowed for at least 100,000 years at Gale Crater and at least 1 million years at Jezero Crater — long enough to have possibly supported life. They were also able to compare their predictions of the average size of sediment on each river’s bed with actual field measurements of Martian grains near each river, taken by NASA’s Curiosity and Perseverance rovers. These few field measurements allowed the team to check that their equations, applied on Mars, were accurate.

    The team then took their approach to Titan. They zeroed in on two locations where river slopes can be measured, including a river that flows into a lake the size of Lake Ontario. This river appears to form a delta as it feeds into the lake. However, the delta is one of only a few thought to exist on the moon — nearly every viewable river flowing into a lake mysteriously lacks a delta. The team also applied their method to one of these other delta-less rivers.

    They calculated both rivers’ flow and found that it may be comparable to that of some of the biggest rivers on Earth; the delta-forming river has an estimated flow rate as large as the Mississippi’s. Both rivers should move enough sediment to build up deltas. Yet most rivers on Titan lack the fan-shaped deposits. Something else must be at work to explain the absence of these deposits.

    In another finding, the team calculated that rivers on Titan should be wider and have a gentler slope than rivers carrying the same flow on Earth or Mars. “Titan is the most Earth-like place,” Birch says. “We’ve only gotten a glimpse of it. There’s so much more that we know is down there, and this remote technique is pushing us a little closer.”

    This research was supported, in part, by NASA and the Heising-Simons Foundation.

  • Detailed images from space offer clearer picture of drought effects on plants

    “MIT is a place where dreams come true,” says César Terrer, an assistant professor in the Department of Civil and Environmental Engineering. Here at MIT, Terrer says he’s given the resources needed to explore ideas he finds most exciting, and at the top of his list is climate science. In particular, he is interested in plant-soil interactions, and how the two can mitigate impacts of climate change. In 2022, Terrer received seed grant funding from the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) to produce drought monitoring systems for farmers. The project is leveraging a new generation of remote sensing devices to provide high-resolution measurements of plant water stress at regional to global scales.

    Growing up in Granada, Spain, Terrer always had an aptitude and passion for science. He studied environmental science at the University of Murcia, where he interned in the Department of Ecology. Using computational analysis tools, he worked on modeling species distribution in response to human development. Early on in his undergraduate experience, Terrer says he regarded his professors as “superheroes” with a kind of scholarly prowess. He knew he wanted to follow in their footsteps by one day working as a faculty member in academia. Of course, there would be many steps along the way before achieving that dream. 

    Upon completing his undergraduate studies, Terrer set his sights on exciting and adventurous research roles. He thought perhaps he would conduct field work in the Amazon, engaging with native communities. But when the opportunity arose to work in Australia on a state-of-the-art climate change experiment that simulates future levels of carbon dioxide, he headed south to study how plants react to CO2 in a biome of native Australian eucalyptus trees. It was during this experience that Terrer started to take a keen interest in the carbon cycle and the capacity of ecosystems to buffer rising levels of CO2 caused by human activity.

    Around 2014, he delved deeper into the carbon cycle as he began his doctoral studies at Imperial College London. The primary question Terrer sought to answer during his PhD was “will plants be able to absorb predicted future levels of CO2 in the atmosphere?” To answer it, Terrer became an early adopter of artificial intelligence, machine learning, and remote sensing to analyze data from real-life, global climate change experiments. His findings from these “ground truth” values and observations resulted in a paper in the journal Science, in which he reported that climate models most likely overestimated, by a factor of three, how much carbon plants will be able to absorb by the end of the century.

    After postdoctoral positions at Stanford University and the Universitat Autonoma de Barcelona, followed by a prestigious Lawrence Fellowship, Terrer says he had “too many ideas and not enough time to accomplish all those ideas.” He knew it was time to lead his own group. Not long after applying for faculty positions, he landed at MIT. 

    New ways to monitor drought

    Terrer is employing similar methods to those he used during his PhD to analyze data from all over the world for his J-WAFS project. He and postdoc Wenzhe Jiao collect data from remote sensing satellites and field experiments and use machine learning to come up with new ways to monitor drought. Terrer says Jiao is a “remote sensing wizard,” who fuses data from different satellite products to understand the water cycle. With Jiao’s hydrology expertise and Terrer’s knowledge of plants, soil, and the carbon cycle, the duo is a formidable team to tackle this project.

    According to the U.N. World Meteorological Organization, the number and duration of droughts have increased by 29 percent since 2000, compared with the two previous decades. From the Horn of Africa to the Western United States, drought is devastating vegetation and severely stressing water supplies, compromising food production and spiking food insecurity. Drought monitoring can offer fundamental information on drought location, frequency, and severity, but assessing the impact of drought on vegetation is extremely challenging. This is because plants’ sensitivity to water deficits varies across species and ecosystems.

    Terrer and Jiao are able to obtain a clearer picture of how drought is affecting plants by employing the latest generation of remote sensing observations, which offer images of the planet with incredible spatial and temporal resolution. Satellite products such as Sentinel, Landsat, and Planet can provide daily images from space with such high resolution that individual trees can be discerned. Along with the images and datasets from satellites, the team is using ground-based observations from meteorological data. They are also using the MIT SuperCloud at MIT Lincoln Laboratory to process and analyze all of the data sets. The J-WAFS project is among the first to leverage high-resolution data to quantitatively measure plant drought impacts in the United States, with the hope of expanding to a global assessment in the future.

    Assisting farmers and resource managers 

    Every week, the U.S. Drought Monitor provides a map of drought conditions in the United States. But the map is coarse, more of a drought recap or summary than a forecast, and it cannot predict future drought scenarios. The lack of a comprehensive spatiotemporal evaluation of historic and future drought impacts on global vegetation productivity is detrimental to farmers both in the United States and worldwide.

    Terrer and Jiao plan to generate metrics for plant water stress at an unprecedented resolution of 10-30 meters. This means that they will be able to provide drought monitoring maps at the scale of a typical U.S. farm, giving farmers more precise, useful data every one to two days. The team will use the information from the satellites to monitor plant growth and soil moisture, as well as the time lag of plant growth response to soil moisture. In this way, Terrer and Jiao say they will eventually be able to create a kind of “plant water stress forecast” that may be able to predict adverse impacts of drought four weeks in advance. “According to the current soil moisture and lagged response time, we hope to predict plant water stress in the future,” says Jiao. 
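
    A minimal sketch of the lagged-response idea: find the time lag at which soil moisture best predicts a vegetation signal, then use the most recent soil moisture to anticipate stress that many days ahead. The data below are synthetic placeholders with a known 14-day lag built in.

    ```python
    # Lagged soil-moisture-to-vegetation sketch with synthetic data.
    import numpy as np

    rng = np.random.default_rng(2)
    days = 365
    soil_moisture = rng.random(days)
    true_lag = 14  # days (assumed for this synthetic example)
    vegetation = np.roll(soil_moisture, true_lag) + rng.normal(0, 0.05, days)

    def best_lag(sm, veg, max_lag=60):
        """Lag (days) that maximizes soil-moisture/vegetation correlation."""
        corrs = [np.corrcoef(sm[:-lag], veg[lag:])[0, 1]
                 for lag in range(1, max_lag)]
        return int(np.argmax(corrs)) + 1

    lag = best_lag(soil_moisture, vegetation)
    print(f"estimated response lag: {lag} days")  # ~14 with this setup
    ```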

    The expected outcomes of this project will give farmers, land and water resource managers, and decision-makers more accurate data at the farm-specific level, allowing for better drought preparation, mitigation, and adaptation. “We expect to make our data open-access online, after we finish the project, so that farmers and other stakeholders can use the maps as tools,” says Jiao. 

    Terrer adds that the project “has the potential to help us better understand the future states of climate systems, and also identify the regional hot spots more likely to experience water crises at the national, state, local, and tribal government scales.” He also expects the project will enhance our understanding of global carbon-water-energy cycle responses to drought, with applications in determining climate change impacts on natural ecosystems as a whole.

  • Study: Smoke particles from wildfires can erode the ozone layer

    A wildfire can pump smoke up into the stratosphere, where the particles drift for over a year. A new MIT study has found that while suspended there, these particles can trigger chemical reactions that erode the protective ozone layer shielding the Earth from the sun’s damaging ultraviolet radiation.

    The study, which appears today in Nature, focuses on the smoke from the “Black Summer” megafire in eastern Australia, which burned from December 2019 into January 2020. The fires — the country’s most devastating on record — scorched tens of millions of acres and pumped more than 1 million tons of smoke into the atmosphere.

    The MIT team identified a new chemical reaction by which smoke particles from the Australian wildfires made ozone depletion worse. By triggering this reaction, the fires likely contributed to a 3-5 percent depletion of total ozone at mid-latitudes in the Southern Hemisphere, in regions overlying Australia, New Zealand, and parts of Africa and South America.

    The researchers’ model also indicates the fires had an effect in the polar regions, eating away at the edges of the ozone hole over Antarctica. By late 2020, smoke particles from the Australian wildfires widened the Antarctic ozone hole by 2.5 million square kilometers — 10 percent of its area compared to the previous year.

    It’s unclear what long-term effect wildfires will have on ozone recovery. The United Nations recently reported that the ozone hole, and ozone depletion around the world, is on a recovery track, thanks to a sustained international effort to phase out ozone-depleting chemicals. But the MIT study suggests that as long as these chemicals persist in the atmosphere, large fires could spark a reaction that temporarily depletes ozone.

    “The Australian fires of 2020 were really a wake-up call for the science community,” says Susan Solomon, the Lee and Geraldine Martin Professor of Environmental Studies at MIT and a leading climate scientist who first identified the chemicals responsible for the Antarctic ozone hole. “The effect of wildfires was not previously accounted for in [projections of] ozone recovery. And I think that effect may depend on whether fires become more frequent and intense as the planet warms.”

    The study is led by Solomon and MIT research scientist Kane Stone, along with collaborators from the Institute for Environmental and Climate Research in Guangzhou, China; the U.S. National Oceanic and Atmospheric Administration; the U.S. National Center for Atmospheric Research; and Colorado State University.

    Chlorine cascade

    The new study expands on a 2022 discovery by Solomon and her colleagues, in which they first identified a chemical link between wildfires and ozone depletion. The researchers found that chlorine-containing compounds, originally emitted by factories in the form of chlorofluorocarbons (CFCs), could react with the surface of fire aerosols. This interaction, they found, set off a chemical cascade that produced chlorine monoxide — the ultimate ozone-depleting molecule. Their results showed that the Australian wildfires likely depleted ozone through this newly identified chemical reaction.

    “But that didn’t explain all the changes that were observed in the stratosphere,” Solomon says. “There was a whole bunch of chlorine-related chemistry that was totally out of whack.”

    In the new study, the team took a closer look at the composition of molecules in the stratosphere following the Australian wildfires. They combed through three independent sets of satellite data and observed that in the months following the fires, concentrations of hydrochloric acid dropped significantly at mid-latitudes, while chlorine monoxide spiked.

    Hydrochloric acid (HCl) is present in the stratosphere as CFCs break down naturally over time. As long as chlorine is bound in the form of HCl, it doesn’t have a chance to destroy ozone. But if HCl breaks apart, the freed chlorine can react with ozone to form ozone-depleting chlorine monoxide.
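
    For reference, the catalytic cycle by which a freed chlorine atom destroys ozone is well established:

    Cl + O3 → ClO + O2
    ClO + O → Cl + O2
    Net: O3 + O → 2 O2

    The chlorine atom emerges unchanged, so a single atom can destroy many ozone molecules before it is locked back up in a reservoir such as HCl.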

    In the polar regions, HCl can break apart when it interacts with the surface of cloud particles at frigid temperatures of about 155 kelvins. However, this reaction was not expected to occur at mid-latitudes, where temperatures are much warmer.

    “The fact that HCl at mid-latitudes dropped by this unprecedented amount was to me kind of a danger signal,” Solomon says.

    She wondered: What if HCl could also interact with smoke particles, at warmer temperatures and in a way that released chlorine to destroy ozone? If such a reaction was possible, it would explain the imbalance of molecules and much of the ozone depletion observed following the Australian wildfires.

    Smoky drift

    Solomon and her colleagues dug through the chemical literature to see what sort of organic molecules could react with HCl at warmer temperatures to break it apart.

    “Lo and behold, I learned that HCl is extremely soluble in a whole broad range of organic species,” Solomon says. “It likes to glom on to lots of compounds.”

    The question, then, was whether the Australian wildfires released any of those compounds that could have triggered HCl’s breakup and any subsequent depletion of ozone. When the team looked at the composition of smoke particles in the first days after the fires, the picture was anything but clear.

    “I looked at that stuff and threw up my hands and thought, there’s so much stuff in there, how am I ever going to figure this out?” Solomon recalls. “But then I realized it had actually taken some weeks before you saw the HCl drop, so you really need to look at the data on aged wildfire particles.”

    When the team expanded their search, they found that smoke particles persisted over months, circulating in the stratosphere at mid-latitudes, in the same regions and times when concentrations of HCl dropped.

    “It’s the aged smoke particles that really take up a lot of the HCl,” Solomon says. “And then you get, amazingly, the same reactions that you get in the ozone hole, but over mid-latitudes, at much warmer temperatures.”

    When the team incorporated this new chemical reaction into a model of atmospheric chemistry, and simulated the conditions of the Australian wildfires, they observed a 5 percent depletion of ozone throughout the stratosphere at mid-latitudes, and a 10 percent widening of the ozone hole over Antarctica.

    The reaction with HCl is likely the main pathway by which wildfires can deplete ozone. But Solomon guesses there may be other chlorine-containing compounds drifting in the stratosphere that wildfires could unlock.

    “There’s now sort of a race against time,” Solomon says. “Hopefully, chlorine-containing compounds will have been destroyed, before the frequency of fires increases with climate change. This is all the more reason to be vigilant about global warming and these chlorine-containing compounds.”

    This research was supported, in part, by NASA and the U.S. National Science Foundation.

  • New nanosatellite tests autonomy in space

    In May 2022, a SpaceX Falcon 9 rocket launched the Transporter-5 mission into orbit. The mission contained a collection of micro and nanosatellites from both industry and government, including one from MIT Lincoln Laboratory called the Agile MicroSat (AMS).

    AMS’s primary mission is to test automated maneuvering capabilities in the tumultuous very low-Earth orbit (VLEO) environment, starting at 525 kilometers above the surface and descending from there. VLEO is a challenging location for satellites because the higher air density, coupled with variable space weather, causes increased and unpredictable drag that requires frequent maneuvers to maintain position. Using a commercial off-the-shelf electric-ion propulsion system and custom algorithms, AMS is testing how well it can execute automated navigation and control over an initial mission period of six months.

    “AMS integrates electric propulsion and autonomous navigation and guidance control algorithms that push a lot of the operation of the thruster onto the spacecraft — somewhat like a self-driving car,” says Andrew Stimac, who is the principal investigator for the AMS program and the leader of the laboratory’s Integrated Systems and Concepts Group.

    Stimac sees AMS as a kind of pathfinder mission for the field of small satellite autonomy. Autonomy is essential to support the growing number of small satellite launches for industry and science because it can reduce the cost and labor needed to maintain them, enable missions that call for quick and impromptu responses, and help to avoid collisions in an already-crowded sky.

    AMS is the first-ever test of a nanosatellite with this type of automated maneuvering capability.

    AMS uses an electric propulsion thruster that was selected to meet the size and power constraints of a nanosatellite while providing enough thrust and endurance to enable multiyear missions that operate in VLEO. The flight software, called the Bus Hosted Onboard Software Suite, was designed to autonomously operate the thruster to change the spacecraft’s orbit. Operators on the ground can give AMS a high-level command, such as to descend to and maintain a 300-kilometer orbit, and the software will schedule thruster burns to achieve that command autonomously, using measurements from the onboard GPS receiver as feedback. This experimental software is separate from the bus flight software, which allows AMS to safely test its novel algorithms without endangering the spacecraft.
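
    As a toy illustration of that closed-loop idea (not AMS’s actual algorithms, gains, or interfaces, which are not described here), the sketch below holds a commanded altitude by scheduling burns from simulated GPS feedback.

    ```python
    # Toy station-keeping loop: descend under drag, burn when below a
    # deadband around the commanded altitude. All values are invented.
    import random

    TARGET_KM = 300.0
    DEADBAND_KM = 2.0    # tolerated drift before scheduling a burn
    BURN_RAISE_KM = 3.0  # altitude regained per burn (assumed)

    altitude_km = 305.0
    for orbit in range(20):
        altitude_km -= random.uniform(0.2, 0.8)  # drag decay (mock GPS)
        if altitude_km < TARGET_KM - DEADBAND_KM:
            altitude_km += BURN_RAISE_KM  # autonomously scheduled burn
            print(f"orbit {orbit}: burn executed, now {altitude_km:.1f} km")
    ```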

    “One of the enablers for AMS is the way in which we’ve created this software sandbox onboard the spacecraft,” says Robert Legge, who is another member of the AMS team. “We have our own hosted software that’s running on the primary flight computer, but it’s separate from the critical health and safety avionics software. Basically, you can view this as being a little development environment on the spacecraft where we can test out different algorithms.”

    AMS has two secondary missions called Camera and Beacon. Camera’s mission is to take photos and short video clips of the Earth’s surface while AMS is in different low-Earth orbit positions.

    “One of the things we’re hoping to demonstrate is the ability to respond to current events,” says Rebecca Keenan, who helped to prepare the Camera payload. “We could hear about something that happened, like a fire or flood, and then respond pretty quickly to maneuver the satellite to image it.”

    Keenan and the rest of the AMS team are collaborating with the laboratory’s DisasterSat program, which aims to improve satellite image processing pipelines to help relief agencies respond to disasters more quickly. Small satellites that could schedule operations on-demand, rather than planning them months in advance before launch, could be a great asset to disaster response efforts.

    The other payload, Beacon, is testing new adaptive optics capabilities for tracking fast-moving targets by sending laser light from the moving satellite to a ground station at the laboratory’s Haystack Observatory in Westford, Massachusetts. Enabling precise laser pointing from an agile satellite could aid many different types of space missions, such as communications and tracking space debris. It could also be used for emerging programs such as Breakthrough Starshot, which is developing a satellite that can accelerate to high speeds using a laser-propelled lightsail.

    “As far as we know, this is the first on-orbit artificial guide star that has launched for a dedicated adaptive optics purpose,” says Lulu Liu, who worked on the Beacon payload. “Theoretically, the laser it carries can be maneuvered into position on other spacecraft to support a large number of science missions in different regions of the sky.”

    The team developed Beacon with a strict budget and timeline and hope that its success will shorten the design and test loop of next-generation laser transmitter systems. “The idea is that we could have a number of these flying in the sky at once, and a ground system can point to one of them and get near-real-time feedback on its performance,” says Liu.

    AMS weighs under 12 kilograms with 6U dimensions (23 x 11 x 36 centimeters). The bus was designed by Blue Canyon Technologies and the thruster was designed by Enpulsion GmbH.

    Legge says that the AMS program was approached as an opportunity for Lincoln Laboratory to showcase its ability to conduct work in the space domain quickly and flexibly. Some major roadblocks to rapid development of new space technology have been long timelines, high costs, and the extremely low risk tolerance associated with traditional space programs. “We wanted to show that we can really do rapid prototyping and testing of space hardware and software on orbit at an affordable cost,” Legge says.

    “AMS shows the value and fast time-to-orbit afforded by teaming with rapid space commercial partners for spacecraft core bus technologies and launch and ground segment operations, while allowing the laboratory to focus on innovative mission concepts, advanced components and payloads, and algorithms and processing software,” says Dan Cousins, who is the program manager for AMS. “The AMS team appreciates the support from the laboratory’s Technology Office for allowing us to showcase an effective operating model for rapid space programs.”

    AMS took its first image on June 1, completed its thruster commissioning in July, and has begun to descend toward its target VLEO position.

  • Using seismology for groundwater management

    As climate change increases the number of extreme weather events, such as megadroughts, groundwater management is key for sustaining water supply. But current groundwater monitoring tools are either costly or insufficient for deeper aquifers, limiting our ability to monitor and practice sustainable management in populated areas.

    Now, a new paper published in Nature Communications bridges seismology and hydrology with a pilot application that uses seismometers as a cost-effective way to monitor and map groundwater fluctuations.

    “Our measurements are independent from and complementary to traditional observations,” says Shujuan Mao PhD ’21, lead author on the paper. “It provides a new way to dictate groundwater management and evaluate the impact of human activity on shaping underground hydrologic systems.”

    Mao, currently a Thompson Postdoctoral Fellow in the Geophysics department at Stanford University, conducted most of the research during her PhD in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). Other contributors to the paper include EAPS department chair and Schlumberger Professor of Earth and Planetary Sciences Robert van der Hilst, as well as Michel Campillo and Albanne Lecointre from the Institut des Sciences de la Terre in France.

    While there are a few different methods currently used for measuring groundwater, they all come with notable drawbacks. Monitoring wells, which are drilled through the ground and into the aquifers to measure hydraulic head, are expensive and can only give limited information at the specific locations where they’re placed. Noninvasive techniques based on satellite or airborne sensing lack the sensitivity and resolution needed to observe deeper depths.

    Mao proposes using seismometers, which are instruments used to measure ground vibrations such as the waves produced by earthquakes. They can measure seismic velocity, which is the propagation speed of seismic waves. Seismic velocity measurements are unique to the mechanical state of rocks, or the ways rocks respond to their physical environment, and can tell us a lot about them.

    The idea of using seismic velocity to characterize property changes in rocks has long been used in laboratory-scale analysis, but only recently have scientists been able to measure it continuously in realistic-scale geological settings. For aquifer monitoring, Mao and her team associate the seismic velocity with the hydraulic property, or the water content, in the rocks.

    Seismic velocity measurements make use of ambient seismic fields, or background noise, recorded by seismometers. “The Earth’s surface is always vibrating, whether due to ocean waves, winds, or human activities,” she explains. “Most of the time those vibrations are really small and are considered ‘noise’ by traditional seismologists. But in recent years scientists have shown that the continuous noise records in fact contain a wealth of information about the properties and structures of the Earth’s interior.”

    To extract useful information from the noise records, Mao and her team used a technique called seismic interferometry, which analyzes wave interference to calculate the seismic velocity of the medium the waves pass through. For their pilot application, Mao and her team applied this analysis to basins in the Metropolitan Los Angeles region, an area suffering from worsening drought and a growing population.
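
    The heart of that interferometric step can be shown with a toy example: cross-correlating noise recorded at two stations recovers the inter-station travel time, and tracking how that timing shifts over months reveals seismic velocity changes. The setup below, a shared noise source with a pure time shift, is a drastic simplification of real processing, which involves whitening, stacking, and more.

    ```python
    # Ambient-noise cross-correlation sketch with synthetic data.
    import numpy as np

    rng = np.random.default_rng(3)
    fs = 100  # samples per second
    noise = rng.normal(size=fs * 60)  # one minute of shared ambient noise
    delay = 150  # samples of travel time between stations (assumed)
    station_a = noise[:-delay]
    station_b = noise[delay:]  # same noise, time-shifted

    xcorr = np.correlate(station_a, station_b, mode="full")
    lag = np.argmax(xcorr) - (len(station_b) - 1)
    print(f"recovered travel time: {abs(lag) / fs:.2f} s")  # ~1.50 s
    ```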

    By doing this, Mao and her team were able to see how the aquifers changed physically over time at a high resolution. Their seismic velocity measurements verified measurements taken by hydraulic heads over the last 20 years, and the images matched very well with satellite data. They could also see differences in how the storage areas changed between counties in the area that used different water pumping practices, which is important for developing water management protocol.

    Mao also calls using the seismometers a “buy-one get-one free” deal, since seismometers are already in use for earthquake and tectonic studies not just across California, but worldwide, and could help “avoid the expensive cost of drilling and maintaining dedicated groundwater monitoring wells,” she says.

    Mao emphasizes that this study is just the beginning of exploring possible applications of seismic noise interferometry in this way. It can be used to monitor other near-surface systems, such as geothermal or volcanic systems, and Mao is currently applying it to oil and gas fields. But in places like California, which is experiencing megadrought and relies on groundwater for a large portion of its water needs, this kind of information is key for sustainable water management.

    “It’s really important, especially now, to characterize these changes in groundwater storage so that we can promote data-informed policymaking to help them thrive under increasing water stress,” she says.

    This study was funded, in part, by the European Research Council, with additional support from the Thompson Fellowship at Stanford University.