More stories

  • Liquid on Mars was not necessarily all water

    Dry river channels and lake beds on Mars point to the long-ago presence of a liquid on the planet’s surface, and the minerals observed from orbit and from landers seem to many to prove that the liquid was ordinary water. Not so fast, the authors of a new Perspectives article in Nature Geoscience suggest. Water is only one of two possible liquids under what are thought to be the conditions present on ancient Mars. The other is liquid carbon dioxide (CO2), and it may actually have been easier for CO2 in the atmosphere to condense into a liquid under those conditions than for water ice to melt. While others have suggested that liquid CO2 (LCO2) might be the source of some of the river channels seen on Mars, the mineral evidence has seemed to point uniquely to water. However, the new paper cites recent studies of carbon sequestration, the process of burying liquefied CO2 recovered from Earth’s atmosphere deep in underground caverns, which show that similar mineral alteration can occur in liquid CO2 as in water, sometimes even more rapidly.

    The new paper is led by Michael Hecht, principal investigator of the MOXIE instrument aboard the NASA Mars Rover Perseverance. Hecht, a research scientist at MIT’s Haystack Observatory and a former associate director, says, “Understanding how sufficient liquid water was able to flow on early Mars to explain the morphology and mineralogy we see today is probably the greatest unsettled question of Mars science. There is likely no one right answer, and we are merely suggesting another possible piece of the puzzle.”

    In the paper, the authors discuss the compatibility of their proposal with current knowledge of Martian atmospheric content and implications for Mars surface mineralogy. They also explore the latest carbon sequestration research and conclude that “LCO2–mineral reactions are consistent with the predominant Mars alteration products: carbonates, phyllosilicates, and sulfates.” The argument for the probable existence of liquid CO2 on the Martian surface is not an all-or-nothing scenario; either liquid CO2, liquid water, or a combination may have brought about such geomorphological and mineralogical evidence for a liquid Mars.

    Three plausible cases for liquid CO2 on the Martian surface are proposed and discussed: stable surface liquid, basal melting under CO2 ice, and subsurface reservoirs. The likelihood of each depends on the actual inventory of CO2 at the time, as well as the temperature conditions on the surface.

    The authors acknowledge that the tested sequestration conditions, where the liquid CO2 is above room temperature at pressures of tens of atmospheres, are very different from the cold, relatively low-pressure conditions that might have produced liquid CO2 on early Mars. They call for further laboratory investigations under more realistic conditions to test whether the same chemical reactions occur.

    Hecht explains, “It’s difficult to say how likely it is that this speculation about early Mars is actually true. What we can say, and we are saying, is that the likelihood is high enough that the possibility should not be ignored.”
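
    The article does not give numbers, but the physical constraint behind the proposal is the CO2 phase diagram: liquid CO2 can exist only above the triple-point pressure of roughly 5.2 bar and at temperatures between about 217 K and 304 K. The short sketch below checks those necessary conditions for a few hypothetical early-Mars surface states; the pressure and temperature values are illustrative assumptions, not figures from the paper.

    ```python
    # Back-of-the-envelope check: could CO2 be liquid at assumed early-Mars surface conditions?
    # The phase-diagram constants are standard; the surface conditions below are
    # illustrative assumptions, not values from the Nature Geoscience paper.

    CO2_TRIPLE_POINT_P_BAR = 5.2    # minimum pressure at which liquid CO2 can exist
    CO2_TRIPLE_POINT_T_K = 216.6    # about -56.6 C, minimum temperature for liquid CO2
    CO2_CRITICAL_T_K = 304.1        # above this, no distinct liquid phase

    def liquid_co2_possible(pressure_bar: float, temperature_k: float) -> bool:
        """Rough necessary conditions for liquid CO2 (ignores the exact saturation
        curve, so this is an optimistic bound rather than a full phase-diagram lookup)."""
        return (pressure_bar >= CO2_TRIPLE_POINT_P_BAR
                and CO2_TRIPLE_POINT_T_K <= temperature_k <= CO2_CRITICAL_T_K)

    # Hypothetical scenarios for a thick early-Mars CO2 atmosphere:
    for p_bar, t_k in [(1.0, 220.0), (6.0, 220.0), (6.0, 200.0)]:
        print(f"P = {p_bar} bar, T = {t_k} K -> liquid CO2 possible: {liquid_co2_possible(p_bar, t_k)}")
    ```

    The same bound also hints at why the basal-melting case is worth considering: even if the atmosphere alone falls short of the triple-point pressure, the weight of an overlying CO2 ice sheet could supply additional pressure at its base.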

  • A new catalyst can turn methane into something useful

    Although it is less abundant than carbon dioxide, methane gas contributes disproportionately to global warming because it traps more heat in the atmosphere than carbon dioxide, due to its molecular structure.

    MIT chemical engineers have now designed a new catalyst that can convert methane into useful polymers, which could help reduce greenhouse gas emissions.

    “What to do with methane has been a longstanding problem,” says Michael Strano, the Carbon P. Dubbs Professor of Chemical Engineering at MIT and the senior author of the study. “It’s a source of carbon, and we want to keep it out of the atmosphere but also turn it into something useful.”

    The new catalyst works at room temperature and atmospheric pressure, which could make it easier and more economical to deploy at sites of methane production, such as power plants and cattle barns.

    Daniel Lundberg PhD ’24 and MIT postdoc Jimin Kim are the lead authors of the study, which appears today in Nature Catalysis. Former postdoc Yu-Ming Tu and postdoc Cody Ritt are also authors of the paper.

    Capturing methane

    Methane is produced by bacteria known as methanogens, which are often highly concentrated in landfills, swamps, and other sites of decaying biomass. Agriculture is a major source of methane, and methane gas is also generated as a byproduct of transporting, storing, and burning natural gas. Overall, it is believed to account for about 15 percent of global temperature increases.

    At the molecular level, methane is made of a single carbon atom bound to four hydrogen atoms. In theory, this molecule should be a good building block for making useful products such as polymers. However, converting methane to other compounds has proven difficult because getting it to react with other molecules usually requires high temperatures and pressures.

    To achieve methane conversion without that input of energy, the MIT team designed a hybrid catalyst with two components: a zeolite and a naturally occurring enzyme. Zeolites are abundant, inexpensive clay-like minerals, and previous work has found that they can be used to catalyze the conversion of methane to carbon dioxide.

    In this study, the researchers used a zeolite called iron-modified aluminum silicate, paired with an enzyme called alcohol oxidase. Bacteria, fungi, and plants use this enzyme to oxidize alcohols.

    This hybrid catalyst performs a two-step reaction in which the zeolite converts methane to methanol, and then the enzyme converts methanol to formaldehyde. That reaction also generates hydrogen peroxide, which is fed back into the zeolite to provide a source of oxygen for the conversion of methane to methanol.

    This series of reactions can occur at room temperature and doesn’t require high pressure. The catalyst particles are suspended in water, which can absorb methane from the surrounding air. For future applications, the researchers envision that it could be painted onto surfaces.

    “Other systems operate at high temperature and high pressure, and they use hydrogen peroxide, which is an expensive chemical, to drive the methane oxidation. But our enzyme produces hydrogen peroxide from oxygen, so I think our system could be very cost-effective and scalable,” Kim says.

    Creating a system that incorporates both enzymes and artificial catalysts is a “smart strategy,” says Damien Debecker, a professor at the Institute of Condensed Matter and Nanosciences at the University of Louvain, Belgium.

    “Combining these two families of catalysts is challenging, as they tend to operate in rather distinct operation conditions. By unlocking this constraint and mastering the art of chemo-enzymatic cooperation, hybrid catalysis becomes key-enabling: It opens new perspectives to run complex reaction systems in an intensified way,” says Debecker, who was not involved in the research.

    Building polymers

    Once formaldehyde is produced, the researchers showed they could use that molecule to generate polymers by adding urea, a nitrogen-containing molecule found in urine. This resin-like polymer, known as urea-formaldehyde, is now used in particle board, textiles, and other products.

    The researchers envision that this catalyst could be incorporated into pipes used to transport natural gas. Within those pipes, the catalyst could generate a polymer that could act as a sealant to heal cracks in the pipes, which are a common source of methane leakage. The catalyst could also be applied as a film to coat surfaces that are exposed to methane gas, producing polymers that could be collected for use in manufacturing, the researchers say.

    Strano’s lab is now working on catalysts that could be used to remove carbon dioxide from the atmosphere and combine it with nitrate to produce urea. That urea could then be mixed with the formaldehyde produced by the zeolite-enzyme catalyst to produce urea-formaldehyde.

    The research was funded by the U.S. Department of Energy.
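
    As a reading aid, the two-step cycle described above can be written as a simplified reaction scheme. This is inferred from the story’s description (an iron-zeolite oxidation of methane by hydrogen peroxide, and the textbook alcohol oxidase reaction); the paper’s exact mechanism and stoichiometry may differ.

    ```latex
    % Simplified two-step cycle (illustrative; see the Nature Catalysis paper for the actual mechanism)
    \begin{align*}
    \text{Step 1 (Fe-zeolite):}      \quad & \mathrm{CH_4 + H_2O_2 \longrightarrow CH_3OH + H_2O} \\
    \text{Step 2 (alcohol oxidase):} \quad & \mathrm{CH_3OH + O_2 \longrightarrow CH_2O + H_2O_2} \\
    \text{Net:}                      \quad & \mathrm{CH_4 + O_2 \longrightarrow CH_2O + H_2O}
    \end{align*}
    ```

    Written this way, Kim’s cost argument is easy to see: the hydrogen peroxide consumed in the first step is regenerated in the second, so the net inputs are just methane and oxygen.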

  • An inflatable gastric balloon could help people lose weight

    Gastric balloons — silicone balloons filled with air or saline and placed in the stomach — can help people lose weight by making them feel too full to overeat. However, this effect can eventually wear off as the stomach becomes used to the sensation of fullness.

    To overcome that limitation, MIT engineers have designed a new type of gastric balloon that can be inflated and deflated as needed. In an animal study, they showed that inflating the balloon before a meal caused the animals to reduce their food intake by 60 percent.

    This type of intervention could offer an alternative for people who don’t want to undergo more invasive treatments such as gastric bypass surgery, or people who don’t respond well to weight-loss drugs, the researchers say.

    “The basic concept is we can have this balloon that is dynamic, so it would be inflated right before a meal and then you wouldn’t feel hungry. Then it would be deflated in between meals,” says Giovanni Traverso, an associate professor of mechanical engineering at MIT, a gastroenterologist at Brigham and Women’s Hospital, and the senior author of the study.

    Neil Zixun Jia, who received a PhD from MIT in 2023, is the lead author of the paper, which appears today in the journal Device.

    An inflatable balloon

    Gastric balloons filled with saline are currently approved for use in the United States. These balloons stimulate a sense of fullness in the stomach, and studies have shown that they work well, but the benefits are often temporary.

    “Gastric balloons do work initially. Historically, what has been seen is that the balloon is associated with weight loss. But then in general, the weight gain resumes the same trajectory,” Traverso says. “What we reasoned was perhaps if we had a system that simulates that fullness in a transient way, meaning right before a meal, that could be a way of inducing weight loss.”

    To achieve a longer-lasting effect in patients, the researchers set out to design a device that could expand and contract on demand. They created two prototypes: One is a traditional balloon that inflates and deflates, and the other is a mechanical device with four arms that expand outward, pushing out an elastic polymer shell that presses on the stomach wall.

    In animal tests, the researchers found that the mechanical-arm device could effectively expand to fill the stomach, but they ended up deciding to pursue the balloon option instead.

    “Our sense was that the balloon probably distributed the force better, and down the line, if you have a balloon that is applying the pressure, that is probably a safer approach in the long run,” Traverso says.

    The researchers’ new balloon is similar to a traditional gastric balloon, but it is inserted into the stomach through an incision in the abdominal wall. The balloon is connected to an external controller that can be attached to the skin and contains a pump that inflates and deflates the balloon when needed. Inserting this device would be similar to the procedure used to place a feeding tube into a patient’s stomach, which is commonly done for people who are unable to eat or drink.

    “If people, for example, are unable to swallow, they receive food through a tube like this. We know that we can keep tubes in for years, so there is already precedent for other systems that can stay in the body for a very long time. That gives us some confidence in the longer-term compatibility of this system,” Traverso says.

    Reduced food intake

    In tests in animals, the researchers found that inflating the balloon before meals led to a 60 percent reduction in the amount of food consumed. These studies were done over the course of a month, but the researchers now plan to do longer-term studies to see if this reduction leads to weight loss.

    “The deployment for traditional gastric balloons is usually six months, if not more, and only then will you see a good amount of weight loss. We will have to evaluate our device in a similar or longer time span to prove it really works better,” Jia says.

    If developed for use in humans, the new gastric balloon could offer an alternative to existing obesity treatments. Other treatments for obesity include gastric bypass surgery, “stomach stapling” (a surgical procedure in which the stomach capacity is reduced), and drugs including GLP-1 receptor agonists such as semaglutide.

    The gastric balloon could be an option for patients who are not good candidates for surgery or don’t respond well to weight-loss drugs, Traverso says.

    “For certain patients who are higher-risk, who cannot undergo surgery, or did not tolerate the medication or had some other contraindication, there are limited options,” he says. “Traditional gastric balloons are still being used, but they come with a caveat that eventually the weight loss can plateau, so this is a way of trying to address that fundamental limitation.”

    The research was funded by MIT’s Department of Mechanical Engineering, the Karl van Tassel Career Development Professorship, the Whitaker Health Sciences Fund Fellowship, the T.S. Lin Fellowship, the MIT Undergraduate Research Opportunities Program, and the Boston University Yawkey Funded Internship Program.

  • Is there enough land on Earth to fight climate change and feed the world?

    Capping global warming at 1.5 degrees Celsius is a tall order. Achieving that goal will not only require a massive reduction in greenhouse gas emissions from human activities, but also a substantial reallocation of land to support that effort and sustain the biosphere, including humans. More land will be needed to accommodate a growing demand for bioenergy and nature-based carbon sequestration while ensuring sufficient acreage for food production and ecological sustainability.

    The expanding role of land in a 1.5 C world will be twofold — to remove carbon dioxide from the atmosphere and to produce clean energy. Land-based carbon dioxide removal strategies include bioenergy with carbon capture and storage; direct air capture; and afforestation/reforestation and other nature-based solutions (NBS). Land-based clean energy production includes wind and solar farms and sustainable bioenergy cropland. Any decision to allocate more land for climate mitigation must also address competing needs for long-term food security and ecosystem health.

    Land-based climate mitigation choices vary in terms of costs — the amount of land required, implications for food security, and impacts on biodiversity and other ecosystem services — and benefits — the potential for sequestering greenhouse gases and producing clean energy.

    Now a study in the journal Frontiers in Environmental Science provides the most comprehensive analysis to date of competing land-use and technology options to limit global warming to 1.5 C. Led by researchers at the MIT Center for Sustainability Science and Strategy (CS3), the study applies the MIT Integrated Global System Modeling (IGSM) framework to evaluate the costs and benefits of different land-based climate mitigation options in Sky2050, a 1.5 C climate-stabilization scenario developed by Shell.

    Under this scenario, demand for bioenergy and natural carbon sinks increases along with the need for sustainable farming and food production. To determine if there’s enough land to meet all these growing demands, the research team uses the gigahectare (Gha) — 1 billion hectares, where one hectare is 10,000 square meters, or 2.471 acres — as the standard unit of measurement, along with current estimates of the Earth’s total habitable land area (about 10 Gha) and the land area used for food production and bioenergy (5 Gha).

    The team finds that with transformative changes in policy, land management practices, and consumption patterns, global land is sufficient to provide a sustainable supply of food and ecosystem services throughout this century while also reducing greenhouse gas emissions in alignment with the 1.5 C goal. These transformative changes include policies to protect natural ecosystems; stop deforestation and accelerate reforestation and afforestation; promote advances in sustainable agriculture technology and practice; reduce agricultural and food waste; and incentivize consumers to purchase sustainably produced goods.

    If such changes are implemented, 2.5–3.5 Gha of land would be used for NBS practices to sequester 3–6 gigatonnes (Gt) of CO2 per year, and 0.4–0.6 Gha of land would be allocated for energy production — 0.2–0.3 Gha for bioenergy and 0.2–0.35 Gha for wind and solar power generation.

    “Our scenario shows that there is enough land to support a 1.5 degree C future as long as effective policies at national and global levels are in place,” says CS3 Principal Research Scientist Angelo Gurgel, the study’s lead author. “These policies must not only promote efficient use of land for food, energy, and nature, but also be supported by long-term commitments from government and industry decision-makers.”
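
    A back-of-the-envelope budget shows how the headline claim squares with the figures quoted above. Treating the land categories as roughly additive is a simplification made for this sketch; the scenario’s own accounting may handle overlaps between food, bioenergy, and sequestration land differently.

    ```python
    # Rough land-budget check using the figures quoted in the article (gigahectares, Gha).
    # Assumes the categories are roughly additive, which is a simplification.

    habitable_land_gha = 10.0            # approximate habitable land on Earth
    food_and_bioenergy_today_gha = 5.0   # land currently used for food production and bioenergy

    nbs_gha = (2.5, 3.5)                 # nature-based solutions (carbon sequestration)
    energy_gha = (0.4, 0.6)              # bioenergy plus wind and solar

    low = food_and_bioenergy_today_gha + nbs_gha[0] + energy_gha[0]
    high = food_and_bioenergy_today_gha + nbs_gha[1] + energy_gha[1]

    print(f"Total demand: {low:.1f}-{high:.1f} Gha of ~{habitable_land_gha:.0f} Gha habitable land")
    print(f"Left for other ecosystems and uses: {habitable_land_gha - high:.1f}-{habitable_land_gha - low:.1f} Gha")
    ```

    Even at the high end, the quoted allocations sum to less than the roughly 10 Gha of habitable land, which is the sense in which the study finds the budget can close, provided the transformative policy changes listed above actually materialize.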

  • To design better water filters, MIT engineers look to manta rays

    Filter feeders are everywhere in the animal world, from tiny crustaceans and certain types of coral and krill, to various molluscs, barnacles, and even massive basking sharks and baleen whales. Now, MIT engineers have found that one filter feeder has evolved to sift food in ways that could improve the design of industrial water filters.

    In a paper appearing this week in the Proceedings of the National Academy of Sciences, the team characterizes the filter-feeding mechanism of the mobula ray — a family of aquatic rays that includes two manta species and seven devil rays. Mobula rays feed by swimming open-mouthed through plankton-rich regions of the ocean and filtering plankton particles into their gullet as water streams into their mouths and out through their gills.

    The floor of the mobula ray’s mouth is lined on either side with parallel, comb-like structures, called plates, that siphon water into the ray’s gills. The MIT team has shown that the dimensions of these plates may allow incoming plankton to bounce all the way across the plates and further into the ray’s cavity, rather than out through the gills. What’s more, the ray’s gills absorb oxygen from the outflowing water, helping the ray breathe while it feeds.

    “We show that the mobula ray has evolved the geometry of these plates to be the perfect size to balance feeding and breathing,” says study author Anette “Peko” Hosoi, the Pappalardo Professor of Mechanical Engineering at MIT.

    The engineers fabricated a simple water filter modeled after the mobula ray’s plankton-filtering features. They studied how water flowed through the filter when it was fitted with 3D-printed plate-like structures. The team took the results of these experiments and drew up a blueprint, which they say designers can use to optimize industrial cross-flow filters, which are broadly similar in configuration to that of the mobula ray.

    “We want to expand the design space of traditional cross-flow filtration with new knowledge from the manta ray,” says lead author and MIT postdoc Xinyu Mao PhD ’24. “People can choose a parameter regime of the mobula ray so they could potentially improve overall filter performance.”

    Hosoi and Mao co-authored the new study with Irmgard Bischofberger, associate professor of mechanical engineering at MIT.

    A better trade-off

    The new study grew out of the group’s focus on filtration during the height of the Covid pandemic, when the researchers were designing face masks to filter out the virus. Since then, Mao has shifted focus to study filtration in animals and how certain filter-feeding mechanisms might improve filters used in industry, such as in water treatment plants.

    Mao observed that any industrial filter must strike a balance between permeability (how easily fluid can flow through a filter) and selectivity (how successful a filter is at keeping out particles of a target size). For instance, a membrane that is studded with large holes might be highly permeable, meaning a lot of water can be pumped through using very little energy. However, the membrane’s large holes would let many particles through, making it very low in selectivity. Likewise, a membrane with much smaller pores would be more selective, yet would also require more energy to pump the water through the smaller openings.

    “We asked ourselves, how do we do better with this tradeoff between permeability and selectivity?” Hosoi says.

    As Mao looked into filter-feeding animals, he found that the mobula ray has struck an ideal balance between permeability and selectivity: The ray is highly permeable, in that it can let water into its mouth and out through its gills quickly enough to capture oxygen to breathe. At the same time, it is highly selective, filtering and feeding on plankton rather than letting the particles stream out through the gills.

    The researchers realized that the ray’s filtering features are broadly similar to those of industrial cross-flow filters. These filters are designed such that fluid flows across a permeable membrane that lets through most of the fluid, while any polluting particles continue flowing across the membrane and eventually out into a reservoir of waste.

    The team wondered whether the mobula ray might inspire design improvements to industrial cross-flow filters. For that, they took a deeper dive into the dynamics of mobula ray filtration.

    A vortex key

    As part of their new study, the team fabricated a simple filter inspired by the mobula ray. The filter’s design is what engineers refer to as a “leaky channel” — effectively, a pipe with holes along its sides. In this case, the team’s “channel” consists of two flat, transparent acrylic plates that are glued together at the edges, with a slight opening between the plates through which fluid can be pumped. At one end of the channel, the researchers inserted 3D-printed structures resembling the grooved plates that run along the floor of the mobula ray’s mouth.

    The team then pumped water through the channel at various rates, along with colored dye to visualize the flow. They took images across the channel and observed an interesting transition: At slow pumping rates, the flow was “very peaceful,” and fluid easily slipped through the grooves in the printed plates and out into a reservoir. When the researchers increased the pumping rate, the faster-flowing fluid did not slip through, but appeared to swirl at the mouth of each groove, creating a vortex, similar to a small knot of hair between the tips of a comb’s teeth.

    “This vortex is not blocking water, but it is blocking particles,” Hosoi explains. “Whereas in a slower flow, particles go through the filter with the water, at higher flow rates, particles try to get through the filter but are blocked by this vortex and are shot down the channel instead. The vortex is helpful because it prevents particles from flowing out.”

    The team surmised that vortices are the key to mobula rays’ filter-feeding ability. The ray is able to swim at just the right speed that water, streaming into its mouth, can form vortices between the grooved plates. These vortices effectively block any plankton particles — even those that are smaller than the space between plates. The particles then bounce across the plates and head further into the ray’s cavity, while the rest of the water can still flow between the plates and out through the gills.

    The researchers used the results of their experiments, along with the dimensions of the filtering features of mobula rays, to develop a blueprint for cross-flow filtration.

    “We have provided practical guidance on how to actually filter as the mobula ray does,” Mao offers.

    “You want to design a filter such that you’re in the regime where you generate vortices,” Hosoi says. “Our guidelines tell you: If you want your plant to pump at a certain rate, then your filter has to have a particular pore diameter and spacing to generate vortices that will filter out particles of this size. The mobula ray is giving us a really nice rule of thumb for rational design.”

    This work was supported, in part, by the U.S. National Institutes of Health and the Harvey P. Greenspan Fellowship Fund.
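
    The story does not give the paper’s quantitative design rule, but the flavor of such a guideline can be sketched as a regime check: from the pump rate and filter geometry, compute a pore-scale flow parameter and compare it with a threshold above which vortices form at the groove openings. Everything in the sketch below, including the Reynolds-number criterion and the threshold value, is a hypothetical illustration rather than the blueprint published in PNAS.

    ```python
    # Hypothetical sketch of a cross-flow filter "regime check" in the spirit of the
    # guideline described above. The vortex-onset criterion and threshold are
    # illustrative assumptions, not the rule from the paper.

    WATER_KINEMATIC_VISCOSITY = 1.0e-6  # m^2/s, water at roughly 20 C

    def pore_reynolds_number(flow_rate_m3_s: float, channel_area_m2: float,
                             pore_spacing_m: float) -> float:
        """Reynolds number based on mean channel velocity and pore spacing."""
        mean_velocity = flow_rate_m3_s / channel_area_m2
        return mean_velocity * pore_spacing_m / WATER_KINEMATIC_VISCOSITY

    def in_vortex_regime(re_pore: float, threshold: float = 100.0) -> bool:
        """Hypothetical criterion: vortices form at the groove openings above some
        threshold Reynolds number (the real threshold depends on geometry)."""
        return re_pore >= threshold

    # Example pump rate and geometry, chosen purely for illustration.
    re = pore_reynolds_number(flow_rate_m3_s=2.0e-4, channel_area_m2=1.0e-3, pore_spacing_m=5.0e-4)
    print(f"Pore Reynolds number ~ {re:.0f}; vortex (particle-blocking) regime: {in_vortex_regime(re)}")
    ```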

  • New AI tool generates realistic satellite images of future flooding

    Visualizing the potential impacts of a hurricane on people’s homes before it hits can help residents prepare and decide whether to evacuate.

    MIT scientists have developed a method that generates satellite imagery from the future to depict how a region would look after a potential flooding event. The method combines a generative artificial intelligence model with a physics-based flood model to create realistic, bird’s-eye-view images of a region, showing where flooding is likely to occur given the strength of an oncoming storm.

    As a test case, the team applied the method to Houston and generated satellite images depicting what certain locations around the city would look like after a storm comparable to Hurricane Harvey, which hit the region in 2017. The team compared these generated images with actual satellite images taken of the same regions after Harvey hit, as well as with AI-generated images that did not incorporate the physics-based flood model.

    The team’s physics-reinforced method generated satellite images of future flooding that were more realistic and accurate. The AI-only method, in contrast, generated images of flooding in places where flooding is not physically possible.

    The team’s method is a proof of concept, meant to demonstrate a case in which generative AI models can generate realistic, trustworthy content when paired with a physics-based model. In order to apply the method to other regions to depict flooding from future storms, it will need to be trained on many more satellite images to learn how flooding would look in other regions.

    “The idea is: One day, we could use this before a hurricane, where it provides an additional visualization layer for the public,” says Björn Lütjens, a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences, who led the research while he was a doctoral student in MIT’s Department of Aeronautics and Astronautics (AeroAstro). “One of the biggest challenges is encouraging people to evacuate when they are at risk. Maybe this could be another visualization to help increase that readiness.”

    To illustrate the potential of the new method, which they have dubbed the “Earth Intelligence Engine,” the team has made it available as an online resource for others to try.

    The researchers report their results today in the journal IEEE Transactions on Geoscience and Remote Sensing. The study’s MIT co-authors include Brandon Leshchinskiy; Aruna Sankaranarayanan; and Dava Newman, professor of AeroAstro and director of the MIT Media Lab; along with collaborators from multiple institutions.

    Generative adversarial images

    The new study is an extension of the team’s efforts to apply generative AI tools to visualize future climate scenarios.

    “Providing a hyper-local perspective of climate seems to be the most effective way to communicate our scientific results,” says Newman, the study’s senior author. “People relate to their own zip code, their local environment where their family and friends live. Providing local climate simulations becomes intuitive, personal, and relatable.”

    For this study, the authors use a conditional generative adversarial network, or GAN, a type of machine learning method that can generate realistic images using two competing, or “adversarial,” neural networks. The first “generator” network is trained on pairs of real data, such as satellite images before and after a hurricane. The second “discriminator” network is then trained to distinguish between the real satellite imagery and imagery synthesized by the first network.

    Each network automatically improves its performance based on feedback from the other network. The idea, then, is that such an adversarial push and pull should ultimately produce synthetic images that are indistinguishable from the real thing. Nevertheless, GANs can still produce “hallucinations,” or factually incorrect features that shouldn’t be there in an otherwise realistic image.

    “Hallucinations can mislead viewers,” says Lütjens, who began to wonder whether such hallucinations could be avoided, so that generative AI tools can be trusted to help inform people, particularly in risk-sensitive scenarios. “We were thinking: How can we use these generative AI models in a climate-impact setting, where having trusted data sources is so important?”

    Flood hallucinations

    In their new work, the researchers considered a risk-sensitive scenario in which generative AI is tasked with creating satellite images of future flooding that could be trustworthy enough to inform decisions about how to prepare and potentially evacuate people out of harm’s way.

    Typically, policymakers can get an idea of where flooding might occur based on visualizations in the form of color-coded maps. These maps are the final product of a pipeline of physical models that usually begins with a hurricane track model, which feeds into a wind model that simulates the pattern and strength of winds over a local region. This is combined with a flood or storm surge model that forecasts how wind might push any nearby body of water onto land. A hydraulic model then maps out where flooding will occur based on the local flood infrastructure, and generates a visual, color-coded map of flood elevations over a particular region.

    “The question is: Can visualizations of satellite imagery add another level to this, that is a bit more tangible and emotionally engaging than a color-coded map of reds, yellows, and blues, while still being trustworthy?” Lütjens says.

    The team first tested how generative AI alone would produce satellite images of future flooding. They trained a GAN on actual satellite images taken as satellites passed over Houston before and after Hurricane Harvey. When they tasked the generator with producing new flood images of the same regions, they found that the images resembled typical satellite imagery, but a closer look revealed hallucinations in some images, in the form of floods where flooding should not be possible (for instance, in locations at higher elevation).

    To reduce hallucinations and increase the trustworthiness of the AI-generated images, the team paired the GAN with a physics-based flood model that incorporates real, physical parameters and phenomena, such as an approaching hurricane’s trajectory, storm surge, and flood patterns. With this physics-reinforced method, the team generated satellite images around Houston that depict the same flood extent, pixel by pixel, as forecasted by the flood model.

    “We show a tangible way to combine machine learning with physics for a use case that’s risk-sensitive, which requires us to analyze the complexity of Earth’s systems and project future actions and possible scenarios to keep people out of harm’s way,” Newman says. “We can’t wait to get our generative AI tools into the hands of decision-makers at the local community level, which could make a significant difference and perhaps save lives.”

    The research was supported, in part, by the MIT Portugal Program, the DAF-MIT Artificial Intelligence Accelerator, NASA, and Google Cloud.
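
    For readers unfamiliar with the architecture described above, here is a minimal sketch of one training step for a conditional GAN whose generator is conditioned on a pre-storm image plus a physics-derived flood mask. It is a generic, toy-sized illustration of the conditioning idea written in PyTorch, not the Earth Intelligence Engine’s actual networks, losses, or data.

    ```python
    # Minimal conditional-GAN training step (generic illustration, not the paper's model).
    # The generator sees a pre-storm image and a flood mask; the discriminator judges
    # (condition, image) pairs. Networks and tensors are deliberately toy-sized.
    import torch
    import torch.nn as nn

    class Generator(nn.Module):
        def __init__(self, in_ch=4, out_ch=3):  # 3 RGB channels + 1 flood-mask channel
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, out_ch, 3, padding=1), nn.Tanh(),
            )
        def forward(self, pre_image, flood_mask):
            return self.net(torch.cat([pre_image, flood_mask], dim=1))

    class Discriminator(nn.Module):
        def __init__(self, in_ch=7):  # condition (3 + 1 channels) + candidate image (3)
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Flatten(), nn.LazyLinear(1),
            )
        def forward(self, pre_image, flood_mask, candidate):
            return self.net(torch.cat([pre_image, flood_mask, candidate], dim=1))

    G, D = Generator(), Discriminator()
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()

    # One toy batch: pre-storm imagery, a flood mask from a physics model, the real post-storm image.
    pre, mask, real_post = torch.randn(2, 3, 64, 64), torch.rand(2, 1, 64, 64), torch.randn(2, 3, 64, 64)

    # Discriminator step: real pairs labeled 1, generated pairs labeled 0.
    fake_post = G(pre, mask).detach()
    d_loss = (bce(D(pre, mask, real_post), torch.ones(2, 1))
              + bce(D(pre, mask, fake_post), torch.zeros(2, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make generated pairs look real to the discriminator.
    g_loss = bce(D(pre, mask, G(pre, mask)), torch.ones(2, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    print(f"d_loss = {d_loss.item():.3f}, g_loss = {g_loss.item():.3f}")
    ```

    In the physics-reinforced setup the story describes, the important difference from an unconstrained GAN is that the flood mask comes from the hydraulic model rather than being invented by the generator, so the generated image cannot show water where the physics says there is none.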

  • Advancing urban tree monitoring with AI-powered digital twins

    The Irish philosopher George Berkeley, best known for his theory of immaterialism, once famously mused, “If a tree falls in a forest and no one is around to hear it, does it make a sound?”

    What about AI-generated trees? They probably wouldn’t make a sound, but they will be critical nonetheless for applications such as adaptation of urban flora to climate change. To that end, the novel “Tree-D Fusion” system developed by researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), Google, and Purdue University merges AI and tree-growth models with Google’s Auto Arborist data to create accurate 3D models of existing urban trees. The project has produced the first-ever large-scale database of 600,000 environmentally aware, simulation-ready tree models across North America.

    “We’re bridging decades of forestry science with modern AI capabilities,” says Sara Beery, MIT electrical engineering and computer science (EECS) assistant professor, MIT CSAIL principal investigator, and a co-author on a new paper about Tree-D Fusion. “This allows us to not just identify trees in cities, but to predict how they’ll grow and impact their surroundings over time. We’re not ignoring the past 30 years of work in understanding how to build these 3D synthetic models; instead, we’re using AI to make this existing knowledge more useful across a broader set of individual trees in cities around North America, and eventually the globe.”

    Tree-D Fusion builds on previous urban forest monitoring efforts that used Google Street View data, but branches it forward by generating complete 3D models from single images. While earlier attempts at tree modeling were limited to specific neighborhoods, or struggled with accuracy at scale, Tree-D Fusion can create detailed models that include typically hidden features, such as the back side of trees that aren’t visible in street-view photos.

    The technology’s practical applications extend far beyond mere observation. City planners could use Tree-D Fusion to one day peer into the future, anticipating where growing branches might tangle with power lines, or identifying neighborhoods where strategic tree placement could maximize cooling effects and air quality improvements. These predictive capabilities, the team says, could change urban forest management from reactive maintenance to proactive planning.

    A tree grows in Brooklyn (and many other places)

    The researchers took a hybrid approach to their method, using deep learning to create a 3D envelope of each tree’s shape, then using traditional procedural models to simulate realistic branch and leaf patterns based on the tree’s genus. This combination helped the model predict how trees would grow under different environmental conditions and climate scenarios, such as different possible local temperatures and varying access to groundwater.

    Now, as cities worldwide grapple with rising temperatures, this research offers a new window into the future of urban forests. In a collaboration with MIT’s Senseable City Lab, the Purdue University and Google team is embarking on a global study that re-imagines trees as living climate shields. Their digital modeling system captures the intricate dance of shade patterns throughout the seasons, revealing how strategic urban forestry could hopefully change sweltering city blocks into more naturally cooled neighborhoods.

    “Every time a street mapping vehicle passes through a city now, we’re not just taking snapshots — we’re watching these urban forests evolve in real-time,” says Beery. “This continuous monitoring creates a living digital forest that mirrors its physical counterpart, offering cities a powerful lens to observe how environmental stresses shape tree health and growth patterns across their urban landscape.”

    AI-based tree modeling has emerged as an ally in the quest for environmental justice: By mapping urban tree canopy in unprecedented detail, a sister project from the Google AI for Nature team has helped uncover disparities in green space access across different socioeconomic areas. “We’re not just studying urban forests — we’re trying to cultivate more equity,” says Beery. The team is now working closely with ecologists and tree health experts to refine these models, ensuring that as cities expand their green canopies, the benefits branch out to all residents equally.

    It’s a breeze

    While Tree-D Fusion marks some major “growth” in the field, trees can be uniquely challenging for computer vision systems. Unlike the rigid structures of buildings or vehicles that current 3D modeling techniques handle well, trees are nature’s shape-shifters — swaying in the wind, interweaving branches with neighbors, and constantly changing their form as they grow. The Tree-D Fusion models are “simulation-ready” in that they can estimate the shape of the trees in the future, depending on the environmental conditions.

    “What makes this work exciting is how it pushes us to rethink fundamental assumptions in computer vision,” says Beery. “While 3D scene understanding techniques like photogrammetry or NeRF [neural radiance fields] excel at capturing static objects, trees demand new approaches that can account for their dynamic nature, where even a gentle breeze can dramatically alter their structure from moment to moment.”

    The team’s approach of creating rough structural envelopes that approximate each tree’s form has proven remarkably effective, but certain issues remain unsolved. Perhaps the most vexing is the “entangled tree problem”: when neighboring trees grow into each other, their intertwined branches create a puzzle that no current AI system can fully unravel.

    The scientists see their dataset as a springboard for future innovations in computer vision, and they’re already exploring applications beyond street view imagery, looking to extend their approach to platforms like iNaturalist and wildlife camera traps.

    “This marks just the beginning for Tree-D Fusion,” says Jae Joong Lee, a Purdue University PhD student who developed, implemented, and deployed the Tree-D Fusion algorithm. “Together with my collaborators, I envision expanding the platform’s capabilities to a planetary scale. Our goal is to use AI-driven insights in service of natural ecosystems — supporting biodiversity, promoting global sustainability, and ultimately, benefiting the health of our entire planet.”

    Beery and Lee’s co-authors are Jonathan Huang, Scaled Foundations head of AI (formerly of Google), and four others from Purdue University: PhD student Bosheng Li, Professor and Dean’s Chair of Remote Sensing Songlin Fei, Assistant Professor Raymond Yeh, and Professor and Associate Head of Computer Science Bedrich Benes. Their work is based on efforts supported by the United States Department of Agriculture’s (USDA) Natural Resources Conservation Service and is directly supported by the USDA’s National Institute of Food and Agriculture. The researchers presented their findings at the European Conference on Computer Vision this month.
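
    The hybrid design described above, a learned 3D envelope refined by a genus-specific procedural growth model, can be summarized as a two-stage pipeline. The sketch below is purely illustrative: the function names, the placeholder envelope, and the growth rule are hypothetical stand-ins, not Tree-D Fusion’s actual models or API.

    ```python
    # Illustrative two-stage pipeline in the spirit of the hybrid approach described above.
    # All names, numbers, and rules here are hypothetical stand-ins for the real system.
    from dataclasses import dataclass

    @dataclass
    class TreeEnvelope:
        height_m: float         # overall size of the inferred 3D envelope
        crown_radius_m: float
        genus: str              # predicted genus, used to pick a procedural model

    def predict_envelope_from_image(street_view_image) -> TreeEnvelope:
        """Stage 1 (deep learning, stubbed): infer a coarse 3D envelope and genus
        from a single street-level image."""
        return TreeEnvelope(height_m=8.0, crown_radius_m=2.5, genus="Acer")  # placeholder output

    def grow_procedural_model(envelope: TreeEnvelope, years: int,
                              mean_temp_c: float, water_availability: float) -> TreeEnvelope:
        """Stage 2 (procedural growth, toy rule): project the envelope forward in time
        under assumed environmental conditions."""
        growth_per_year = 0.3 * water_availability * (1.0 if mean_temp_c < 30 else 0.7)
        return TreeEnvelope(
            height_m=envelope.height_m + growth_per_year * years,
            crown_radius_m=envelope.crown_radius_m + 0.4 * growth_per_year * years,
            genus=envelope.genus,
        )

    envelope_today = predict_envelope_from_image(street_view_image=None)
    envelope_in_10_years = grow_procedural_model(envelope_today, years=10,
                                                 mean_temp_c=24.0, water_availability=0.8)
    print(envelope_in_10_years)
    ```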

  • Reality check on technologies to remove carbon dioxide from the air

    In 2015, 195 nations plus the European Union signed the Paris Agreement and pledged to undertake plans designed to limit the global temperature increase to 1.5 degrees Celsius. Yet in 2023, the world exceeded that target for most, if not all, of the year — calling into question the long-term feasibility of achieving it.

    To do so, the world must reduce the levels of greenhouse gases in the atmosphere, and strategies for achieving levels that will “stabilize the climate” have been both proposed and adopted. Many of those strategies combine dramatic cuts in carbon dioxide (CO2) emissions with the use of direct air capture (DAC), a technology that removes CO2 from the ambient air. As a reality check, a team of researchers in the MIT Energy Initiative (MITEI) examined those strategies, and what they found was alarming: The strategies rely on overly optimistic — indeed, unrealistic — assumptions about how much CO2 could be removed by DAC. As a result, the strategies won’t perform as predicted. Nevertheless, the MITEI team recommends that work to develop the DAC technology continue so that it’s ready to help with the energy transition — even if it’s not the silver bullet that solves the world’s decarbonization challenge.

    DAC: The promise and the reality

    Including DAC in plans to stabilize the climate makes sense. Much work is now under way to develop DAC systems, and the technology looks promising. While companies may never run their own DAC systems, they can already buy “carbon credits” based on DAC. Today, a multibillion-dollar market exists in which entities or individuals that face high costs or excessive disruptions to reduce their own carbon emissions can pay others to take emissions-reducing actions on their behalf. Those actions can involve undertaking new renewable energy projects or “carbon-removal” initiatives such as DAC or afforestation/reforestation (planting trees in areas that have never been forested or that were forested in the past).

    DAC-based credits are especially appealing for several reasons, explains Howard Herzog, a senior research engineer at MITEI. With DAC, measuring and verifying the amount of carbon removed is straightforward; the removal is immediate, unlike with planting forests, which may take decades to have an impact; and when DAC is coupled with CO2 storage in geologic formations, the CO2 is kept out of the atmosphere essentially permanently — in contrast to, for example, sequestering it in trees, which may one day burn and release the stored CO2.

    Will current plans that rely on DAC be effective in stabilizing the climate in the coming years? To find out, Herzog and his colleagues Jennifer Morris and Angelo Gurgel, both MITEI principal research scientists, and Sergey Paltsev, a MITEI senior research scientist — all affiliated with the MIT Center for Sustainability Science and Strategy (CS3) — took a close look at the modeling studies on which those plans are based.

    Their investigation identified three unavoidable engineering challenges that together lead to a fourth challenge — high costs for removing a single ton of CO2 from the atmosphere. The details of their findings are reported in a paper published in the journal One Earth on Sept. 20.

    Challenge 1: Scaling up

    When it comes to removing CO2 from the air, nature presents “a major, non-negotiable challenge,” notes the MITEI team: The concentration of CO2 in the air is extremely low — just 420 parts per million, or roughly 0.04 percent. In contrast, the CO2 concentration in flue gases emitted by power plants and industrial processes ranges from 3 percent to 20 percent. Companies now use various carbon capture and sequestration (CCS) technologies to capture CO2 from their flue gases, but capturing CO2 from the air is much more difficult. To explain, the researchers offer the following analogy: “The difference is akin to needing to find 10 red marbles in a jar of 25,000 marbles of which 24,990 are blue [the task representing DAC] versus needing to find about 10 red marbles in a jar of 100 marbles of which 90 are blue [the task for CCS].”

    Given that low concentration, removing a single metric ton (tonne) of CO2 from air requires processing about 1.8 million cubic meters of air, which is roughly equivalent to the volume of 720 Olympic-sized swimming pools. And all that air must be moved across a CO2-capturing sorbent — a feat requiring large equipment. For example, one recently proposed design for capturing 1 million tonnes of CO2 per year would require an “air contactor” equivalent in size to a structure about three stories high and three miles long.

    Recent modeling studies project DAC deployment on the scale of 5 to 40 gigatonnes of CO2 removed per year. (A gigatonne equals 1 billion metric tonnes.) But in their paper, the researchers conclude that the likelihood of deploying DAC at the gigatonne scale is “highly uncertain.”

    Challenge 2: Energy requirement

    Given the low concentration of CO2 in the air and the need to move large quantities of air to capture it, it’s no surprise that even the best DAC processes proposed today would consume large amounts of energy — energy that’s generally supplied by a combination of electricity and heat. Including the energy needed to compress the captured CO2 for transportation and storage, most proposed processes require an equivalent of at least 1.2 megawatt-hours of electricity for each tonne of CO2 removed.

    The source of that electricity is critical. For example, using coal-based electricity to drive an all-electric DAC process would generate 1.2 tonnes of CO2 for each tonne of CO2 captured. The result would be a net increase in emissions, defeating the whole purpose of DAC. So clearly, the energy requirement must be satisfied using either low-carbon electricity or electricity generated using fossil fuels with CCS. All-electric DAC deployed at large scale — say, 10 gigatonnes of CO2 removed annually — would require 12,000 terawatt-hours of electricity, which is more than 40 percent of total global electricity generation today.

    Electricity consumption is expected to grow due to increasing overall electrification of the world economy, so low-carbon electricity will be in high demand for many competing uses — for example, in power generation, transportation, industry, and building operations. Using clean electricity for DAC instead of for reducing CO2 emissions in other critical areas raises concerns about the best uses of clean electricity.

    Many studies assume that a DAC unit could also get energy from “waste heat” generated by some industrial process or facility nearby. In the MITEI researchers’ opinion, “that may be more wishful thinking than reality.” The heat source would need to be within a few miles of the DAC plant for transporting the heat to be economical; given its high capital cost, the DAC plant would need to run nonstop, requiring constant heat delivery; and heat at the temperature required by the DAC plant would have competing uses, for example, for heating buildings. Finally, if DAC is deployed at the gigatonne-per-year scale, waste heat will likely be able to provide only a small fraction of the needed energy.

    Challenge 3: Siting

    Some analysts have asserted that, because air is everywhere, DAC units can be located anywhere. But in reality, siting a DAC plant involves many complex issues. As noted above, DAC plants require significant amounts of energy, so having access to enough low-carbon energy is critical. Likewise, having nearby options for storing the removed CO2 is also critical. If storage sites or pipelines to such sites don’t exist, major new infrastructure will need to be built, and building new infrastructure of any kind is expensive and complicated, involving issues related to permitting, environmental justice, and public acceptability — issues that are, in the words of the researchers, “commonly underestimated in the real world and neglected in models.”

    Two more siting needs must be considered. First, meteorological conditions must be acceptable. By definition, any DAC unit will be exposed to the elements, and factors like temperature and humidity will affect process performance and process availability. And second, a DAC plant will require some dedicated land — though how much is unclear, as the optimal spacing of units is as yet unresolved. Like wind turbines, DAC units need to be properly spaced to ensure maximum performance, such that one unit is not sucking in CO2-depleted air from another unit.

    Challenge 4: Cost

    Considering the first three challenges, the final challenge is clear: The cost per tonne of CO2 removed is inevitably high. Recent modeling studies assume DAC costs as low as $100 to $200 per tonne of CO2 removed. But the researchers found evidence suggesting far higher costs.

    To start, they cite typical costs for power plants and industrial sites that now use CCS to remove CO2 from their flue gases. The cost of CCS in such applications is estimated to be in the range of $50 to $150 per ton of CO2 removed. As explained above, the far lower concentration of CO2 in the air will lead to substantially higher costs.

    As explained under Challenge 1, the DAC units needed to capture the required amount of air are massive. The capital cost of building them will be high, given labor, materials, permitting costs, and so on. Some estimates in the literature exceed $5,000 per tonne captured per year.

    Then there are the ongoing costs of energy. As noted under Challenge 2, removing 1 tonne of CO2 requires the equivalent of 1.2 megawatt-hours of electricity. If that electricity costs $0.10 per kilowatt-hour, the cost of just the electricity needed to remove 1 tonne of CO2 is $120. The researchers point out that assuming such a low price is “questionable,” given the expected increase in electricity demand, future competition for clean energy, and higher costs on a system dominated by renewable — but intermittent — energy sources.

    Then there’s the cost of storage, which is ignored in many DAC cost estimates.

    Clearly, many considerations show that prices of $100 to $200 per tonne are unrealistic, and assuming such low prices will distort assessments of strategies, leading them to underperform going forward.

    The bottom line

    In their paper, the MITEI team calls DAC a “very seductive concept.” Using DAC to suck CO2 out of the air and generate high-quality carbon-removal credits can offset reduction requirements for industries that have hard-to-abate emissions. By doing so, DAC would minimize disruptions to key parts of the world’s economy, including air travel, certain carbon-intensive industries, and agriculture. However, the world would need to generate billions of tonnes of CO2 credits at an affordable price. That prospect doesn’t look likely. The largest DAC plant in operation today removes just 4,000 tonnes of CO2 per year, and the price to buy the company’s carbon-removal credits on the market today is $1,500 per tonne.

    The researchers recognize that there is room for energy-efficiency improvements in the future, but DAC units will always be subject to higher work requirements than CCS applied to power plant or industrial flue gases, and there is not a clear pathway to reducing work requirements much below the levels of current DAC technologies.

    Nevertheless, the researchers recommend that work to develop DAC continue “because it may be needed for meeting net-zero emissions goals, especially given the current pace of emissions.” But their paper concludes with this warning: “Given the high stakes of climate change, it is foolhardy to rely on DAC to be the hero that comes to our rescue.”
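
    The magnitudes behind the four challenges can be reproduced with a few lines of arithmetic. The inputs of 420 ppm, 1.2 megawatt-hours per tonne, $0.10 per kilowatt-hour, and 10 gigatonnes per year come from the text above; the ideal-gas treatment of air and the assumed single-pass capture fraction are simplifications added for this sketch, so the air-volume figure is a rough consistency check rather than an engineering number.

    ```python
    # Rough reproduction of the article's magnitudes. Inputs marked "assumed" are
    # simplifications added for this sketch, not figures from the One Earth paper.

    R = 8.314                # J/(mol*K), gas constant
    T = 288.0                # K, assumed ambient temperature (~15 C)
    P = 101325.0             # Pa, assumed ambient pressure
    CO2_FRACTION = 420e-6    # CO2 fraction of air by volume (from the article)
    CO2_MOLAR_MASS = 0.044   # kg/mol
    CAPTURE_FRACTION = 0.7   # assumed single-pass capture efficiency

    molar_volume = R * T / P                                   # m^3 of air per mole
    co2_kg_per_m3_air = CO2_FRACTION / molar_volume * CO2_MOLAR_MASS
    air_per_tonne_m3 = 1000.0 / (co2_kg_per_m3_air * CAPTURE_FRACTION)
    print(f"Air processed per tonne of CO2: ~{air_per_tonne_m3 / 1e6:.1f} million m^3 "
          f"(~{air_per_tonne_m3 / 2500:.0f} Olympic pools)")

    # Energy: 1.2 MWh per tonne (from the article), scaled to 10 Gt removed per year.
    energy_per_tonne_mwh = 1.2
    annual_twh = energy_per_tonne_mwh * 10e9 / 1e6             # tonnes/yr * MWh/tonne -> TWh/yr
    print(f"Electricity for 10 Gt/yr: ~{annual_twh:,.0f} TWh per year")

    # Electricity cost per tonne at $0.10 per kWh (from the article).
    print(f"Electricity cost per tonne removed: ${energy_per_tonne_mwh * 1000 * 0.10:.0f}")
    ```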