More stories

  • Would you like that coffee with iron?

    Around the world, about 2 billion people suffer from iron deficiency, which can lead to anemia, impaired brain development in children, and increased infant mortality.

    To combat that problem, MIT researchers have come up with a new way to fortify foods and beverages with iron, using small crystalline particles. These particles, known as metal-organic frameworks, could be sprinkled on food, added to staple foods such as bread, or incorporated into drinks like coffee and tea.

    “We’re creating a solution that can be seamlessly added to staple foods across different regions,” says Ana Jaklenec, a principal investigator at MIT’s Koch Institute for Integrative Cancer Research. “What’s considered a staple in Senegal isn’t the same as in India or the U.S., so our goal was to develop something that doesn’t react with the food itself. That way, we don’t have to reformulate for every context — it can be incorporated into a wide range of foods and beverages without compromise.”

    The particles designed in this study can also carry iodine, another critical nutrient, and could be adapted to deliver other important minerals such as zinc, calcium, or magnesium.

    “We are very excited about this new approach and what we believe is a novel application of metal-organic frameworks to potentially advance nutrition, particularly in the developing world,” says Robert Langer, the David H. Koch Institute Professor at MIT and a member of the Koch Institute.

    Jaklenec and Langer are the senior authors of the study, which appears today in the journal Matter. MIT postdoc Xin Yang and Linzixuan (Rhoda) Zhang PhD ’24 are the lead authors of the paper.

    Iron stabilization

    Food fortification can be a successful way to combat nutrient deficiencies, but this approach is often challenging because many nutrients are fragile and break down during storage or cooking. When iron is added to foods, it can react with other molecules in the food, giving the food a metallic taste.

    In previous work, Jaklenec’s lab has shown that encapsulating nutrients in polymers can protect them from breaking down or reacting with other molecules. In a small clinical trial, the researchers found that women who ate bread fortified with encapsulated iron were able to absorb the iron from the food.

    However, one drawback to this approach is that the polymer adds a lot of bulk to the material, limiting the amount of iron or other nutrients that end up in the food.

    “Encapsulating iron in polymers significantly improves its stability and reactivity, making it easier to add to food,” Jaklenec says. “But to be effective, it requires a substantial amount of polymer. That limits how much iron you can deliver in a typical serving, making it difficult to meet daily nutritional targets through fortified foods alone.”

    To overcome that challenge, Yang came up with a new idea: Instead of encapsulating iron in a polymer, the researchers could use iron itself as a building block for a crystalline particle known as a metal-organic framework, or MOF (pronounced “moff”).

    MOFs consist of metal atoms joined by organic molecules called ligands to create a rigid, cage-like structure. Depending on the combination of metals and ligands chosen, they can be used for a wide variety of applications.

    “We thought maybe we could synthesize a metal-organic framework with food-grade ligands and food-grade micronutrients,” Yang says. “Metal-organic frameworks have very high porosity, so they can load a lot of cargo. That’s why we thought we could leverage this platform to make a new metal-organic framework that could be used in the food industry.”

    In this case, the researchers designed a MOF consisting of iron bound to a ligand called fumaric acid, which is often used as a food additive to enhance flavor or help preserve food.

    This structure prevents iron from reacting with polyphenols — compounds commonly found in foods such as whole grains and nuts, as well as coffee and tea. When iron does react with those compounds, it forms a metal-polyphenol complex that cannot be absorbed by the body.

    The MOFs’ structure also allows them to remain stable until they reach an acidic environment, such as the stomach, where they break down and release their iron payload.

    Double-fortified salts

    The researchers also decided to include iodine in their MOF particle, which they call NuMOF. Iodized salt has been very successful at preventing iodine deficiency, and many efforts are now underway to create “double-fortified salts” that would also contain iron.

    Delivering these nutrients together has proven difficult because iron and iodine can react with each other, making each one less likely to be absorbed by the body. In this study, the MIT team showed that once they formed their iron-containing MOF particles, they could load them with iodine in a way that the iron and iodine do not react with each other.

    In tests of the particles’ stability, the researchers found that the NuMOFs could withstand long-term storage, high heat and humidity, and boiling water. Throughout these tests, the particles maintained their structure. When the researchers then fed the particles to mice, they found that both iron and iodine became available in the bloodstream within several hours of NuMOF consumption.

    The researchers are now working on launching a company that is developing coffee and other beverages fortified with iron and iodine. They also hope to continue working toward a double-fortified salt that could be consumed on its own or incorporated into staple food products.

    The research was partially supported by J-WAFS Fellowships for Water and Food Solutions.

    Other authors of the paper include Fangzheng Chen, Wenhao Gao, Zhiling Zheng, Tian Wang, Erika Yan Wang, Behnaz Eshaghi, and Sydney MacDonald.

  • Jessika Trancik named director of the Sociotechnical Systems Research Center

    Jessika Trancik, a professor in MIT’s Institute for Data, Systems, and Society (IDSS), has been named the new director of the Sociotechnical Systems Research Center (SSRC), effective July 1. The SSRC convenes and supports researchers focused on problems and solutions at the intersection of technology and its societal impacts.

    Trancik conducts research on technology innovation and energy systems. At the Trancik Lab, she and her team develop methods drawing on engineering knowledge, data science, and policy analysis. Their work examines the pace and drivers of technological change, helping identify where innovation is occurring most rapidly, how emerging technologies stack up against existing systems, and which performance thresholds matter most for real-world impact. Her models have been used to inform government innovation policy and have been applied across a wide range of industries.

    “Professor Trancik’s deep expertise in the societal implications of technology, and her commitment to developing impactful solutions across industries, make her an excellent fit to lead SSRC,” says Maria C. Yang, interim dean of engineering and William E. Leonhard (1940) Professor of Mechanical Engineering.

    Much of Trancik’s research focuses on the domain of energy systems and on establishing methods for energy technology evaluation, including their costs, performance, and environmental impacts. She covers a wide range of energy services — including electricity, transportation, heating, and industrial processes. Her research has applications in solar and wind energy, energy storage, low-carbon fuels, electric vehicles, and nuclear fission. Trancik is also known for her research on extreme events in renewable energy availability.

    A prolific researcher, Trancik has helped measure progress and inform the development of solar photovoltaics, batteries, electric vehicle charging infrastructure, and other low-carbon technologies — and anticipate future trends. One of her widely cited contributions is quantifying learning rates and identifying where targeted investments can most effectively accelerate innovation (see the sketch below). These tools have been used by U.S. federal agencies, international organizations, and the private sector to shape energy R&D portfolios, climate policy, and infrastructure planning.

    Trancik is committed to engaging and informing the public on energy consumption. She and her team developed the app carboncounter.com, which helps users choose cars with low costs and low environmental impacts.

    As an educator, Trancik teaches courses for students across MIT’s five schools and the MIT Schwarzman College of Computing.

    “The question guiding my teaching and research is how do we solve big societal challenges with technology, and how can we be more deliberate in developing and supporting technologies to get us there?” Trancik said in an article about course IDS.521/IDS.065 (Energy Systems for Climate Change Mitigation).

    Trancik received her undergraduate degree in materials science and engineering from Cornell University. As a Rhodes Scholar, she completed her PhD in materials science at the University of Oxford. She subsequently worked for the United Nations in Geneva, Switzerland, and the Earth Institute at Columbia University. After serving as an Omidyar Research Fellow at the Santa Fe Institute, she joined MIT in 2010 as a faculty member.

    Trancik succeeds Fotini Christia, the Ford International Professor of Social Sciences in the Department of Political Science and director of IDSS, who previously served as director of SSRC.
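    The learning-rate idea mentioned above can be illustrated with a minimal Python sketch of the standard experience-curve (Wright’s law) calculation: fit the slope of cost versus cumulative production on log-log axes, then convert that slope into the fractional cost decline per doubling of production. The data below are invented for illustration and are not from Trancik’s studies.

    ```python
    import numpy as np

    # Wright's law: cost ≈ c0 * (cumulative production)^slope, with slope < 0.
    # Illustrative data only: cumulative production (MW) and unit cost ($/W).
    production = np.array([1e3, 1e4, 1e5, 1e6, 1e7])
    cost = np.array([100.0, 55.0, 30.0, 16.0, 9.0])

    # Fit a line in log-log space; the slope is the experience-curve exponent.
    slope, intercept = np.polyfit(np.log(production), np.log(cost), 1)

    # Learning rate = fractional cost decline per doubling of cumulative production.
    learning_rate = 1 - 2 ** slope
    print(f"learning rate: {learning_rate:.1%} per doubling")
    ```

    With data like these, the fit gives a learning rate of about 17 percent per doubling; values near 20 percent are commonly reported for solar modules.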

  • Surprisingly diverse innovations led to dramatically cheaper solar panels

    The cost of solar panels has dropped by more than 99 percent since the 1970s, enabling widespread adoption of photovoltaic systems that convert sunlight into electricity.

    A new MIT study drills down on the specific innovations that enabled such dramatic cost reductions, revealing that technical advances across a web of diverse research efforts and industries played a pivotal role. The findings could help renewable energy companies make more effective R&D investment decisions and aid policymakers in identifying areas to prioritize to spur growth in manufacturing and deployment.

    The researchers’ modeling approach shows that key innovations often originated outside the solar sector, including advances in semiconductor fabrication, metallurgy, glass manufacturing, oil and gas drilling, construction processes, and even legal domains.

    “Our results show just how intricate the process of cost improvement is, and how much scientific and engineering advances, often at a very basic level, are at the heart of these cost reductions. A lot of knowledge was drawn from different domains and industries, and this network of knowledge is what makes these technologies improve,” says study senior author Jessika Trancik, a professor in MIT’s Institute for Data, Systems, and Society.

    Trancik is joined on the paper by co-lead authors Goksin Kavlak, a former IDSS graduate student and postdoc who is now a senior energy associate at the Brattle Group; Magdalena Klemun, a former IDSS graduate student and postdoc who is now an assistant professor at Johns Hopkins University; former MIT postdoc Ajinkya Kamat; as well as Brittany Smith and Robert Margolis of the National Renewable Energy Laboratory. The research appears today in PLOS ONE.

    Identifying innovations

    This work builds on mathematical models the researchers previously developed to tease out the effects of engineering technologies on the cost of photovoltaic (PV) modules and systems. In this study, the researchers aimed to dig even deeper into the scientific advances that drove those cost declines. They combined their quantitative cost model with a detailed, qualitative analysis of innovations that affected the costs of PV system materials, manufacturing steps, and deployment processes.

    “Our quantitative cost model guided the qualitative analysis, allowing us to look closely at innovations in areas that are hard to measure due to a lack of quantitative data,” Kavlak says.

    Building on earlier work identifying key cost drivers — such as the number of solar cells per module, wiring efficiency, and silicon wafer area — the researchers conducted a structured scan of the literature for innovations likely to affect those drivers. Next, they grouped the innovations to identify patterns, revealing clusters that reduced costs by improving materials or by prefabricating components to streamline manufacturing and installation. Finally, the team tracked the industry origins and timing of each innovation, and consulted domain experts to zero in on the most significant ones.

    All told, they identified 81 unique innovations that affected PV system costs since 1970, from improvements in antireflective coated glass to the implementation of fully online permitting interfaces.

    “With innovations, you can always go to a deeper level, down to things like raw materials processing techniques, so it was challenging to know when to stop. Having that quantitative model to ground our qualitative analysis really helped,” Trancik says.

    The researchers chose to separate PV module costs from so-called balance-of-system (BOS) costs, which cover things like mounting systems, inverters, and wiring. PV modules, which are wired together to form solar panels, are mass-produced and can be exported, while many BOS components are designed, built, and sold at the local level.

    “By examining innovations both at the BOS level and within the modules, we identify the different types of innovations that have emerged in these two parts of PV technology,” Kavlak says.

    BOS costs depend more on soft technologies, nonphysical elements such as permitting procedures, which have contributed significantly less to PV’s past cost improvement than hardware innovations have.

    “Often, it comes down to delays. Time is money, and if you have delays on construction sites and unpredictable processes, that affects these balance-of-system costs,” Trancik says.

    Innovations such as automated permitting software, which flags code-compliant systems for fast-track approval, show promise. Though not yet quantified in this study, the team’s framework could support future analysis of their economic impact and of similar innovations that streamline deployment processes.

    Interconnected industries

    The researchers found that innovations from the semiconductor, electronics, metallurgy, and petroleum industries played a major role in reducing both PV and BOS costs, but BOS costs were also affected by innovations in software engineering and electric utilities. Noninnovation factors, like efficiency gains from bulk purchasing and the accumulation of knowledge in the solar power industry, also reduced some cost variables.

    In addition, while most PV panel innovations originated in research organizations or industry, many BOS innovations were developed by city governments, U.S. states, or professional associations.

    “I knew there was a lot going on with this technology, but the diversity of all these fields and how closely linked they are, and the fact that we can clearly see that network through this analysis, was interesting,” Trancik says.

    “PV was very well-positioned to absorb innovations from other industries — thanks to the right timing, physical compatibility, and supportive policies to adapt innovations for PV applications,” Klemun adds.

    The analysis also reveals the role greater computing power could play in reducing BOS costs through advances like automated engineering review systems and remote site assessment software.

    “In terms of knowledge spillovers, what we’ve seen so far in PV may really just be the beginning,” Klemun says, pointing to the expanding role of robotics and AI-driven digital tools in driving future cost reductions and quality improvements.

    In addition to their qualitative analysis, the researchers demonstrated how this methodology could be used to estimate the quantitative impact of a particular innovation, provided one has the numerical data to plug into the cost equation. For instance, using information about material prices and manufacturing procedures, they estimate that wire sawing, a technique introduced in the 1980s, led to an overall PV system cost decrease of $5 per watt by reducing silicon losses and increasing throughput during fabrication. (A toy version of this kind of calculation appears in the sketch below.)

    “Through this retrospective analysis, you learn something valuable for future strategy because you can see what worked and what didn’t work, and the models can also be applied prospectively. It is also useful to know what adjacent sectors may help support improvement in a particular technology,” Trancik says.

    Moving forward, the researchers plan to apply this methodology to a wide range of technologies, including other renewable energy systems. They also want to further study soft technology to identify innovations or processes that could accelerate cost reductions.

    “Although the process of technological innovation may seem like a black box, we’ve shown that you can study it just like any other phenomenon,” Trancik says.

    This research is funded, in part, by the U.S. Department of Energy Solar Energy Technologies Office.
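    As a rough illustration of how an innovation’s impact can be estimated once numerical data are available, the Python sketch below plugs before-and-after parameter values into a toy bottom-up cost equation and takes the difference. The cost model and all numbers here are hypothetical placeholders, not the paper’s actual cost equation or data.

    ```python
    # Toy bottom-up PV cost model ($/W): silicon materials + line processing + everything else.
    def system_cost(si_g_per_w, si_price_per_g, wafers_per_hour, other_cost_per_w):
        materials = si_g_per_w * si_price_per_g   # silicon cost per watt
        processing = 50.0 / wafers_per_hour       # hypothetical equipment/labor term
        return materials + processing + other_cost_per_w

    # Hypothetical effect of an innovation like wire sawing: less silicon lost
    # per watt and higher wafer throughput, with everything else held fixed.
    before = system_cost(si_g_per_w=16.0, si_price_per_g=0.5,
                         wafers_per_hour=10.0, other_cost_per_w=2.0)
    after = system_cost(si_g_per_w=8.0, si_price_per_g=0.5,
                        wafers_per_hour=25.0, other_cost_per_w=2.0)

    print(f"estimated cost impact: {before - after:.2f} $/W")
    ```

    Holding the other variables fixed and differencing the cost equation is the same basic move the researchers describe for wire sawing; their $5-per-watt estimate comes from real material-price and manufacturing data rather than placeholders like these.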

  • Eco-driving measures could significantly reduce vehicle emissions

    Any motorist who has ever waited through multiple cycles for a traffic light to turn green knows how annoying signalized intersections can be. But sitting at intersections isn’t just a drag on drivers’ patience — unproductive vehicle idling could contribute as much as 15 percent of the carbon dioxide emissions from U.S. land transportation.

    A large-scale modeling study led by MIT researchers reveals that eco-driving measures, which can involve dynamically adjusting vehicle speeds to reduce stopping and excessive acceleration, could significantly reduce those CO2 emissions.

    Using a powerful artificial intelligence method called deep reinforcement learning, the researchers conducted an in-depth impact assessment of the factors affecting vehicle emissions in three major U.S. cities. Their analysis indicates that fully adopting eco-driving measures could cut annual city-wide intersection carbon emissions by 11 to 22 percent, without slowing traffic throughput or affecting vehicle and traffic safety.

    Even if only 10 percent of vehicles on the road employ eco-driving, it would result in 25 to 50 percent of the total reduction in CO2 emissions, the researchers found. In addition, dynamically optimizing speed limits at about 20 percent of intersections provides 70 percent of the total emission benefits. This indicates that eco-driving measures could be implemented gradually while still having measurable, positive impacts on mitigating climate change and improving public health.

    An animated GIF compares what 20 percent eco-driving adoption looks like to 100 percent eco-driving adoption. Image: Courtesy of the researchers

    “Vehicle-based control strategies like eco-driving can move the needle on climate change reduction. We’ve shown here that modern machine-learning tools, like deep reinforcement learning, can accelerate the kinds of analysis that support sociotechnical decision making. This is just the tip of the iceberg,” says senior author Cathy Wu, the Class of 1954 Career Development Associate Professor in Civil and Environmental Engineering (CEE) and the Institute for Data, Systems, and Society (IDSS) at MIT, and a member of the Laboratory for Information and Decision Systems (LIDS).

    She is joined on the paper by lead author Vindula Jayawardana, an MIT graduate student; as well as MIT graduate students Ao Qu, Cameron Hickert, and Edgar Sanchez; MIT undergraduate Catherine Tang; Baptiste Freydt, a graduate student at ETH Zurich; and Mark Taylor and Blaine Leonard of the Utah Department of Transportation. The research appears in Transportation Research Part C: Emerging Technologies.

    A multi-part modeling study

    Traffic control measures typically call to mind fixed infrastructure, like stop signs and traffic signals. But as vehicles become more technologically advanced, this presents an opportunity for eco-driving, a catch-all term for vehicle-based traffic control measures like the use of dynamic speeds to reduce energy consumption.

    In the near term, eco-driving could involve speed guidance in the form of vehicle dashboards or smartphone apps. In the longer term, it could involve intelligent speed commands that directly control the acceleration of semi-autonomous and fully autonomous vehicles through vehicle-to-infrastructure communication systems.

    “Most prior work has focused on how to implement eco-driving. We shifted the frame to consider the question of should we implement eco-driving. If we were to deploy this technology at scale, would it make a difference?” Wu says.

    To answer that question, the researchers embarked on a multifaceted modeling study that would take the better part of four years to complete. They began by identifying 33 factors that influence vehicle emissions, including temperature, road grade, intersection topology, age of the vehicle, traffic demand, vehicle types, driver behavior, traffic signal timing, and road geometry.

    “One of the biggest challenges was making sure we were diligent and didn’t leave out any major factors,” Wu says.

    Then they used data from OpenStreetMap, U.S. geological surveys, and other sources to create digital replicas of more than 6,000 signalized intersections in three cities — Atlanta, San Francisco, and Los Angeles — and simulated more than a million traffic scenarios.

    The researchers used deep reinforcement learning to optimize each scenario for eco-driving to achieve the maximum emissions benefits. Reinforcement learning optimizes the vehicles’ driving behavior through trial-and-error interactions with a high-fidelity traffic simulator, rewarding vehicle behaviors that are more energy-efficient while penalizing those that are not. (A stylized example of such a reward appears in the sketch below.)

    The researchers cast the problem as a decentralized cooperative multi-agent control problem, in which the vehicles cooperate to achieve overall energy efficiency, even among non-participating vehicles, and act in a decentralized manner, avoiding the need for costly communication between vehicles.

    However, training vehicle behaviors that generalize across diverse intersection traffic scenarios was a major challenge. The researchers observed that some scenarios are more similar to one another than others, such as scenarios with the same number of lanes or the same number of traffic signal phases. As such, they trained separate reinforcement learning models for different clusters of traffic scenarios, yielding better emission benefits overall.

    But even with the help of AI, analyzing citywide traffic at the network level would be so computationally intensive it could take another decade to unravel, Wu says. Instead, the team broke the problem down and solved each eco-driving scenario at the individual intersection level.

    “We carefully constrained the impact of eco-driving control at each intersection on neighboring intersections. In this way, we dramatically simplified the problem, which enabled us to perform this analysis at scale, without introducing unknown network effects,” she says.

    Significant emissions benefits

    When they analyzed the results, the researchers found that full adoption of eco-driving could reduce intersection emissions by between 11 and 22 percent. These benefits differ depending on the layout of a city’s streets. A denser city like San Francisco has less room to implement eco-driving between intersections, offering a possible explanation for its reduced emission savings, while Atlanta could see greater benefits given its higher speed limits.

    Even if only 10 percent of vehicles employ eco-driving, a city could still realize 25 to 50 percent of the total emissions benefit because of car-following dynamics: Non-eco-driving vehicles would follow controlled eco-driving vehicles as they optimize speed to pass smoothly through intersections, reducing their carbon emissions as well.

    In some cases, eco-driving could also increase vehicle throughput by minimizing emissions. However, Wu cautions that increasing throughput could result in more drivers taking to the roads, reducing emissions benefits.

    And while the team’s analysis of widely used safety metrics known as surrogate safety measures, such as time to collision, suggests that eco-driving is as safe as human driving, it could cause unexpected behavior in human drivers. More research is needed to fully understand potential safety impacts, Wu says.

    The results also show that eco-driving could provide even greater benefits when combined with alternative transportation decarbonization solutions. For instance, 20 percent eco-driving adoption in San Francisco would cut emission levels by 7 percent, but when combined with the projected adoption of hybrid and electric vehicles, it would cut emissions by 17 percent.

    “This is a first attempt to systematically quantify network-wide environmental benefits of eco-driving. This is a great research effort that will serve as a key reference for others to build on in the assessment of eco-driving systems,” says Hesham Rakha, the Samuel L. Pritchard Professor of Engineering at Virginia Tech, who was not involved with this research.

    And while the researchers focused on carbon emissions, the benefits are highly correlated with improvements in fuel consumption, energy use, and air quality.

    “This is almost a free intervention. We already have smartphones in our cars, and we are rapidly adopting cars with more advanced automation features. For something to scale quickly in practice, it must be relatively simple to implement and shovel-ready. Eco-driving fits that bill,” Wu says.

    This work is funded, in part, by Amazon and the Utah Department of Transportation.
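    To make the reinforcement-learning setup concrete, here is a minimal Python sketch of the kind of per-step reward an eco-driving agent might optimize: penalize idling and hard acceleration (as crude surrogates for emissions) while rewarding progress through the intersection. The functional form and coefficients are illustrative assumptions, not the paper’s actual reward.

    ```python
    def eco_driving_reward(speed, accel, stopped, w_emis=1.0, w_progress=0.05):
        """Toy per-step reward for one controlled vehicle.

        speed: m/s, accel: m/s^2, stopped: True if the vehicle is idling.
        """
        # Crude emissions surrogate: idling and aggressive acceleration are
        # costly; smooth cruising is cheap.
        emissions = 0.3 * float(stopped) + 0.05 * max(accel, 0.0) ** 2 + 0.01 * speed
        progress = speed  # throughput proxy: keep traffic moving
        return -w_emis * emissions + w_progress * progress

    # Example: idling at a red light vs. cruising smoothly at 12 m/s.
    print(eco_driving_reward(speed=0.0, accel=0.0, stopped=True))    # negative: wasteful
    print(eco_driving_reward(speed=12.0, accel=0.2, stopped=False))  # positive: efficient
    ```

    In the study’s actual setup, rewards come from a high-fidelity traffic simulator and the control problem is decentralized and cooperative across many vehicles; a sketch like this only shows the shape of the trade-off being optimized.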

  • Creeping crystals: Scientists observe “salt creep” at the single-crystal scale

    Salt creeping, a phenomenon that occurs in both natural and industrial processes, describes the collection and migration of salt crystals from evaporating solutions onto surfaces. Once they start collecting, the crystals climb, spreading away from the solution. This creeping behavior, according to researchers, can cause damage or be harnessed for good, depending on the context. New research published June 30 in the journal Langmuir is the first to show salt creeping at a single-crystal scale and beneath a liquid’s meniscus.

    “The work not only explains how salt creeping begins, but why it begins and when it does,” says Joseph Phelim Mooney, a postdoc in the MIT Device Research Laboratory and one of the authors of the new study. “We hope this level of insight helps others, whether they’re tackling water scarcity, preserving ancient murals, or designing longer-lasting infrastructure.”

    The work is the first to directly visualize how salt crystals grow and interact with surfaces underneath a liquid meniscus, something that has been theorized for decades but never actually imaged or confirmed at this level. It offers fundamental insights that could affect a wide range of fields — from mineral extraction and desalination to anti-fouling coatings, membrane design for separation science, and even art conservation, where salt damage is a major threat to heritage materials.

    In civil engineering applications, for example, the research can help explain why and when salt crystals start growing across surfaces like concrete, stone, or building materials. “These crystals can exert pressure and cause cracking or flaking, reducing the long-term durability of structures,” says Mooney. “By pinpointing the moment when salt begins to creep, engineers can better design protective coatings or drainage systems to prevent this form of degradation.”

    For a field like art conservation, where salt can be devastating to murals, frescoes, and ancient artifacts, often forming beneath the surface before visible damage appears, the work can help identify the exact conditions that cause salt to start moving and spreading, allowing conservators to act earlier and more precisely to protect heritage objects.

    The work began during Mooney’s Marie Curie Fellowship at MIT. “I was focused on improving desalination systems and quickly ran into [salt buildup as] a major roadblock,” he says. “[Salt] was everywhere, coating surfaces, clogging flow paths, and undermining the efficiency of our designs. I realized we didn’t fully understand how or why salt starts creeping across surfaces in the first place.”

    That experience led Mooney to team up with colleagues to dig into the fundamentals of salt crystallization at the air–liquid–solid interface. “We wanted to zoom in, to really see the moment salt begins to move, so we turned to in situ X-ray microscopy,” he says. “What we found gave us a whole new way to think about surface fouling, material degradation, and controlled crystallization.”

    The new research may, in fact, allow better control of the crystallization processes required to remove salt from water in zero-liquid discharge systems. It can also be used to explain how and when scaling happens on equipment surfaces, and may support emerging climate technologies that depend on smart control of evaporation and crystallization. The work also supports mineral and salt extraction applications, where salt creeping can be both a bottleneck and an opportunity. In these applications, Mooney says, “by understanding the precise physics of salt formation at surfaces, operators can optimize crystal growth, improving recovery rates and reducing material losses.”

    Mooney’s co-authors on the paper include fellow MIT Device Lab researchers Omer Refet Caylan, Bachir El Fil (now an associate professor at Georgia Tech), and Lenan Zhang (now an associate professor at Cornell University); Jeff Punch and Vanessa Egan of the University of Limerick; and Jintong Gao of Cornell.

    The research was conducted using in situ X-ray microscopy. Mooney says the team’s big realization occurred when they observed a single salt crystal pinning itself to the surface, which kicked off a cascading chain reaction of growth.

    “People had speculated about this, but we captured it on X-ray for the first time. It felt like watching the microscopic moment where everything tips, the ignition points of a self-propagating process,” says Mooney. “Even more surprising was what followed: The salt crystal didn’t just grow passively to fill the available space. It pierced through the liquid-air interface and reshaped the meniscus itself, setting up the perfect conditions for the next crystal. That subtle, recursive mechanism had never been visually documented before — and seeing it play out in real time completely changed how we thought about salt crystallization.”

    The paper, “In Situ X-ray Microscopy Unraveling the Onset of Salt Creeping at a Single-Crystal Level,” is available now in the journal Langmuir. Research was conducted in MIT.nano.

  • Why animals are a critical part of forest carbon absorption

    A lot of attention has been paid to how climate change can drive biodiversity loss. Now, MIT researchers have shown the reverse is also true: Reductions in biodiversity can jeopardize one of Earth’s most powerful levers for mitigating climate change.

    In a paper published in PNAS, the researchers showed that following deforestation, naturally regrowing tropical forests with healthy populations of seed-dispersing animals can absorb up to four times more carbon than similar forests with fewer seed-dispersing animals. Because tropical forests are currently Earth’s largest land-based carbon sink, the findings improve our understanding of a potent tool to fight climate change.

    “The results underscore the importance of animals in maintaining healthy, carbon-rich tropical forests,” says Evan Fricke, a research scientist in the MIT Department of Civil and Environmental Engineering and the lead author of the new study. “When seed-dispersing animals decline, we risk weakening the climate-mitigating power of tropical forests.”

    Fricke’s co-authors on the paper include César Terrer, the Tianfu Career Development Associate Professor at MIT; Charles Harvey, an MIT professor of civil and environmental engineering; and Susan Cook-Patton of The Nature Conservancy.

    The study combines a wide array of data on animal biodiversity, movement, and seed dispersal across thousands of animal species, along with carbon accumulation data from thousands of tropical forest sites. The researchers say the results are the clearest evidence yet that seed-dispersing animals play an important role in forests’ ability to absorb carbon, and that the findings underscore the need to address biodiversity loss and climate change as connected parts of a delicate ecosystem rather than as separate problems in isolation.

    “It’s been clear that climate change threatens biodiversity, and now this study shows how biodiversity losses can exacerbate climate change,” Fricke says. “Understanding that two-way street helps us understand the connections between these challenges, and how we can address them. These are challenges we need to tackle in tandem, and the contribution of animals to tropical forest carbon shows that there are win-wins possible when supporting biodiversity and fighting climate change at the same time.”

    Putting the pieces together

    The next time you see a video of a monkey or bird enjoying a piece of fruit, consider that the animals are actually playing an important role in their ecosystems. Research has shown that by digesting the seeds and defecating somewhere else, animals can help with the germination, growth, and long-term survival of the plant.

    Fricke has been studying animals that disperse seeds for nearly 15 years. His previous research has shown that without animal seed dispersal, trees have lower survival rates and a harder time keeping up with environmental changes.

    “We’re now thinking more about the roles that animals might play in affecting the climate through seed dispersal,” Fricke says. “We know that in tropical forests, where more than three-quarters of trees rely on animals for seed dispersal, the decline of seed dispersal could affect not just the biodiversity of forests, but how they bounce back from deforestation. We also know that all around the world, animal populations are declining.”

    Regrowing forests is an often-cited way to mitigate the effects of climate change, but the influence of biodiversity on forests’ ability to absorb carbon has not been fully quantified, especially at larger scales. For their study, the researchers combined data from thousands of separate studies and used new tools for quantifying disparate but interconnected ecological processes. After analyzing data from more than 17,000 vegetation plots, the researchers decided to focus on tropical regions, looking at data on where seed-dispersing animals live, how many seeds each animal disperses, and how they affect germination.

    The researchers then incorporated data showing how human activity impacts different seed-dispersing animals’ presence and movement. They found, for example, that animals move less when they consume seeds in areas with a bigger human footprint.

    Combining all that data, the researchers created an index of seed-dispersal disruption that revealed a link between human activities and declines in animal seed dispersal. They then analyzed the relationship between that index and records of carbon accumulation in naturally regrowing tropical forests over time, controlling for factors like drought conditions, the prevalence of fires, and the presence of grazing livestock. (A stylized version of this kind of regression appears in the sketch below.)

    “It was a big task to bring data from thousands of field studies together into a map of the disruption of seed dispersal,” Fricke says. “But it lets us go beyond just asking what animals are there to actually quantifying the ecological roles those animals are playing and understanding how human pressures affect them.”

    The researchers acknowledged that the quality of animal biodiversity data could be improved, which introduces uncertainty into their findings. They also note that other processes, such as pollination, seed predation, and competition, influence seed dispersal and can constrain forest regrowth. Still, the findings were in line with recent estimates.

    “What’s particularly new about this study is we’re actually getting the numbers around these effects,” Fricke says. “Finding that seed dispersal disruption explains a fourfold difference in carbon absorption across the thousands of tropical regrowth sites included in the study points to seed dispersers as a major lever on tropical forest carbon.”

    Quantifying lost carbon

    In forests identified as potential regrowth sites, the researchers found seed-dispersal declines were linked to reductions in carbon absorption averaging 1.8 metric tons per hectare each year, equal to a 57 percent reduction in regrowth.

    The researchers say the results show natural regrowth projects will be more impactful in landscapes where seed-dispersing animals have been less disrupted, including areas that were recently deforested, are near high-integrity forests, or have higher tree cover.

    “In the discussion around planting trees versus allowing trees to regrow naturally, regrowth is basically free, whereas planting trees costs money, and it also leads to less diverse forests,” Terrer says. “With these results, now we can understand where natural regrowth can happen effectively because there are animals planting the seeds for free, and we also can identify areas where, because animals are affected, natural regrowth is not going to happen, and therefore planting trees actively is necessary.”

    To support seed-dispersing animals, the researchers encourage interventions that protect or improve their habitats and that reduce pressures on species, ranging from wildlife corridors to restrictions on wildlife trade. Restoring the ecological roles of seed dispersers is also possible by reintroducing seed-dispersing species where they’ve been lost or by planting certain trees that attract those animals.

    The findings could also make modeling the climate impact of naturally regrowing forests more accurate. “Overlooking the impact of seed-dispersal disruption may overestimate natural regrowth potential in many areas and underestimate it in others,” the authors write.

    The researchers believe the findings open up new avenues of inquiry for the field.

    “Forests provide a huge climate subsidy by sequestering about a third of all human carbon emissions,” Terrer says. “Tropical forests are by far the most important carbon sink globally, but in the last few decades, their ability to sequester carbon has been declining. We will next explore how much of that decline is due to an increase in extreme droughts or fires versus declines in animal seed dispersal.”

    Overall, the researchers hope the study helps improve our understanding of the planet’s complex ecological processes.

    “When we lose our animals, we’re losing the ecological infrastructure that keeps our tropical forests healthy and resilient,” Fricke says.

    The research was supported by the MIT Climate and Sustainability Consortium, the Government of Portugal, and the Bezos Earth Fund.
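    The analysis style described above can be sketched in a few lines of Python: regress plot-level carbon accumulation on a seed-dispersal-disruption index while controlling for covariates such as drought, fire, and livestock. The dataset, column names, and values below are hypothetical, standing in for the study’s thousands of vegetation plots.

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical plot-level data (the real study uses thousands of plots).
    plots = pd.DataFrame({
        "carbon_t_ha_yr": [3.1, 1.2, 2.4, 0.9, 2.8, 1.5, 3.3, 1.1],
        "dispersal_disruption": [0.1, 0.8, 0.3, 0.9, 0.2, 0.6, 0.1, 0.7],
        "drought_index": [0.2, 0.5, 0.1, 0.6, 0.3, 0.4, 0.2, 0.5],
        "fire_frequency": [0, 1, 0, 2, 0, 1, 0, 2],
        "livestock_present": [0, 1, 1, 1, 0, 0, 0, 1],
    })

    # OLS with controls: the coefficient on dispersal_disruption estimates the
    # carbon-accumulation penalty associated with losing seed dispersers.
    model = smf.ols(
        "carbon_t_ha_yr ~ dispersal_disruption + drought_index"
        " + fire_frequency + livestock_present",
        data=plots,
    ).fit()
    print(model.params["dispersal_disruption"])
    ```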

  • Theory-guided strategy expands the scope of measurable quantum interactions

    A new theory-guided framework could help scientists probe the properties of new semiconductors for next-generation microelectronic devices, or discover materials that boost the performance of quantum computers.

    Research to develop new or better materials typically involves investigating properties that can be reliably measured with existing lab equipment, but this represents just a fraction of the properties that scientists could potentially probe in principle. Some properties remain effectively “invisible” because they are too difficult to capture directly with existing methods. Take electron-phonon interaction — this property plays a critical role in a material’s electrical, thermal, optical, and superconducting properties, but directly capturing it using existing techniques is notoriously challenging.

    Now, MIT researchers have proposed a theoretically justified approach that could turn this challenge into an opportunity. Their method reinterprets an often-overlooked interference effect in neutron scattering as a potential direct probe of electron-phonon coupling strength.

    The measurement involves two interaction effects that neutrons create in the material. The researchers show that, by deliberately designing their experiment to leverage the interference between the two interactions, they can capture the strength of a material’s electron-phonon interaction.

    The researchers’ theory-informed methodology could be used to shape the design of future experiments, opening the door to measuring new quantities that were previously out of reach.

    “Rather than discovering new spectroscopy techniques by pure accident, we can use theory to justify and inform the design of our experiments and our physical equipment,” says Mingda Li, the Class of 1947 Career Development Professor and an associate professor of nuclear science and engineering, and senior author of a paper on this experimental method.

    Li is joined on the paper by co-lead authors Chuliang Fu, an MIT postdoc; Phum Siriviboon and Artittaya Boonkird, both MIT graduate students; as well as others at MIT, the National Institute of Standards and Technology, the University of California at Riverside, Michigan State University, and Oak Ridge National Laboratory. The research appears this week in Materials Today Physics.

    Investigating interference

    Neutron scattering is a powerful measurement technique that involves aiming a beam of neutrons at a material and studying how the neutrons are scattered after they strike it. The method is ideal for measuring a material’s atomic structure and magnetic properties.

    When neutrons collide with the material sample, they interact with it through two different mechanisms, creating a nuclear interaction and a magnetic interaction. These interactions can interfere with each other.

    “The scientific community has known about this interference effect for a long time, but researchers tend to view it as a complication that can obscure measurement signals. So it hasn’t received much focused attention,” Fu says.

    The team and their collaborators took a conceptual “leap of faith” and decided to explore this oft-overlooked interference effect more deeply. They flipped the traditional materials research approach on its head by starting with a multifaceted theoretical analysis, exploring what happens inside a material when the nuclear interaction and magnetic interaction interfere with each other. Their analysis revealed that this interference pattern is directly proportional to the strength of the material’s electron-phonon interaction (sketched schematically below).

    “This makes the interference effect a probe we can use to detect this interaction,” explains Siriviboon.

    Electron-phonon interactions play a role in a wide range of material properties. They affect how heat flows through a material, impact a material’s ability to absorb and emit light, and can even lead to superconductivity. But the complexity of these interactions makes them hard to directly measure using existing experimental techniques, so researchers often rely on less precise, indirect methods. Leveraging this interference effect enables direct measurement of the electron-phonon interaction, a major advantage over other approaches.

    “Being able to directly measure the electron-phonon interaction opens the door to many new possibilities,” says Boonkird.

    Rethinking materials research

    Based on their theoretical insights, the researchers designed an experimental setup to demonstrate their approach. Since the available equipment wasn’t powerful enough for this type of neutron scattering experiment, they were only able to capture a weak electron-phonon interaction signal — but the results were clear enough to support their theory.

    “These results justify the need for a new facility where the equipment might be 100 to 1,000 times more powerful, enabling scientists to clearly resolve the signal and measure the interaction,” adds Landry.

    With improved neutron scattering facilities, like those proposed for the upcoming Second Target Station at Oak Ridge National Laboratory, this experimental method could be an effective technique for measuring many crucial material properties. For instance, by helping scientists identify and harness better semiconductors, this approach could enable more energy-efficient appliances, faster wireless communication devices, and more reliable medical equipment like pacemakers and MRI scanners.

    Ultimately, the team sees this work as a broader message about the need to rethink the materials research process.

    “Using theoretical insights to design experimental setups in advance can help us redefine the properties we can measure,” Fu says.

    To that end, the team and their collaborators are currently exploring other types of interactions they could leverage to investigate additional material properties.

    “This is a very interesting paper,” says Jon Taylor, director of the neutron scattering division at Oak Ridge National Laboratory, who was not involved with this research. “It would be interesting to have a neutron scattering method that is directly sensitive to charge lattice interactions or more generally electronic effects that were not just magnetic moments. It seems that such an effect is expectedly rather small, so facilities like STS could really help develop that fundamental understanding of the interaction and also leverage such effects routinely for research.”

    This work is funded, in part, by the U.S. Department of Energy and the National Science Foundation.
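    A schematic way to see why interference can serve as a probe: when two scattering amplitudes add coherently, the measured intensity picks up a cross term. Per the study, that cross term is the quantity related to the electron-phonon coupling; the notation in this LaTeX sketch is illustrative, not the paper’s.

    ```latex
    % Coherent sum of nuclear and magnetic scattering amplitudes (requires amsmath):
    \[
      I \propto \left| A_{\mathrm{nuc}} + A_{\mathrm{mag}} \right|^{2}
        = \left| A_{\mathrm{nuc}} \right|^{2} + \left| A_{\mathrm{mag}} \right|^{2}
        + \underbrace{2\,\mathrm{Re}\left( A_{\mathrm{nuc}}^{*} A_{\mathrm{mag}} \right)}_{\text{interference term}}
    \]
    % The first two terms are the conventional nuclear and magnetic signals;
    % isolating the cross term yields the signal the MIT team relates to the
    % electron-phonon coupling strength.
    ```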

  • Model predicts long-term effects of nuclear waste on underground disposal systems

    As countries across the world experience a resurgence in nuclear energy projects, the questions of where and how to dispose of nuclear waste remain as politically fraught as ever. The United States, for instance, has indefinitely stalled its only long-term underground nuclear waste repository. Scientists are using both modeling and experimental methods to study the effects of underground nuclear waste disposal and, ultimately, they hope, to build public trust in the decision-making process.

    New research from scientists at MIT, Lawrence Berkeley National Lab, and the University of Orléans makes progress in that direction. The study shows that simulations of underground nuclear waste interactions, generated by new, high-performance-computing software, aligned well with experimental results from a research facility in Switzerland.

    The study, which was co-authored by MIT PhD student Dauren Sarsenbayev and Assistant Professor Haruko Wainwright, along with Christophe Tournassat and Carl Steefel, appears in the journal PNAS.

    “These powerful new computational tools, coupled with real-world experiments like those at the Mont Terri research site in Switzerland, help us understand how radionuclides will migrate in coupled underground systems,” says Sarsenbayev, who is first author of the new study.

    The authors hope the research will improve confidence among policymakers and the public in the long-term safety of underground nuclear waste disposal.

    “This research — coupling both computation and experiments — is important to improve our confidence in waste disposal safety assessments,” says Wainwright. “With nuclear energy re-emerging as a key source for tackling climate change and ensuring energy security, it is critical to validate disposal pathways.”

    Comparing simulations with experiments

    Disposing of nuclear waste in deep underground geological formations is currently considered the safest long-term solution for managing high-level radioactive waste. As such, much effort has been put into studying the migration behaviors of radionuclides from nuclear waste within various natural and engineered geological materials.

    Since its founding in 1996, the Mont Terri research site in northern Switzerland has served as an important test bed for an international consortium of researchers interested in studying materials like Opalinus clay — a thick, water-tight claystone abundant in the tunneled areas of the mountain.

    “It is widely regarded as one of the most valuable real-world experiment sites because it provides us with decades of datasets around the interactions of cement and clay, and those are the key materials proposed to be used by countries across the world for engineered barrier systems and geological repositories for nuclear waste,” explains Sarsenbayev.

    For their study, Sarsenbayev and Wainwright collaborated with co-authors Tournassat and Steefel, who have developed high-performance computing software to improve modeling of interactions between the nuclear waste and both engineered and natural materials.

    To date, several challenges have limited scientists’ understanding of how nuclear waste reacts with cement-clay barriers. For one thing, the barriers are made up of irregularly mixed materials deep underground. Additionally, the existing class of models commonly used to simulate radionuclide interactions with cement-clay does not take into account electrostatic effects associated with the negatively charged clay minerals in the barriers.

    Tournassat and Steefel’s new software accounts for electrostatic effects, making it the only one that can simulate those interactions in three-dimensional space. The software, called CrunchODiTi, was developed from established software known as CrunchFlow and was most recently updated this year. It is designed to be run on many high-performance computers at once in parallel. (A much-simplified toy model of the kind of transport problem such software solves appears in the sketch below.)

    For the study, the researchers looked at a 13-year-old experiment, with an initial focus on cement-clay rock interactions. Within the last several years, a mix of both negatively and positively charged ions was added to the borehole located near the center of the cement emplaced in the formation. The researchers focused on a 1-centimeter-thick zone between the radionuclides and cement-clay referred to as the “skin.” They compared their experimental results to the software simulation, finding the two datasets aligned.

    “The results are quite significant because previously, these models wouldn’t fit field data very well,” Sarsenbayev says. “It’s interesting how fine-scale phenomena at the ‘skin’ between cement and clay, the physical and chemical properties of which changes over time, could be used to reconcile the experimental and simulation data.”

    The experimental results showed the model successfully accounted for electrostatic effects associated with the clay-rich formation and the interaction between materials in Mont Terri over time.

    “This is all driven by decades of work to understand what happens at these interfaces,” Sarsenbayev says. “It’s been hypothesized that there is mineral precipitation and porosity clogging at this interface, and our results strongly suggest that.”

    “This application requires millions of degrees of freedom because these multibarrier systems require high resolution and a lot of computational power,” Sarsenbayev says. “This software is really ideal for the Mont Terri experiment.”

    Assessing waste disposal plans

    The new model could now replace older models that have been used to conduct safety and performance assessments of underground geological repositories.

    “If the U.S. eventually decides to dispose of nuclear waste in a geological repository, then these models could dictate the most appropriate materials to use,” Sarsenbayev says. “For instance, right now clay is considered an appropriate storage material, but salt formations are another potential medium that could be used. These models allow us to see the fate of radionuclides over millennia. We can use them to understand interactions at timespans that vary from months to years to many millions of years.”

    Sarsenbayev says the model is reasonably accessible to other researchers and that future efforts may focus on the use of machine learning to develop less computationally expensive surrogate models.

    Further data from the experiment will be available later this month. The team plans to compare those data to additional simulations. “Our collaborators will basically get this block of cement and clay, and they’ll be able to run experiments to determine the exact thickness of the skin along with all of the minerals and processes present at this interface,” Sarsenbayev says. “It’s a huge project and it takes time, but we wanted to share initial data and this software as soon as we could.”

    For now, the researchers hope their study leads to a long-term solution for storing nuclear waste that policymakers and the public can support.

    “This is an interdisciplinary study that includes real-world experiments showing we’re able to predict radionuclides’ fate in the subsurface,” Sarsenbayev says. “The motto of MIT’s Department of Nuclear Science and Engineering is ‘Science. Systems. Society.’ I think this merges all three domains.”
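    As a loose illustration of the class of models involved, the Python sketch below solves a one-dimensional diffusion problem for a radionuclide migrating into a clay barrier, with a retardation factor standing in for sorption. It is a deliberately simplified toy, not CrunchODiTi: it ignores the electrostatic and three-dimensional effects that software was built to capture, and every parameter is a placeholder.

    ```python
    import numpy as np

    nx, dx = 200, 1e-3            # 20 cm of barrier, 1 mm grid cells
    D = 1e-10                     # effective diffusivity (m^2/s), placeholder
    R = 50.0                      # retardation factor from sorption on clay
    dt = 0.4 * dx**2 * R / D      # explicit scheme stable when (D/R)*dt/dx^2 <= 0.5

    c = np.zeros(nx)
    c[0] = 1.0                    # normalized concentration held at the waste interface

    for _ in range(5_000):        # ~30 years of simulated time with these numbers
        c[1:-1] += (D / R) * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
        c[0], c[-1] = 1.0, c[-2]  # fixed source; zero-gradient far boundary

    depth_cm = np.argmax(c < 0.01) * dx * 100
    print(f"depth where concentration falls below 1%: {depth_cm:.1f} cm")
    ```

    Even this toy shows why the real problem is computationally demanding: resolving a millimeter-scale “skin” over million-year timespans in three dimensions multiplies the grid cells and time steps into the millions of degrees of freedom the researchers describe.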