More stories

  • Scientists uncover the amazing way sandgrouse hold water in their feathers

    Many birds’ feathers are remarkably efficient at shedding water — so much so that “like water off a duck’s back” is a common expression. Much more unusual are the belly feathers of the sandgrouse, especially the Namaqua sandgrouse, which absorb and retain water so efficiently that a male bird can fly more than 20 kilometers from a watering hole back to the nest and still carry enough water in his feathers for the chicks to drink and sustain themselves in the searing deserts of Namibia, Botswana, and South Africa.

    How do those feathers work? While scientists had inferred a rough picture, it took the latest tools of microscopy, and patient work with a collection of sandgrouse feathers, to unlock the unique structural details that enable the feathers to hold water. The findings appear today in the Journal of the Royal Society Interface, in a paper by Lorna Gibson, the Matoula S. Salapatas Professor of Materials Science and Engineering and a professor of mechanical engineering at MIT, and Professor Jochen Mueller of Johns Hopkins University.

    The unique water-carrying ability of sandgrouse feathers was first reported back in 1896, Gibson says, by E.G.B. Meade-Waldo, who was breeding the birds in captivity. “He saw them behaving like this, and nobody believed him! I mean, it just sounded so outlandish,” Gibson says.

    In 1967, Tom Cade and Gordon MacLean reported detailed observations of the birds at watering holes, in a study that proved the unique behavior was indeed real. The scientists found that male sandgrouse feathers could hold about 25 milliliters of water, or about a tenth of a cup, after the bird had spent about five minutes dipping in the water and fluffing its feathers.

    About half of that amount can evaporate during the male bird’s half-hour-long flight back to the nest, where the chicks, which cannot fly for about their first month, drink the remainder straight from the feathers.

    Cade and MacLean “had part of the story,” Gibson says, but the tools didn’t exist at the time to carry out the detailed imaging of the feather structures that the new study was able to do.

    Gibson and Mueller carried out their study using scanning electron microscopy, micro-computed tomography, and video imaging. They borrowed Namaqua sandgrouse belly feathers from Harvard University’s Museum of Comparative Zoology, which holds a collection of specimens representing about 80 percent of the world’s bird species.

    Bird feathers in general have a central shaft, from which smaller barbs extend, and then smaller barbules extend out from those. Sandgrouse feathers are structured differently, however. In the inner zone of the feather, the barbules have a helically coiled structure close to their base and then a straight extension. In the outer zone of the feather, the barbules lack the helical coil and are simply straight. Both parts lack the grooves and hooks that hold the vane of contour feathers together in most other birds.
    Video of water spreading through the specialized sandgrouse feathers, under magnification, shows the feather’s barbules uncoiling and spreading as they become wet. Initially, most barbules in the outer zone of the feather form tubular features. Credit: Specimen #142928, Museum of Comparative Zoology, Harvard University © President and Fellows of Harvard College.

    When wetted, the coiled portions of the barbules unwind and rotate to be perpendicular to the vane, producing a dense forest of fibers that can hold water through capillary action. At the same time, the barbules in the outer zone curl inward, helping to hold the water in.

    The microscopy techniques used in the new study allowed the dimensions of the different parts of the feather to be measured. In the inner zone, the barb shafts are large and stiff enough to provide a rigid base about which the other parts of the feather deform, and the barbules are small and flexible enough that surface tension is sufficient to bend the straight extensions into tear-like structures that hold water. And in the outer zone, the barb shafts and barbules are smaller still, allowing them to curl around the inner zone, further retaining water.

    While previous work had suggested that surface tension produced the water retention characteristics, “what we did was make measurements of the dimensions and do some calculations to show that that’s what is actually happening,” Gibson says. Her group’s work demonstrated that the varying stiffnesses of the different feather parts play a key role in their ability to hold water.
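    The balance Gibson describes — surface tension strong enough to bend the slender barbules but not the stiff barb shafts — can be illustrated with a rough order-of-magnitude calculation. All the values below (keratin modulus, fiber radii) are generic illustrative assumptions, not measurements from the paper:

```python
import math

# Rough order-of-magnitude sketch (all values are generic assumptions,
# not measurements from the paper): water's surface tension can bend a
# slender barbule, but not the much thicker barb shaft.
E = 3e9           # Young's modulus of feather keratin, Pa (assumed)
GAMMA = 0.072     # surface tension of water, N/m

def elastocapillary_length(radius_m):
    """Fiber length beyond which capillary forces can bend a cylinder.

    Balances the capillary moment (~ GAMMA * L^2) against the elastic
    restoring moment (~ E * I / L), giving L ~ (E * I / GAMMA)**(1/3).
    """
    I = math.pi * radius_m**4 / 4      # second moment of area of a cylinder
    return (E * I / GAMMA) ** (1 / 3)

barbule = elastocapillary_length(1e-6)    # ~1 micron radius (assumed)
barb = elastocapillary_length(50e-6)      # ~50 micron radius (assumed)
print(f"barbule bends beyond ~{barbule * 1e6:.0f} um; barb beyond ~{barb * 1e3:.1f} mm")
# A barbule hundreds of microns long far exceeds its threshold, so
# surface tension can deform it; a barb shaft does not exceed its much
# larger threshold, so it stays rigid and anchors the deformation.
```

    The cube-root scaling means a fiftyfold increase in radius raises the bending threshold by orders of magnitude, which is consistent with the rigid-base, flexible-barbule picture in the study.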

    The study was mostly driven by intellectual curiosity about this unique behavioral phenomenon, Gibson says. “We just wanted to see how it works. The whole story just seemed so interesting.” But she says it might lead to some useful applications. For example, in desert regions where water is scarce but fog and dew regularly occur, such as in Chile’s Atacama Desert, some adaptation of this feather structure might be incorporated into the systems of huge nets that are used to collect water. “You could imagine this could be a way to improve those systems,” she says. “A material with this kind of structure might be more effective at fog harvesting and holding the water.”

    “This fascinating and in-depth study reveals how the different parts of the sandgrouse’s belly feathers — including the microscopic barb shafts and barbules — work together to hold water,” says Mary Caswell Stoddard, an evolutionary biologist at Princeton University, who was not associated with this study. “By using a suite of advanced imaging techniques to describe the belly feathers and estimate their bending stiffnesses, Mueller and Gibson add rich new details to our understanding of the sandgrouse’s water-carrying feathers. … This study may inspire others to take a closer look at diverse feather microstructures across bird species — and to wonder whether these structures, as in sandgrouse, help support unusual or surprising functions.”

    The work was partly supported by the National Science Foundation and the Matoula S. Salapatas Professorship in Materials Science and Engineering at MIT.

  • A new microneedle-based drug delivery technique for plants

    Worsening environmental conditions caused by climate change, an ever-growing human population, scarcity of arable land, and limited resources are pressuring the agriculture industry to adopt more sustainable and precise practices that use resources (e.g., water, fertilizers, and pesticides) more efficiently and mitigate environmental impacts. Developing delivery systems that efficiently deploy agrochemicals such as micronutrients, pesticides, and antibiotics in crops is therefore crucial: it will help ensure high productivity and high produce quality while minimizing the waste of resources.

    Now, researchers in Singapore and the U.S. have developed the first-ever microneedle-based drug delivery technique for plants. The method can be used to precisely deliver controlled amounts of agrochemicals to specific plant tissues for research purposes. When applied in the field, it could one day be used in precision agriculture to improve crop quality and disease management.

    The work is led by researchers from the Disruptive and Sustainable Technologies for Agricultural Precision (DiSTAP) interdisciplinary research group at the Singapore-MIT Alliance for Research and Technology (SMART), MIT’s research enterprise in Singapore, and their collaborators from MIT and the Temasek Life Sciences Laboratory (TLL).

    Current standard practices for agrochemical application in plants, such as foliar spray, are inefficient due to off-target application, quick runoff in the rain, and rapid degradation of the active ingredients. These practices also cause significant detrimental environmental side effects, such as water and soil contamination, biodiversity loss, and degraded ecosystems, as well as public health concerns, such as respiratory problems, chemical exposure, and food contamination.

    The novel silk-based microneedles technique circumvents these limitations by deploying and targeting a known amount of payload directly into a plant’s deep tissues, which will lead to higher efficacy of plant growth and help with disease management. The technique is minimally invasive, as it delivers the compound without causing long-term damage to the plants, and is environmentally sustainable. It minimizes resource wastage and mitigates the adverse side effects caused by agrochemical contamination of the environment. Additionally, it will help foster precise agricultural practices and provide new tools to study plants and design crop traits, helping to ensure food security.

    Described in a paper titled “Drug Delivery in Plants Using Silk Microneedles,” published in a recent issue of Advanced Materials, the research presents the first polymeric microneedles used to deliver small compounds to a wide variety of plants, along with an analysis of the plants’ response to biomaterial injection. Through gene expression analysis, the researchers could closely examine the plants’ reactions to drug delivery following microneedle injection. Minimal scar and callus formation were observed, suggesting minimal injection-induced wounding. The proof of concept provided in this study opens the door to applying plant microneedles in plant biology and agriculture, enabling new means to regulate plant physiology and study metabolism via efficient and effective delivery of payloads.

    The study optimized the design of microneedles to target the systemic transport system in Arabidopsis (mouse-ear cress), the chosen model plant. Gibberellic acid (GA3), a widely used plant growth regulator in agriculture, was selected for the delivery. The researchers found that delivering GA3 through microneedles was more effective in promoting growth than traditional methods (such as foliar spray). They then confirmed the effectiveness using genetic methods and demonstrated that the technique is applicable to various plant species, including vegetables, cereals, soybeans, and rice.

    Professor Benedetto Marelli, co-corresponding author of the paper, principal investigator at DiSTAP, and associate professor of civil and environmental engineering at MIT, shares, “The technique saves resources as compared to current methods of agrochemical delivery, which suffer from wastage. During the application, the microneedles break through the tissue barriers and release compounds directly inside the plants, avoiding agrochemical losses. The technique also allows for precise control of the amounts of the agrochemical used, ensuring high-tech precision agriculture and crop growth to optimize yield.”

    “The first-of-its-kind technique is revolutionary for the agriculture industry. It also minimizes resource wastage and environmental contamination. In the future, with automated microneedle application as a possibility, the technique may be used in high-tech outdoor and indoor farms for precise agrochemical delivery and disease management,” adds Yunteng Cao, the first author of the paper and postdoc at MIT.

    “This work also highlights the importance of using genetic tools to study plant responses to biomaterials. Analyzing these responses at the genetic level offers a comprehensive understanding of these responses, thereby serving as a guide for the development of future biomaterials that can be used across the agri-food industry,” says Sally Koh, the co-first author of this work and PhD candidate from NUS and TLL.

    The future seems promising as Professor Daisuke Urano, co-corresponding author of the paper, TLL principal investigator, and NUS adjunct assistant professor elaborates, “Our research has validated the use of silk-based microneedles for agrochemical application, and we look forward to further developing the technique and microneedle design into a scalable model for manufacturing and commercialization. At the same time, we are also actively investigating potential applications that could have a significant impact on society.”

    The study of drug delivery in plants using silk microneedles expanded upon previous research supervised by Marelli. The original idea was conceived by SMART and MIT researchers: Marelli, Cao, and Professor Nam-Hai Chua, co-lead principal investigator at DiSTAP. Researchers from TLL and the National University of Singapore, Professor Daisuke Urano and Koh, joined the study to contribute biological perspectives. The research was carried out by SMART and supported by the National Research Foundation Singapore (NRF) under its Campus for Research Excellence And Technological Enterprise (CREATE) program.

    SMART was established by MIT and NRF in 2007. SMART is the first entity in CREATE, developed by NRF. SMART serves as an intellectual and innovation hub for research interactions between MIT and Singapore, undertaking cutting-edge research in areas of interest to both parties. SMART currently comprises an Innovation Center and interdisciplinary research groups: Antimicrobial Resistance, Critical Analytics for Manufacturing Personalized-Medicine, DiSTAP, Future Urban Mobility, and Low Energy Electronic Systems.

  • Study: Shutting down nuclear power could increase air pollution

    Nearly 20 percent of today’s electricity in the United States comes from nuclear power. The U.S. has the largest nuclear fleet in the world, with 92 reactors scattered around the country. Many of these power plants have run for more than half a century and are approaching the end of their expected lifetimes.

    Policymakers are debating whether to retire the aging reactors or reinforce their structures to continue producing nuclear energy, which many consider a low-carbon alternative to climate-warming coal, oil, and natural gas.

    Now, MIT researchers say there’s another factor to consider in weighing the future of nuclear power: air quality. In addition to being a low carbon-emitting source, nuclear power is relatively clean in terms of the air pollution it generates. Without nuclear power, how would the pattern of air pollution shift, and who would feel its effects?

    The MIT team took on these questions in a new study appearing today in Nature Energy. They lay out a scenario in which every nuclear power plant in the country has shut down, and consider how other sources such as coal, natural gas, and renewable energy would fill the resulting energy needs throughout an entire year.

    Their analysis reveals that indeed, air pollution would increase, as coal, gas, and oil sources ramp up to compensate for nuclear power’s absence. This in itself may not be surprising, but the team has put numbers to the prediction, estimating that the increase in air pollution would have serious health effects, resulting in an additional 5,200 pollution-related deaths over a single year.

    If, however, more renewable energy sources become available to supply the energy grid, as they are expected to by the year 2030, air pollution would be curtailed, though not entirely. The team found that even under this renewables-rich scenario, there is still a slight increase in air pollution in some parts of the country, resulting in a total of 260 pollution-related deaths over one year.

    When they looked at the populations directly affected by the increased pollution, they found that Black or African American communities — a disproportionate number of whom live near fossil-fuel plants — experienced the greatest exposure.

    “This adds one more layer to the environmental health and social impacts equation when you’re thinking about nuclear shutdowns, where the conversation often focuses on local risks due to accidents and mining or long-term climate impacts,” says lead author Lyssa Freese, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS).

    “In the debate over keeping nuclear power plants open, air quality has not been a focus of that discussion,” adds study author Noelle Selin, a professor in MIT’s Institute for Data, Systems, and Society (IDSS) and EAPS. “What we found was that air pollution from fossil fuel plants is so damaging, that anything that increases it, such as a nuclear shutdown, is going to have substantial impacts, and for some people more than others.”

    The study’s MIT-affiliated co-authors also include Principal Research Scientist Sebastian Eastham and Guillaume Chossière SM ’17, PhD ’20, along with Alan Jenn of the University of California at Davis.

    Future phase-outs

    When nuclear power plants have closed in the past, fossil fuel use increased in response. In 1985, the closure of reactors in the Tennessee Valley prompted a spike in coal use, while the 2012 shutdown of a plant in California led to an increase in natural gas. In Germany, where nuclear power has almost completely been phased out, coal-fired power initially increased to fill the gap.

    Noting these trends, the MIT team wondered how the U.S. energy grid would respond if nuclear power were completely phased out.

    “We wanted to think about what future changes were expected in the energy grid,” Freese says. “We knew that coal use was declining, and there was a lot of work already looking at the impact that would have on air quality. But no one had looked at air quality and nuclear power, which we also noticed was on the decline.”

    In the new study, the team used an energy grid dispatch model developed by Jenn to assess how the U.S. energy system would respond to a shutdown of nuclear power. The model simulates the production of every power plant in the country and runs continuously to estimate, hour by hour, the energy demands in 64 regions across the country.

    Much like the way the actual energy market operates, the model chooses to turn a plant’s production up or down based on cost: Plants producing the cheapest energy at any given time are given priority to supply the grid over more costly energy sources.

    The team fed the model available data on each plant’s changing emissions and energy costs throughout an entire year. They then ran the model under different scenarios, including: an energy grid with no nuclear power, a baseline grid similar to today’s that includes nuclear power, and a grid with no nuclear power that also incorporates the additional renewable sources that are expected to be added by 2030.

    They combined each simulation with an atmospheric chemistry model to simulate how each plant’s various emissions travel around the country and to overlay these tracks onto maps of population density. For populations in the path of pollution, they calculated the risk of premature death based on their degree of exposure.
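    The cost-based dispatch logic described above can be sketched in a few lines. The plants, marginal costs, and demand below are invented for illustration only; the actual model developed by Jenn simulates every U.S. power plant, hour by hour, across 64 regions:

```python
# Toy merit-order dispatch: meet each hour's demand with the cheapest
# available plants first. Plants, costs, and demand are invented for
# illustration; the study's model is far more detailed.
plants = [
    # (name, marginal cost in $/MWh, capacity in MW)
    ("nuclear", 10, 1000),
    ("wind",     0,  400),
    ("gas",     40,  800),
    ("coal",    30,  600),
]

def dispatch(demand_mw, fleet):
    """Return {plant: MW dispatched}, filling demand in ascending cost order."""
    output = {}
    remaining = demand_mw
    for name, cost, cap in sorted(fleet, key=lambda p: p[1]):
        take = min(cap, remaining)
        if take > 0:
            output[name] = take
            remaining -= take
    return output

# With nuclear available, the most expensive plant (gas) never runs:
print(dispatch(1600, plants))
# Remove nuclear, and coal and gas must fill the gap, raising emissions:
no_nuclear = [p for p in plants if p[0] != "nuclear"]
print(dispatch(1600, no_nuclear))
```

    Removing the nuclear entry forces the dirtier, costlier plants to cover the same demand — the same mechanism that drives the pollution increase in the study's no-nuclear scenario.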

    System response

    Video: Courtesy of the researchers, edited by MIT News

    Their analysis showed a clear pattern: Without nuclear power, air pollution worsened in general, mainly affecting regions on the East Coast, where nuclear power plants are mostly concentrated. Without those plants, the team observed an uptick in production from coal and gas plants, resulting in 5,200 pollution-related deaths across the country, compared to the baseline scenario.

    They also calculated that more people are likely to die prematurely due to climate impacts from the increase in carbon dioxide emissions as the grid compensates for nuclear power’s absence. The climate-related effects from this additional influx of carbon dioxide could lead to 160,000 additional deaths over the next century.

    “We need to be thoughtful about how we’re retiring nuclear power plants if we are trying to think about them as part of an energy system,” Freese says. “Shutting down something that doesn’t have direct emissions itself can still lead to increases in emissions, because the grid system will respond.”

    “This might mean that we need to deploy even more renewables, in order to fill the hole left by nuclear, which is essentially a zero-emissions energy source,” Selin adds. “Otherwise we will have a reduction in air quality that we weren’t necessarily counting on.”

    This study was supported, in part, by the U.S. Environmental Protection Agency.

  • Flow batteries for grid-scale energy storage

    In the coming decades, renewable energy sources such as solar and wind will increasingly dominate the conventional power grid. Because those sources only generate electricity when it’s sunny or windy, ensuring a reliable grid — one that can deliver power 24/7 — requires some means of storing electricity when supplies are abundant and delivering it later when they’re not. And because there can be hours and even days with no wind, for example, some energy storage devices must be able to store a large amount of electricity for a long time.

    A promising technology for performing that task is the flow battery, an electrochemical device that can store hundreds of megawatt-hours of energy — enough to keep thousands of homes running for many hours on a single charge. Flow batteries have the potential for long lifetimes and low costs in part due to their unusual design. In the everyday batteries used in phones and electric vehicles, the materials that store the electric charge are solid coatings on the electrodes. “A flow battery takes those solid-state charge-storage materials, dissolves them in electrolyte solutions, and then pumps the solutions through the electrodes,” says Fikile Brushett, an associate professor of chemical engineering at MIT. That design offers many benefits and poses a few challenges.

    Flow batteries: Design and operation

    A flow battery contains two substances that undergo electrochemical reactions in which electrons are transferred from one to the other. When the battery is being charged, the transfer of electrons forces the two substances into a state that’s “less energetically favorable” as it stores extra energy. (Think of a ball being pushed up to the top of a hill.) When the battery is being discharged, the transfer of electrons shifts the substances into a more energetically favorable state as the stored energy is released. (The ball is set free and allowed to roll down the hill.)

    At the core of a flow battery are two large tanks that hold liquid electrolytes, one positive and the other negative. Each electrolyte contains dissolved “active species” — atoms or molecules that will electrochemically react to release or store electrons. During charging, one species is “oxidized” (releases electrons), and the other is “reduced” (gains electrons); during discharging, they swap roles. Pumps are used to circulate the two electrolytes through separate electrodes, each made of a porous material that provides abundant surfaces on which the active species can react. A thin membrane between the adjacent electrodes keeps the two electrolytes from coming into direct contact and possibly reacting, which would release heat and waste energy that could otherwise be used on the grid.

    When the battery is being discharged, active species on the negative side oxidize, releasing electrons that flow through an external circuit to the positive side, causing the species there to be reduced. The flow of those electrons through the external circuit can power the grid. In addition to the movement of the electrons, “supporting” ions — other charged species in the electrolyte — pass through the membrane to help complete the reaction and keep the system electrically neutral.

    Once all the species have reacted and the battery is fully discharged, the system can be recharged. In that process, electricity from wind turbines, solar farms, and other generating sources drives the reverse reactions. The active species on the positive side oxidize to release electrons back through the wires to the negative side, where they rejoin their original active species. The battery is now reset and ready to send out more electricity when it’s needed. Brushett adds, “The battery can be cycled in this way over and over again for years on end.”

    Benefits and challenges

    A major advantage of this system design is that where the energy is stored (the tanks) is separated from where the electrochemical reactions occur (the so-called reactor, which includes the porous electrodes and membrane). As a result, the capacity of the battery — how much energy it can store — and its power — the rate at which it can be charged and discharged — can be adjusted separately. “If I want to have more capacity, I can just make the tanks bigger,” explains Kara Rodby PhD ’22, a former member of Brushett’s lab and now a technical analyst at Volta Energy Technologies. “And if I want to increase its power, I can increase the size of the reactor.” That flexibility makes it possible to design a flow battery to suit a particular application and to modify it if needs change in the future.
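    Rodby's point — that energy and power scale with different components — can be made concrete with a toy sizing calculation. The energy and power densities below are rough assumed values for illustration, not specifications of any real system:

```python
# Toy sizing: energy scales with tank volume, power with reactor (stack)
# area. Both density values below are rough assumptions, not real specs.
ENERGY_DENSITY_WH_PER_L = 25   # assumed electrolyte energy density, Wh/L
AREAL_POWER_W_PER_CM2 = 0.5    # assumed stack power density, W/cm^2

def battery_specs(tank_liters, stack_cm2):
    """Return (energy in kWh, power in kW) for a given tank and stack size."""
    energy_kwh = tank_liters * ENERGY_DENSITY_WH_PER_L / 1000
    power_kw = stack_cm2 * AREAL_POWER_W_PER_CM2 / 1000
    return energy_kwh, power_kw

# Doubling the tanks doubles the energy but leaves the power unchanged:
e1, p1 = battery_specs(10_000, 50_000)
e2, p2 = battery_specs(20_000, 50_000)
print(f"{e1:.0f} kWh / {p1:.0f} kW  ->  {e2:.0f} kWh / {p2:.0f} kW")
```

    In a conventional battery the two quantities are coupled through the same electrodes, which is why this independent tuning is distinctive to the flow architecture.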

    However, the electrolyte in a flow battery can degrade with time and use. While all batteries experience electrolyte degradation, flow batteries in particular suffer from a relatively faster form of degradation called “crossover.” The membrane is designed to allow small supporting ions to pass through and block the larger active species, but in reality, it isn’t perfectly selective. Some of the active species in one tank can sneak through (or “cross over”) and mix with the electrolyte in the other tank. The two active species may then chemically react, effectively discharging the battery. Even if they don’t, some of the active species is no longer in the first tank where it belongs, so the overall capacity of the battery is lower.

    Recovering capacity lost to crossover requires some sort of remediation — for example, replacing the electrolyte in one or both tanks or finding a way to reestablish the “oxidation states” of the active species in the two tanks. (Oxidation state is a number assigned to an atom or compound to tell if it has more or fewer electrons than it has when it’s in its neutral state.) Such remediation is more easily — and therefore more cost-effectively — executed in a flow battery because all the components are more easily accessed than they are in a conventional battery.
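    A toy model makes the effect of crossover, and of periodic remediation, easy to see. The per-cycle leak rate and the remediation schedule below are invented purely for illustration:

```python
# Toy crossover model: each cycle, a small fraction of active species
# leaks through the membrane and stops contributing to capacity;
# remediation (e.g., rebalancing the electrolytes) restores it.
# The rates and schedule are invented for illustration.
def capacity_after(cycles, crossover_per_cycle=0.001, remediate_every=None):
    """Fraction of original capacity remaining after a number of cycles."""
    capacity = 1.0
    for cycle in range(1, cycles + 1):
        capacity *= (1 - crossover_per_cycle)   # species lost from its tank
        if remediate_every and cycle % remediate_every == 0:
            capacity = 1.0                      # rebalancing recovers capacity
    return capacity

print(capacity_after(1000))                     # unremediated fade
print(capacity_after(1000, remediate_every=250))  # with periodic rebalancing
```

    Even a 0.1 percent loss per cycle compounds into a large capacity fade over a thousand cycles, which is why the ease of remediation in flow batteries matters economically.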

    The state of the art: Vanadium

    A critical factor in designing flow batteries is the selected chemistry. The two electrolytes can contain different chemicals, but today the most widely used setup has vanadium in different oxidation states on the two sides. That arrangement addresses the two major challenges with flow batteries.

    First, vanadium doesn’t degrade. “If you put 100 grams of vanadium into your battery and you come back in 100 years, you should be able to recover 100 grams of that vanadium — as long as the battery doesn’t have some sort of a physical leak,” says Brushett.

    And second, if some of the vanadium in one tank flows through the membrane to the other side, there is no permanent cross-contamination of the electrolytes, only a shift in the oxidation states, which is easily remediated by re-balancing the electrolyte volumes and restoring the oxidation state via a minor charge step. Most of today’s commercial systems include a pipe connecting the two vanadium tanks that automatically transfers a certain amount of electrolyte from one tank to the other when the two get out of balance.

    However, as the grid becomes increasingly dominated by renewables, more and more flow batteries will be needed to provide long-duration storage. Demand for vanadium will grow, and that will be a problem. “Vanadium is found around the world but in dilute amounts, and extracting it is difficult,” says Rodby. “So there are limited places — mostly in Russia, China, and South Africa — where it’s produced, and the supply chain isn’t reliable.” As a result, vanadium prices are both high and extremely volatile — an impediment to the broad deployment of the vanadium flow battery.

    Beyond vanadium

    The question then becomes: If not vanadium, then what? Researchers worldwide are trying to answer that question, and many are focusing on promising chemistries using materials that are more abundant and less expensive than vanadium. But it’s not that easy, notes Rodby. While other chemistries may offer lower initial capital costs, they may be more expensive to operate over time. They may require periodic servicing to rejuvenate one or both of their electrolytes. “You may even need to replace them, so you’re essentially incurring that initial (low) capital cost again and again,” says Rodby.

    Indeed, comparing the economics of different options is difficult because “there are so many dependent variables,” says Brushett. “A flow battery is an electrochemical system, which means that there are multiple components working together in order for the device to function. Because of that, if you are trying to improve a system — performance, cost, whatever — it’s very difficult because when you touch one thing, five other things change.”

    So how can we compare these new and emerging chemistries — in a meaningful way — with today’s vanadium systems? And how do we compare them with one another, so we know which ones are more promising and what the potential pitfalls are with each one? “Addressing those questions can help us decide where to focus our research and where to invest our research and development dollars now,” says Brushett.

    Techno-economic modeling as a guide

    A good way to understand and assess the economic viability of new and emerging energy technologies is using techno-economic modeling. With certain models, one can account for the capital cost of a defined system and — based on the system’s projected performance — the operating costs over time, generating a total cost discounted over the system’s lifetime. That result allows a potential purchaser to compare options on a “levelized cost of storage” basis.

    Using that approach, Rodby developed a framework for estimating the levelized cost for flow batteries. The framework includes a dynamic physical model of the battery that tracks its performance over time, including any changes in storage capacity. The calculated operating costs therefore cover all services required over decades of operation, including the remediation steps taken in response to species degradation and crossover.
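    The core of such a model can be sketched as a discounted-cash-flow calculation. Every input below is invented for illustration; Rodby's actual framework additionally tracks capacity fade, remediation, and many other cost drivers:

```python
# Minimal levelized-cost-of-storage sketch: discounted lifetime costs
# divided by discounted lifetime energy delivered. Every input below is
# invented; the real framework also models capacity fade and remediation.
def lcos(capex, annual_opex, annual_mwh, years=20, rate=0.07):
    """Return a simple levelized cost of storage in $/MWh."""
    disc_costs = float(capex)
    disc_energy = 0.0
    for y in range(1, years + 1):
        d = (1 + rate) ** y          # discount factor for year y
        disc_costs += annual_opex / d
        disc_energy += annual_mwh / d
    return disc_costs / disc_energy

# A chemistry with low capital cost but heavy periodic servicing can end
# up costlier over its lifetime than a pricier, maintenance-free one:
durable = lcos(capex=4_000_000, annual_opex=50_000, annual_mwh=3_000)
degrading = lcos(capex=2_500_000, annual_opex=250_000, annual_mwh=3_000)
print(f"durable: ${durable:.0f}/MWh, degrading: ${degrading:.0f}/MWh")
```

    This is exactly the trade-off raised in the "Beyond vanadium" discussion: a low sticker price can be undone by recurring electrolyte servicing and replacement costs.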

    Analyzing all possible chemistries would be impossible, so the researchers focused on certain classes. First, they narrowed the options down to those in which the active species are dissolved in water. “Aqueous systems are furthest along and are most likely to be successful commercially,” says Rodby. Next, they limited their analyses to “asymmetric” chemistries; that is, setups that use different materials in the two tanks. (As Brushett explains, vanadium is unusual in that using the same “parent” material in both tanks is rarely feasible.) Finally, they divided the possibilities into two classes: species that have a finite lifetime and species that have an infinite lifetime; that is, ones that degrade over time and ones that don’t.

    Results from their analyses aren’t clear-cut; there isn’t a particular chemistry that leads the pack. But they do provide general guidelines for choosing and pursuing the different options.

    Finite-lifetime materials

    While vanadium is a single element, the finite-lifetime materials are typically organic molecules made up of multiple elements, among them carbon. One advantage of organic molecules is that they can be synthesized in a lab and at an industrial scale, and the structure can be altered to suit a specific function. For example, the molecule can be made more soluble, so more will be present in the electrolyte and the energy density of the system will be greater; or it can be made bigger so it won’t fit through the membrane and cross to the other side. Finally, organic molecules can be made from simple, abundant, low-cost elements, potentially even waste streams from other industries.

    Despite those attractive features, there are two concerns. First, organic molecules would probably need to be made in a chemical plant, and upgrading the low-cost precursors as needed may prove to be more expensive than desired. Second, these molecules are large chemical structures that aren’t always very stable, so they’re prone to degradation. “So along with crossover, you now have a new degradation mechanism that occurs over time,” says Rodby. “Moreover, you may figure out the degradation process and how to reverse it in one type of organic molecule, but the process may be totally different in the next molecule you work on, making the discovery and development of each new chemistry require significant effort.”

    Research is ongoing, but at present, Rodby and Brushett find it challenging to make the case for the finite-lifetime chemistries, mostly based on their capital costs. Citing studies that have estimated the manufacturing costs of these materials, Rodby believes that current options cannot be made at low enough costs to be economically viable. “They’re cheaper than vanadium, but not cheap enough,” says Rodby.

    The results send an important message to researchers designing new chemistries using organic molecules: Be sure to consider operating challenges early on. Rodby and Brushett note that it’s often not until way down the “innovation pipeline” that researchers start to address practical questions concerning the long-term operation of a promising-looking system. The MIT team recommends that understanding the potential decay mechanisms and how they might be cost-effectively reversed or remediated should be an upfront design criterion.

    Infinite-lifetime species

    The infinite-lifetime species include materials that — like vanadium — are not going to decay. The most likely candidates are other metals; for example, iron or manganese. “These are commodity-scale chemicals that will certainly be low cost,” says Rodby.

    Here, the researchers found that there’s a wider “design space” of feasible options that could compete with vanadium. But there are still challenges to be addressed. While these species don’t degrade, they may trigger side reactions when used in a battery. For example, many metals catalyze the formation of hydrogen, which reduces efficiency and adds another form of capacity loss. While there are ways to deal with the hydrogen-evolution problem, a sufficiently low-cost and effective solution for high rates of this side reaction is still needed.

    In addition, crossover is still a problem requiring remediation steps. The researchers evaluated two methods of dealing with crossover in systems combining two types of infinite-lifetime species.

    The first is the “spectator strategy.” Here, both of the tanks contain both active species. Explains Brushett, “You have the same electrolyte mixture on both sides of the battery, but only one of the species is ever working and the other is a spectator.” As a result, crossover can be remediated in similar ways to those used in the vanadium flow battery. The drawback is that half of the active material in each tank is unavailable for storing charge, so it’s wasted. “You’ve essentially doubled your electrolyte cost on a per-unit energy basis,” says Rodby.

    The second method calls for making a membrane that is perfectly selective: It must let through only the supporting ion needed to maintain the electrical balance between the two sides. However, that approach increases cell resistance, hurting system efficiency. In addition, the membrane would need to be made of a special material — say, a ceramic composite — that would be extremely expensive based on current production methods and scales. Rodby notes that work on such membranes is under way, but the cost and performance metrics are “far off from where they’d need to be to make sense.”

    Time is of the essence

    The researchers stress the urgency of the climate change threat and the need to have grid-scale, long-duration storage systems at the ready. “There are many chemistries now being looked at,” says Rodby, “but we need to home in on some solutions that will actually be able to compete with vanadium and can be deployed soon and operated over the long term.”

    The techno-economic framework is intended to help guide that process. It can calculate the levelized cost of storage for specific designs for comparison with vanadium systems and with one another. It can identify critical gaps in knowledge related to long-term operation or remediation, thereby identifying technology development or experimental investigations that should be prioritized. And it can help determine whether the trade-off between lower upfront costs and greater operating costs makes sense in these next-generation chemistries.

    The good news, notes Rodby, is that advances achieved in research on one type of flow battery chemistry can often be applied to others. “A lot of the principles learned with vanadium can be translated to other systems,” she says. She believes that the field has advanced not only in understanding but also in the ability to design experiments that address problems common to all flow batteries, thereby helping to prepare the technology for its important role in grid-scale storage in the future.

    This research was supported by the MIT Energy Initiative. Kara Rodby PhD ’22 was supported by an ExxonMobil-MIT Energy Fellowship in 2021-22.

    This article appears in the Winter 2023 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • Tackling counterfeit seeds with “unclonable” labels

    Average crop yields in Africa are consistently far below those expected, and one significant reason is the prevalence of counterfeit seeds whose germination rates are far lower than those of the genuine ones. The World Bank estimates that as much as half of all seeds sold in some African countries are fake, which could help to account for crop production that is far below potential.

    There have been many attempts to prevent this counterfeiting through tracking labels, but none have proved effective; among other issues, such labels have been vulnerable to hacking because of the deterministic nature of their encoding systems. But now, a team of MIT researchers has come up with a kind of tiny, biodegradable tag that can be applied directly to the seeds themselves, and that provides a unique randomly created code that cannot be duplicated.

    The new system, which uses minuscule dots of silk-based material, each containing a unique combination of different chemical signatures, is described today in the journal Science Advances in a paper by MIT’s dean of engineering Anantha Chandrakasan, professor of civil and environmental engineering Benedetto Marelli, postdoc Hui Sun, and graduate student Saurav Maji.

    The problem of counterfeiting is an enormous one globally, the researchers point out, affecting everything from drugs to luxury goods, and many different systems have been developed to try to combat this. But there has been less attention to the problem in the area of agriculture, even though the consequences can be severe. In sub-Saharan Africa, for example, the World Bank estimates that counterfeit seeds are a significant factor in crop yields that average less than one-fifth of the potential for maize, and less than one-third for rice.

    Marelli explains that a key to the new system is creating a randomly produced physical object whose exact composition is virtually impossible to duplicate. The labels they create “leverage randomness and uncertainty in the process of application, to generate unique signature features that can be read, and that cannot be replicated,” he says.

    What they’re dealing with, Sun adds, “is the very old job of trying, basically, not to get your stuff stolen. And you can try as much as you can, but eventually somebody is always smart enough to figure out how to do it, so nothing is really unbreakable. But the idea is, it’s almost impossible, if not impossible, to replicate it, or it takes so much effort that it’s not worth it anymore.”

    The idea of an “unclonable” code was originally developed as a way of protecting the authenticity of computer chips, explains Chandrakasan, who is the Vannevar Bush Professor of Electrical Engineering and Computer Science. “In integrated circuits, individual transistors have slightly different properties, coined device variations,” he explains, “and you could then use that variability and combine it with higher-level circuits to create a unique ID for the device. And once you have that, then you can use that unique ID as a part of a security protocol. Something like transistor variability is hard to replicate from device to device, so that’s what gives it its uniqueness, versus storing a particular fixed ID.” The concept is based on what are known as physically unclonable functions, or PUFs.

    The team decided to try to apply that PUF principle to the problem of fake seeds, and the use of silk proteins was a natural choice because the material is not only harmless to the environment but also classified by the Food and Drug Administration in the “generally recognized as safe” category, so it requires no special approval for use on food products.

    “You could coat it on top of seeds,” Maji says, “and if you synthesize silk in a certain way, it will also have natural random variations. So that’s the idea, that every seed or every bag could have a unique signature.”

    Developing effective secure-system solutions has long been one of Chandrakasan’s specialties, while Marelli has spent many years developing systems for applying silk coatings to a variety of fruits, vegetables, and seeds, so their collaboration was a natural fit for developing such a silk-based security coding system.

    “The challenge was what type of form factor to give to silk,” Sun says, “so that it can be fabricated very easily.” They developed a simple drop-casting approach that produces tags that are less than one-tenth of an inch in diameter. The second challenge was to develop “a way where we can read the uniqueness, in also a very high throughput and easy way.”

    For the unique silk-based codes, Marelli says, “eventually we found a way to add a color to these microparticles so that they assemble in random structures.” The resulting unique patterns can be read out not only by a spectrograph or a portable microscope, but even by an ordinary cellphone camera with a macro lens. This image can be processed locally to generate the PUF code and then sent to the cloud and compared with a secure database to ensure the authenticity of the product. “It’s random so that people cannot easily replicate it,” says Sun. “People cannot predict it without measuring it.”

    And the number of possible permutations that could result from the way they mix four basic types of colored silk nanoparticles is astronomical. “We were able to show that with a minimal amount of silk, we were able to generate 128 random bits of security,” Maji says. “So this gives rise to 2 to the power 128 possible combinations, which is extremely difficult to crack given the computational capabilities of the state-of-the-art computing systems.”
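For intuition about how such a randomly generated 128-bit code might be used, consider the following hypothetical sketch. The feature model, function names, and matching threshold are illustrative assumptions, not the authors’ method: a tag’s measured features are quantized into a 128-bit code, and noisy re-reads are matched against an enrolled database by Hamming distance:

```python
# A hypothetical sketch of PUF-style tag matching; the feature model,
# bit length, and distance threshold are illustrative assumptions,
# not the scheme described in the paper.
import random

def code_from_features(features):
    """Quantize 128 measured feature values (e.g., per-region color
    intensities in [0, 1]) into a 128-bit integer code."""
    assert len(features) == 128
    code = 0
    for value in features:
        code = (code << 1) | (1 if value >= 0.5 else 0)
    return code

def hamming(a, b):
    """Number of differing bits between two codes."""
    return bin(a ^ b).count("1")

def authenticate(measured_code, database, max_distance=10):
    """Return the enrolled tag ID whose code lies within max_distance
    bits of the measured code, or None if nothing matches."""
    for tag_id, enrolled_code in database.items():
        if hamming(measured_code, enrolled_code) <= max_distance:
            return tag_id
    return None

# Enroll one tag, then re-read it with small measurement noise; the
# noisy read should still match, while an unrelated random pattern
# almost certainly will not (expected distance ~64 of 128 bits).
random.seed(0)
features = [random.random() for _ in range(128)]
database = {"seed-bag-001": code_from_features(features)}
noisy_read = [f + random.gauss(0, 0.02) for f in features]
noisy_match = authenticate(code_from_features(noisy_read), database)
```

The distance threshold is what makes the scheme tolerant of imperfect cellphone readouts while keeping false matches astronomically unlikely in a 2^128 code space.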

    Marelli says that “for us, it’s a good test bed in order to think out-of-the-box, and how we can have a path that somehow is more democratic.” In this case, that means “something that you can literally read with your phone, and you can fabricate by simply drop casting a solution, without using any advanced manufacturing technique, without going in a clean room.”

    Some additional work will be needed to make this a practical commercial product, Chandrakasan says. “There will have to be a development for at-scale reading” via smartphones. “So, that’s clearly a future opportunity.” But the principle now shows a clear path to the day when “a farmer could at least, maybe not every seed, but could maybe take some random seeds in a particular batch and verify them,” he says.

    The research was partially supported by the U.S. Office of Naval Research and the National Science Foundation, Analog Devices Inc., an EECS MathWorks fellowship, and a Paul M. Cook Career Development Professorship.

  • MIT-led teams win National Science Foundation grants to research sustainable materials

    Three MIT-led teams are among 16 nationwide to receive funding awards to address sustainable materials for global challenges through the National Science Foundation’s Convergence Accelerator program. Launched in 2019, the program targets solutions to especially compelling societal or scientific challenges at an accelerated pace, by incorporating a multidisciplinary research approach.

    “Solutions for today’s national-scale societal challenges are hard to solve within a single discipline. Instead, these challenges require convergence to merge ideas, approaches, and technologies from a wide range of diverse sectors, disciplines, and experts,” the NSF explains in its description of the Convergence Accelerator program. Phase 1 of the award involves planning to expand initial concepts, identify new team members, participate in an NSF development curriculum, and create an early prototype.

    Sustainable microchips

    One of the funded projects, “Building a Sustainable, Innovative Ecosystem for Microchip Manufacturing,” will be led by Anuradha Murthy Agarwal, a principal research scientist at the MIT Materials Research Laboratory. The aim of this project is to help transition the manufacturing of microchips to more sustainable processes that, for example, can reduce e-waste landfills by allowing repair of chips, or enable users to swap out a rogue chip in a motherboard rather than tossing out the entire laptop or cellphone.

    “Our goal is to help transition microchip manufacturing towards a sustainable industry,” says Agarwal. “We aim to do that by partnering with industry in a multimodal approach that prototypes technology designs to minimize energy consumption and waste generation, retrains the semiconductor workforce, and creates a roadmap for a new industrial ecology to mitigate materials-critical limitations and supply-chain constraints.”

    Agarwal’s co-principal investigators are Samuel Serna, an MIT visiting professor and assistant professor of physics at Bridgewater State University, and two MIT faculty affiliated with the Materials Research Laboratory: Juejun Hu, the John Elliott Professor of Materials Science and Engineering; and Lionel Kimerling, the Thomas Lord Professor of Materials Science and Engineering.

    The training component of the project will also create curricula for multiple audiences. “At Bridgewater State University, we will create a new undergraduate course on microchip manufacturing sustainability, and eventually adapt it for audiences from K-12, as well as incumbent employees,” says Serna.

    Sajan Saini and Erik Verlage of the MIT Department of Materials Science and Engineering (DMSE), and Randolph Kirchain from the MIT Materials Systems Laboratory, who have led MIT initiatives in virtual reality digital education, materials criticality, and roadmapping, are key contributors. The project also includes DMSE graduate students Drew Weninger and Luigi Ranno, and undergraduate Samuel Bechtold from Bridgewater State University’s Department of Physics.

    Sustainable topological materials

    Under the direction of Mingda Li, the Class of 1947 Career Development Professor and an associate professor of nuclear science and engineering, the “Sustainable Topological Energy Materials (STEM) for Energy-efficient Applications” project will accelerate research in sustainable topological quantum materials.

    Topological materials are ones that retain a particular property through all external disturbances. Such materials could potentially be a boon for quantum computing, which has so far been plagued by instability, and would usher in a post-silicon era for microelectronics. Even better, says Li, topological materials can do their job without dissipating energy even at room temperatures.

    Topological materials can find a variety of applications in quantum computing, energy harvesting, and microelectronics. Despite their promise, and a few thousand potential candidates, discovery and mass production of these materials have been challenging. Topology itself is not a measurable characteristic, so researchers have to first develop ways to find hints of it. Synthesis of materials and related process optimization can take months, if not years, Li adds. Machine learning can accelerate the discovery and vetting stage.

    Given that a best-in-class topological quantum material has the potential to disrupt the semiconductor and computing industries, Li and team are paying special attention to the environmental sustainability of prospective materials. Some candidates contain gold, lead, or cadmium, for example, and have been disqualified because the scarcity or toxicity of those elements does not lend itself to mass production.

    Co-principal investigators on the project include Liang Fu, associate professor of physics at MIT; Tomas Palacios, professor of electrical engineering and computer science at MIT and director of the Microsystems Technology Laboratories; Susanne Stemmer of the University of California at Santa Barbara; and Qiong Ma of Boston College. The $750,000 one-year Phase 1 grant will focus on three priorities: building a topological materials database; identifying the most environmentally sustainable candidates for energy-efficient topological applications; and building the foundation for a Center for Sustainable Topological Energy Materials at MIT that will encourage industry-academia collaborations.

    At a time when the size of silicon-based electronic circuit boards is reaching its lower limit, the promise of topological materials whose conductivity increases with decreasing size is especially attractive, Li says. In addition, topological materials can harvest wasted heat: Imagine using your body heat to power your phone. “There are different types of application scenarios, and we can go much beyond the capabilities of existing materials,” Li says. “The possibilities of topological materials are endlessly exciting.”

    Socioresilient materials design

    Researchers in the MIT Department of Materials Science and Engineering (DMSE) have been awarded $750,000 in a cross-disciplinary project that aims to fundamentally redirect materials research and development toward more environmentally, socially, and economically sustainable and resilient materials. This “socioresilient materials design” will serve as the foundation for a new research and development framework that takes into account technical, environmental, and social factors from the beginning of the materials design and development process.

    Christine Ortiz, the Morris Cohen Professor of Materials Science and Engineering, and Ellan Spero PhD ’14, an instructor in DMSE, are leading this research effort, which includes Cornell University, the University of Swansea, Citrine Informatics, Station1, and 14 other organizations in academia, industry, venture capital, the social sector, government, and philanthropy.

    The team’s project, “Mind Over Matter: Socioresilient Materials Design,” emphasizes that circular design approaches, which aim to minimize waste and maximize the reuse, repair, and recycling of materials, are often insufficient to address negative repercussions for the planet and for human health and safety.

    Too often society understands the unintended negative consequences long after the materials that make up our homes and cities and systems have been in production and use for many years. Examples include disparate and negative public health impacts due to industrial scale manufacturing of materials, water and air contamination with harmful materials, and increased risk of fire in lower-income housing buildings due to flawed materials usage and design. Adverse climate events including drought, flood, extreme temperatures, and hurricanes have accelerated materials degradation, for example in critical infrastructure, leading to amplified environmental damage and social injustice. While classical materials design and selection approaches are insufficient to address these challenges, the new research project aims to do just that.

    “The imagination and technical expertise that goes into materials design is too often separated from the environmental and social realities of extraction, manufacturing, and end-of-life for materials,” says Ortiz. 

    Drawing on materials science and engineering, chemistry, and computer science, the project will develop a framework for materials design and development. It will incorporate powerful computational capabilities — artificial intelligence and machine learning with physics-based materials models — plus rigorous methodologies from the social sciences and the humanities to understand what impacts any new material put into production could have on society.

  • Detailed images from space offer clearer picture of drought effects on plants

    “MIT is a place where dreams come true,” says César Terrer, an assistant professor in the Department of Civil and Environmental Engineering. Here at MIT, Terrer says he’s given the resources needed to explore the ideas he finds most exciting, and at the top of his list is climate science. In particular, he is interested in plant-soil interactions, and how the two can mitigate impacts of climate change. In 2022, Terrer received seed grant funding from the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) to produce drought monitoring systems for farmers. The project leverages a new generation of remote sensing devices to provide high-resolution estimates of plant water stress at regional to global scales.

    Growing up in Granada, Spain, Terrer always had an aptitude and passion for science. He studied environmental science at the University of Murcia, where he interned in the Department of Ecology. Using computational analysis tools, he worked on modeling species distribution in response to human development. Early on in his undergraduate experience, Terrer says he regarded his professors as “superheroes” with a kind of scholarly prowess. He knew he wanted to follow in their footsteps by one day working as a faculty member in academia. Of course, there would be many steps along the way before achieving that dream. 

    Upon completing his undergraduate studies, Terrer set his sights on exciting and adventurous research roles. He thought perhaps he would conduct field work in the Amazon, engaging with native communities. But when the opportunity arose to work in Australia on a state-of-the-art climate change experiment that simulates future levels of carbon dioxide, he headed south to study how plants react to CO2 in a biome of native Australian eucalyptus trees. It was during this experience that Terrer started to take a keen interest in the carbon cycle and the capacity of ecosystems to buffer rising levels of CO2 caused by human activity.

    Around 2014, he began to delve deeper into the carbon cycle as he began his doctoral studies at Imperial College London. The primary question Terrer sought to answer during his PhD was “will plants be able to absorb predicted future levels of CO2 in the atmosphere?” To answer the question, Terrer became an early adopter of artificial intelligence, machine learning, and remote sensing to analyze data from real-life, global climate change experiments. His findings from these “ground truth” values and observations resulted in a paper in the journal Science. In it, he claimed that climate models most likely overestimated how much carbon plants will be able to absorb by the end of the century, by a factor of three. 

    After postdoctoral positions at Stanford University and the Universitat Autonoma de Barcelona, followed by a prestigious Lawrence Fellowship, Terrer says he had “too many ideas and not enough time to accomplish all those ideas.” He knew it was time to lead his own group. Not long after applying for faculty positions, he landed at MIT. 

    New ways to monitor drought

    Terrer is employing similar methods to those he used during his PhD to analyze data from all over the world for his J-WAFS project. He and postdoc Wenzhe Jiao collect data from remote sensing satellites and field experiments and use machine learning to come up with new ways to monitor drought. Terrer says Jiao is a “remote sensing wizard,” who fuses data from different satellite products to understand the water cycle. With Jiao’s hydrology expertise and Terrer’s knowledge of plants, soil, and the carbon cycle, the duo is a formidable team to tackle this project.

    According to the U.N. World Meteorological Organization, the number and duration of droughts has increased by 29 percent since 2000, as compared to the two previous decades. From the Horn of Africa to the Western United States, drought is devastating vegetation and severely stressing water supplies, compromising food production and spiking food insecurity. Drought monitoring can offer fundamental information on drought location, frequency, and severity, but assessing the impact of drought on vegetation is extremely challenging. This is because plants’ sensitivity to water deficits varies across species and ecosystems. 

    Terrer and Jiao are able to obtain a clearer picture of how drought is affecting plants by employing the latest generation of remote sensing observations, which offer images of the planet with incredible spatial and temporal resolution. Satellite products such as Sentinel, Landsat, and Planet can provide daily images from space with such high resolution that individual trees can be discerned. Along with the images and datasets from satellites, the team is using ground-based observations from meteorological data. They are also using the MIT SuperCloud at MIT Lincoln Laboratory to process and analyze all of the data sets. The J-WAFS project is among the first to leverage high-resolution data to quantitatively measure plant drought impacts in the United States, with the hope of expanding to a global assessment in the future.

    Assisting farmers and resource managers 

    Every week, the U.S. Drought Monitor provides a map of drought conditions in the United States. The map offers little spatial detail, however, and is more of a drought recap or summary than a forecast, unable to predict future drought scenarios. The lack of a comprehensive spatiotemporal evaluation of historic and future drought impacts on global vegetation productivity is detrimental to farmers both in the United States and worldwide.

    Terrer and Jiao plan to generate metrics for plant water stress at an unprecedented resolution of 10-30 meters. This means that they will be able to provide drought monitoring maps at the scale of a typical U.S. farm, giving farmers more precise, useful data every one to two days. The team will use the information from the satellites to monitor plant growth and soil moisture, as well as the time lag of plant growth response to soil moisture. In this way, Terrer and Jiao say they will eventually be able to create a kind of “plant water stress forecast” that may be able to predict adverse impacts of drought four weeks in advance. “According to the current soil moisture and lagged response time, we hope to predict plant water stress in the future,” says Jiao. 
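The lagged relationship the team describes can be illustrated with a toy calculation. This is an assumption-laden sketch, not the project’s pipeline; the 10-day delay, noise levels, and function names below are invented. Scanning the correlation between a soil moisture series and a vegetation index shifted by increasing lags recovers the response delay:

```python
# A toy illustration (not the project's actual method) of estimating the
# delay between soil moisture and vegetation response by scanning lagged
# correlations; the 10-day delay and noise levels are invented.
import numpy as np

def best_response_lag(soil_moisture, veg_index, max_lag=28):
    """Return (lag_days, correlation) for the lag at which the
    vegetation index correlates most strongly with earlier soil moisture."""
    n = len(soil_moisture)
    best_lag, best_corr = 0, -np.inf
    for lag in range(max_lag + 1):
        # compare soil moisture today with vegetation `lag` days later
        c = np.corrcoef(soil_moisture[: n - lag or None], veg_index[lag:])[0, 1]
        if c > best_corr:
            best_lag, best_corr = lag, c
    return best_lag, best_corr

# Synthetic daily series: vegetation follows soil moisture ~10 days later.
rng = np.random.default_rng(0)
t = np.arange(365)
soil = np.sin(2 * np.pi * t / 90) + 0.1 * rng.standard_normal(365)
veg = np.roll(soil, 10) + 0.1 * rng.standard_normal(365)
lag, corr = best_response_lag(soil, veg)
```

On real data the same idea would be applied per pixel, and the recovered lag, combined with current soil moisture, is what turns a monitoring map into a short-range forecast of plant water stress.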

    The expected outcomes of this project will give farmers, land and water resource managers, and decision-makers more accurate data at the farm-specific level, allowing for better drought preparation, mitigation, and adaptation. “We expect to make our data open-access online, after we finish the project, so that farmers and other stakeholders can use the maps as tools,” says Jiao. 

    Terrer adds that the project “has the potential to help us better understand the future states of climate systems, and also identify the regional hot spots more likely to experience water crises at the national, state, local, and tribal government scales.” He also expects the project will enhance our understanding of global carbon-water-energy cycle responses to drought, with applications in determining climate change impacts on natural ecosystems as a whole.

  • Exploring the nanoworld of biogenic gems

    A new research collaboration with The Bahrain Institute for Pearls and Gemstones (DANAT) will seek to develop advanced tools for characterizing the properties of pearls and to explore technologies for assigning unique identifiers to individual pearls.

    The three-year project will be led by Admir Mašić, associate professor of civil and environmental engineering, in collaboration with Vladimir Bulović, the Fariborz Maseeh Chair in Emerging Technology and professor of electrical engineering and computer science.

    “Pearls are extremely complex and fascinating hierarchically ordered biological materials that are formed by a wide range of different species,” says Mašić. “Working with DANAT provides us a unique opportunity to apply our lab’s multi-scale materials characterization tools to identify potentially species-specific pearl fingerprints, while simultaneously addressing scientific research questions regarding the underlying biomineralization processes that could inform advances in sustainable building materials.”

    DANAT is a gemological laboratory specializing in the testing and study of natural pearls as a reflection of Bahrain’s pearling history and desire to protect and advance Bahrain’s pearling heritage. DANAT’s gemologists support clients and students through pearl, gemstone, and diamond identification services, as well as educational courses.

    Like many other precious gemstones, pearls can be human-made through scientific experimentation, says Noora Jamsheer, chief executive officer at DANAT. Over a century ago, cultured pearls entered markets as a competitive product to natural pearls, similar in appearance but different in value.

    “Gemological labs have been innovating scientific testing methods to differentiate between natural pearls and all other pearls that exist because of direct or indirect human intervention. Today the world knows natural pearls and cultured pearls. However, there are also pearls that fall in between these two categories,” says Jamsheer. “DANAT has the responsibility, as the leading gemological laboratory for pearl testing, to take the initiative necessary to ensure that testing methods keep pace with advances in the science of pearl cultivation.”

    Titled “Exploring the Nanoworld of Biogenic Gems,” the project will aim to improve the process of testing and identifying pearls by identifying morphological, micro-structural, optical, and chemical features sufficient to distinguish a pearl’s area of origin, method of growth, or both. MIT.nano, MIT’s open-access center for nanoscience and nanoengineering, will be the organizational home for the project, where Mašić and his team will utilize the facility’s state-of-the-art characterization tools.

    In addition to discovering new methodologies for establishing a pearl’s origin, the project aims to utilize machine learning to automate pearl classification. Furthermore, researchers will investigate techniques to create a unique identifier associated with an individual pearl.

    The initial sponsored research project is expected to last three years, with potential for continued collaboration based on key findings or building upon the project’s success to open new avenues for research into the structure, properties, and growth of pearls.