More stories

  • Flow batteries for grid-scale energy storage

    In the coming decades, renewable energy sources such as solar and wind will increasingly dominate the conventional power grid. Because those sources only generate electricity when it’s sunny or windy, ensuring a reliable grid — one that can deliver power 24/7 — requires some means of storing electricity when supplies are abundant and delivering it later when they’re not. And because there can be hours and even days with no wind, for example, some energy storage devices must be able to store a large amount of electricity for a long time.

    A promising technology for performing that task is the flow battery, an electrochemical device that can store hundreds of megawatt-hours of energy — enough to keep thousands of homes running for many hours on a single charge. Flow batteries have the potential for long lifetimes and low costs in part due to their unusual design. In the everyday batteries used in phones and electric vehicles, the materials that store the electric charge are solid coatings on the electrodes. “A flow battery takes those solid-state charge-storage materials, dissolves them in electrolyte solutions, and then pumps the solutions through the electrodes,” says Fikile Brushett, an associate professor of chemical engineering at MIT. That design offers many benefits and poses a few challenges.

    Flow batteries: Design and operation

    A flow battery contains two substances that undergo electrochemical reactions in which electrons are transferred from one to the other. When the battery is being charged, the transfer of electrons forces the two substances into a state that’s “less energetically favorable” as it stores extra energy. (Think of a ball being pushed up to the top of a hill.) When the battery is being discharged, the transfer of electrons shifts the substances into a more energetically favorable state as the stored energy is released. (The ball is set free and allowed to roll down the hill.)

    At the core of a flow battery are two large tanks that hold liquid electrolytes, one positive and the other negative. Each electrolyte contains dissolved “active species” — atoms or molecules that will electrochemically react to release or store electrons. During charging, one species is “oxidized” (releases electrons), and the other is “reduced” (gains electrons); during discharging, they swap roles. Pumps are used to circulate the two electrolytes through separate electrodes, each made of a porous material that provides abundant surfaces on which the active species can react. A thin membrane between the adjacent electrodes keeps the two electrolytes from coming into direct contact and possibly reacting, which would release heat and waste energy that could otherwise be used on the grid.

    When the battery is being discharged, active species on the negative side oxidize, releasing electrons that flow through an external circuit to the positive side, causing the species there to be reduced. The flow of those electrons through the external circuit can power the grid. In addition to the movement of the electrons, “supporting” ions — other charged species in the electrolyte — pass through the membrane to help complete the reaction and keep the system electrically neutral.

    Once all the species have reacted and the battery is fully discharged, the system can be recharged. In that process, electricity from wind turbines, solar farms, and other generating sources drives the reverse reactions. The active species on the positive side oxidize to release electrons back through the wires to the negative side, where they rejoin their original active species. The battery is now reset and ready to send out more electricity when it’s needed. Brushett adds, “The battery can be cycled in this way over and over again for years on end.”

    Benefits and challenges

    A major advantage of this system design is that where the energy is stored (the tanks) is separated from where the electrochemical reactions occur (the so-called reactor, which includes the porous electrodes and membrane). As a result, the capacity of the battery — how much energy it can store — and its power — the rate at which it can be charged and discharged — can be adjusted separately. “If I want to have more capacity, I can just make the tanks bigger,” explains Kara Rodby PhD ’22, a former member of Brushett’s lab and now a technical analyst at Volta Energy Technologies. “And if I want to increase its power, I can increase the size of the reactor.” That flexibility makes it possible to design a flow battery to suit a particular application and to modify it if needs change in the future.
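    The decoupling Rodby describes can be sketched in a few lines of code. The energy density and power density figures below are made-up placeholders chosen only to illustrate the scaling, not measured values for any real system:

```python
# Illustrative sketch: in a flow battery, energy capacity scales with tank
# volume while power scales with reactor (stack) size, so the two can be
# sized independently. All parameter values are assumptions for illustration.

def battery_specs(tank_volume_m3, stack_area_m2,
                  energy_density_kwh_per_m3=25.0,  # assumed electrolyte energy density
                  power_density_kw_per_m2=1.0):    # assumed area-specific stack power
    """Return (capacity in kWh, power in kW) for a given sizing."""
    capacity_kwh = tank_volume_m3 * energy_density_kwh_per_m3
    power_kw = stack_area_m2 * power_density_kw_per_m2
    return capacity_kwh, power_kw

# Doubling the tanks doubles capacity but leaves power unchanged:
cap1, pow1 = battery_specs(tank_volume_m3=100, stack_area_m2=500)
cap2, pow2 = battery_specs(tank_volume_m3=200, stack_area_m2=500)
print(cap1, pow1)  # 2500.0 500.0
print(cap2, pow2)  # 5000.0 500.0
```

    In a conventional battery, by contrast, capacity and power are tied together in the same electrode stack, so they cannot be tuned independently in this way.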

    However, the electrolyte in a flow battery can degrade with time and use. While all batteries experience electrolyte degradation, flow batteries in particular suffer from a relatively faster form of degradation called “crossover.” The membrane is designed to allow small supporting ions to pass through and block the larger active species, but in reality, it isn’t perfectly selective. Some of the active species in one tank can sneak through (or “cross over”) and mix with the electrolyte in the other tank. The two active species may then chemically react, effectively discharging the battery. Even if they don’t, some of the active species is no longer in the first tank where it belongs, so the overall capacity of the battery is lower.

    Recovering capacity lost to crossover requires some sort of remediation — for example, replacing the electrolyte in one or both tanks or finding a way to reestablish the “oxidation states” of the active species in the two tanks. (Oxidation state is a number assigned to an atom or compound to tell if it has more or fewer electrons than it has when it’s in its neutral state.) Such remediation is more easily — and therefore more cost-effectively — executed in a flow battery because all the components are more easily accessed than they are in a conventional battery.
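    The interplay of gradual crossover loss and periodic remediation can be captured in a toy model. The per-cycle fade rate and the remediation interval below are invented numbers, purely for illustration:

```python
# Toy model (illustrative numbers only): capacity fades by a small fraction
# each cycle due to crossover, and a periodic remediation step -- such as
# rebalancing the electrolytes -- restores it.

def simulate_capacity(n_cycles, fade_per_cycle=0.001, remediate_every=500):
    capacity = 1.0  # normalized to initial capacity
    history = []
    for cycle in range(1, n_cycles + 1):
        capacity *= (1.0 - fade_per_cycle)  # crossover loss each cycle
        if cycle % remediate_every == 0:
            capacity = 1.0                  # remediation restores full capacity
        history.append(capacity)
    return history

hist = simulate_capacity(1000)
# Capacity sags between remediation events, then snaps back to 1.0 at
# cycles 500 and 1000.
```

    A cost model can then weigh the price of each remediation event against the revenue lost to the sagging capacity in between.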

    The state of the art: Vanadium

    A critical factor in designing flow batteries is the selected chemistry. The two electrolytes can contain different chemicals, but today the most widely used setup has vanadium in different oxidation states on the two sides. That arrangement addresses the two major challenges with flow batteries.

    First, vanadium doesn’t degrade. “If you put 100 grams of vanadium into your battery and you come back in 100 years, you should be able to recover 100 grams of that vanadium — as long as the battery doesn’t have some sort of a physical leak,” says Brushett.

    And second, if some of the vanadium in one tank flows through the membrane to the other side, there is no permanent cross-contamination of the electrolytes, only a shift in the oxidation states, which is easily remediated by re-balancing the electrolyte volumes and restoring the oxidation state via a minor charge step. Most of today’s commercial systems include a pipe connecting the two vanadium tanks that automatically transfers a certain amount of electrolyte from one tank to the other when the two get out of balance.

    However, as the grid becomes increasingly dominated by renewables, more and more flow batteries will be needed to provide long-duration storage. Demand for vanadium will grow, and that will be a problem. “Vanadium is found around the world but in dilute amounts, and extracting it is difficult,” says Rodby. “So there are limited places — mostly in Russia, China, and South Africa — where it’s produced, and the supply chain isn’t reliable.” As a result, vanadium prices are both high and extremely volatile — an impediment to the broad deployment of the vanadium flow battery.

    Beyond vanadium

    The question then becomes: If not vanadium, then what? Researchers worldwide are trying to answer that question, and many are focusing on promising chemistries using materials that are more abundant and less expensive than vanadium. But it’s not that easy, notes Rodby. While other chemistries may offer lower initial capital costs, they may be more expensive to operate over time. They may require periodic servicing to rejuvenate one or both of their electrolytes. “You may even need to replace them, so you’re essentially incurring that initial (low) capital cost again and again,” says Rodby.

    Indeed, comparing the economics of different options is difficult because “there are so many dependent variables,” says Brushett. “A flow battery is an electrochemical system, which means that there are multiple components working together in order for the device to function. Because of that, if you are trying to improve a system — performance, cost, whatever — it’s very difficult because when you touch one thing, five other things change.”

    So how can we compare these new and emerging chemistries — in a meaningful way — with today’s vanadium systems? And how do we compare them with one another, so we know which ones are more promising and what the potential pitfalls are with each one? “Addressing those questions can help us decide where to focus our research and where to invest our research and development dollars now,” says Brushett.

    Techno-economic modeling as a guide

    A good way to understand and assess the economic viability of new and emerging energy technologies is techno-economic modeling. With certain models, one can account for the capital cost of a defined system and — based on the system’s projected performance — the operating costs over time, generating a total cost discounted over the system’s lifetime. That result allows a potential purchaser to compare options on a “levelized cost of storage” basis.

    Using that approach, Rodby developed a framework for estimating the levelized cost for flow batteries. The framework includes a dynamic physical model of the battery that tracks its performance over time, including any changes in storage capacity. The calculated operating costs therefore cover all services required over decades of operation, including the remediation steps taken in response to species degradation and crossover.
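    In its simplest generic form (this is a sketch of the standard levelized-cost idea, not Rodby’s actual framework, and the input numbers are invented), the calculation discounts lifetime costs and lifetime delivered energy back to the present:

```python
# Simplified levelized cost of storage (LCOS):
#   LCOS = (capex + sum_t opex_t / (1+r)^t) / (sum_t energy_t / (1+r)^t)
# A fuller model, like the one described in the article, would make opex_t
# and energy_t vary with degradation and remediation; here both are constant.

def lcos(capex, annual_opex, annual_energy_kwh, lifetime_years, discount_rate):
    discounted_costs = capex
    discounted_energy = 0.0
    for t in range(1, lifetime_years + 1):
        df = (1 + discount_rate) ** t
        discounted_costs += annual_opex / df
        discounted_energy += annual_energy_kwh / df
    return discounted_costs / discounted_energy  # dollars per kWh delivered

# Invented example numbers, for illustration only:
price = lcos(capex=2_000_000, annual_opex=50_000,
             annual_energy_kwh=3_000_000, lifetime_years=20,
             discount_rate=0.08)
```

    The key point of such a metric is that a chemistry with a low upfront cost but heavy recurring remediation can come out worse, on a levelized basis, than a pricier chemistry that runs cleanly for decades.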

    Analyzing all possible chemistries would be impossible, so the researchers focused on certain classes. First, they narrowed the options down to those in which the active species are dissolved in water. “Aqueous systems are furthest along and are most likely to be successful commercially,” says Rodby. Next, they limited their analyses to “asymmetric” chemistries; that is, setups that use different materials in the two tanks. (As Brushett explains, vanadium is unusual in that using the same “parent” material in both tanks is rarely feasible.) Finally, they divided the possibilities into two classes: species that have a finite lifetime and species that have an infinite lifetime; that is, ones that degrade over time and ones that don’t.

    Results from their analyses aren’t clear-cut; there isn’t a particular chemistry that leads the pack. But they do provide general guidelines for choosing and pursuing the different options.

    Finite-lifetime materials

    While vanadium is a single element, the finite-lifetime materials are typically organic molecules made up of multiple elements, among them carbon. One advantage of organic molecules is that they can be synthesized in a lab and at an industrial scale, and the structure can be altered to suit a specific function. For example, the molecule can be made more soluble, so more will be present in the electrolyte and the energy density of the system will be greater; or it can be made bigger so it won’t fit through the membrane and cross to the other side. Finally, organic molecules can be made from simple, abundant, low-cost elements, potentially even waste streams from other industries.

    Despite those attractive features, there are two concerns. First, organic molecules would probably need to be made in a chemical plant, and upgrading the low-cost precursors as needed may prove to be more expensive than desired. Second, these molecules are large chemical structures that aren’t always very stable, so they’re prone to degradation. “So along with crossover, you now have a new degradation mechanism that occurs over time,” says Rodby. “Moreover, you may figure out the degradation process and how to reverse it in one type of organic molecule, but the process may be totally different in the next molecule you work on, making the discovery and development of each new chemistry require significant effort.”

    Research is ongoing, but at present, Rodby and Brushett find it challenging to make the case for the finite-lifetime chemistries, mostly based on their capital costs. Citing studies that have estimated the manufacturing costs of these materials, Rodby believes that current options cannot be made at low enough costs to be economically viable. “They’re cheaper than vanadium, but not cheap enough,” says Rodby.

    The results send an important message to researchers designing new chemistries using organic molecules: Be sure to consider operating challenges early on. Rodby and Brushett note that it’s often not until far down the “innovation pipeline” that researchers start to address practical questions about the long-term operation of a promising-looking system. The MIT team recommends that understanding the potential decay mechanisms, and how they might be cost-effectively reversed or remediated, should be an upfront design criterion.

    Infinite-lifetime species

    The infinite-lifetime species include materials that — like vanadium — are not going to decay. The most likely candidates are other metals; for example, iron or manganese. “These are commodity-scale chemicals that will certainly be low cost,” says Rodby.

    Here, the researchers found that there’s a wider “design space” of feasible options that could compete with vanadium. But there are still challenges to be addressed. While these species don’t degrade, they may trigger side reactions when used in a battery. For example, many metals catalyze the formation of hydrogen, which reduces efficiency and adds another form of capacity loss. While there are ways to deal with the hydrogen-evolution problem, a sufficiently low-cost and effective solution for high rates of this side reaction is still needed.
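    The cost of such a side reaction compounds over cycling. A toy calculation (the 1 percent parasitic fraction is an assumed number, chosen only to show the compounding effect):

```python
# Toy illustration (made-up numbers): a parasitic side reaction such as
# hydrogen evolution consumes a fraction of the charging current, lowering
# coulombic efficiency and driving a slow capacity imbalance.

def coulombic_efficiency(charge_in_ah, side_reaction_fraction):
    """Fraction of supplied charge that is usefully stored."""
    useful = charge_in_ah * (1.0 - side_reaction_fraction)
    return useful / charge_in_ah

# Even a 1% per-cycle parasitic loss compounds quickly if uncorrected:
eff = coulombic_efficiency(100.0, 0.01)  # 0.99 per cycle
retained_after_100_cycles = eff ** 100   # roughly 0.37 of original
```

    This is why even a seemingly small rate of hydrogen evolution demands an inexpensive correction mechanism before these chemistries can run for years unattended.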

    In addition, crossover is still a problem requiring remediation steps. The researchers evaluated two methods of dealing with crossover in systems combining two types of infinite-lifetime species.

    The first is the “spectator strategy.” Here, both of the tanks contain both active species. Explains Brushett, “You have the same electrolyte mixture on both sides of the battery, but only one of the species is ever working and the other is a spectator.” As a result, crossover can be remediated in similar ways to those used in the vanadium flow battery. The drawback is that half of the active material in each tank is unavailable for storing charge, so it’s wasted. “You’ve essentially doubled your electrolyte cost on a per-unit energy basis,” says Rodby.

    The second method calls for making a membrane that is perfectly selective: It must let through only the supporting ion needed to maintain the electrical balance between the two sides. However, that approach increases cell resistance, hurting system efficiency. In addition, the membrane would need to be made of a special material — say, a ceramic composite — that would be extremely expensive based on current production methods and scales. Rodby notes that work on such membranes is under way, but the cost and performance metrics are “far off from where they’d need to be to make sense.”

    Time is of the essence

    The researchers stress the urgency of the climate change threat and the need to have grid-scale, long-duration storage systems at the ready. “There are many chemistries now being looked at,” says Rodby, “but we need to home in on some solutions that will actually be able to compete with vanadium and can be deployed soon and operated over the long term.”

    The techno-economic framework is intended to help guide that process. It can calculate the levelized cost of storage for specific designs for comparison with vanadium systems and with one another. It can identify critical gaps in knowledge related to long-term operation or remediation, thereby identifying technology development or experimental investigations that should be prioritized. And it can help determine whether the trade-off between lower upfront costs and greater operating costs makes sense in these next-generation chemistries.

    The good news, notes Rodby, is that advances achieved in research on one type of flow battery chemistry can often be applied to others. “A lot of the principles learned with vanadium can be translated to other systems,” she says. She believes that the field has advanced not only in understanding but also in the ability to design experiments that address problems common to all flow batteries, thereby helping to prepare the technology for its important future role in grid-scale storage.

    This research was supported by the MIT Energy Initiative. Kara Rodby PhD ’22 was supported by an ExxonMobil-MIT Energy Fellowship in 2021-22.

    This article appears in the Winter 2023 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • Tackling counterfeit seeds with “unclonable” labels

    Average crop yields in Africa are consistently far below those expected, and one significant reason is the prevalence of counterfeit seeds whose germination rates are far lower than those of the genuine ones. The World Bank estimates that as much as half of all seeds sold in some African countries are fake, which could help to account for crop production that is far below potential.

    There have been many attempts to prevent this counterfeiting through tracking labels, but none have proved effective; among other issues, such labels have been vulnerable to hacking because of the deterministic nature of their encoding systems. But now, a team of MIT researchers has come up with a kind of tiny, biodegradable tag that can be applied directly to the seeds themselves, and that provides a unique randomly created code that cannot be duplicated.

    The new system, which uses minuscule dots of silk-based material, each containing a unique combination of different chemical signatures, is described today in the journal Science Advances in a paper by MIT’s dean of engineering Anantha Chandrakasan, professor of civil and environmental engineering Benedetto Marelli, postdoc Hui Sun, and graduate student Saurav Maji.

    The problem of counterfeiting is an enormous one globally, the researchers point out, affecting everything from drugs to luxury goods, and many different systems have been developed to try to combat this. But there has been less attention to the problem in the area of agriculture, even though the consequences can be severe. In sub-Saharan Africa, for example, the World Bank estimates that counterfeit seeds are a significant factor in crop yields that average less than one-fifth of the potential for maize, and less than one-third for rice.

    Marelli explains that a key to the new system is creating a randomly-produced physical object whose exact composition is virtually impossible to duplicate. The labels they create “leverage randomness and uncertainty in the process of application, to generate unique signature features that can be read, and that cannot be replicated,” he says.

    What they’re dealing with, Sun adds, “is the very old job of trying, basically, not to get your stuff stolen. And you can try as much as you can, but eventually somebody is always smart enough to figure out how to do it, so nothing is really unbreakable. But the idea is, it’s almost impossible, if not impossible, to replicate it, or it takes so much effort that it’s not worth it anymore.”

    The idea of an “unclonable” code was originally developed as a way of protecting the authenticity of computer chips, explains Chandrakasan, who is the Vannevar Bush Professor of Electrical Engineering and Computer Science. “In integrated circuits, individual transistors have slightly different properties, known as device variations,” he explains, “and you could then use that variability and combine that variability with higher-level circuits to create a unique ID for the device. And once you have that, then you can use that unique ID as a part of a security protocol. Something like transistor variability is hard to replicate from device to device, so that’s what gives it its uniqueness, versus storing a particular fixed ID.” The concept is based on what are known as physically unclonable functions, or PUFs.

    The team decided to try to apply that PUF principle to the problem of fake seeds, and the use of silk proteins was a natural choice because the material is not only harmless to the environment but also classified by the Food and Drug Administration in the “generally recognized as safe” category, so it requires no special approval for use on food products.

    “You could coat it on top of seeds,” Maji says, “and if you synthesize silk in a certain way, it will also have natural random variations. So that’s the idea, that every seed or every bag could have a unique signature.”

    Developing effective secure system solutions has long been one of Chandrakasan’s specialties, while Marelli has spent many years developing systems for applying silk coatings to a variety of fruits, vegetables, and seeds, so their collaboration was a natural fit for developing a silk-based coding system for enhanced security.

    “The challenge was what type of form factor to give to silk,” Sun says, “so that it can be fabricated very easily.” They developed a simple drop-casting approach that produces tags that are less than one-tenth of an inch in diameter. The second challenge was to develop “a way where we can read the uniqueness, in also a very high throughput and easy way.”

    For the unique silk-based codes, Marelli says, “eventually we found a way to add a color to these microparticles so that they assemble in random structures.” The resulting unique patterns can be read out not only by a spectrograph or a portable microscope, but even by an ordinary cellphone camera with a macro lens. This image can be processed locally to generate the PUF code and then sent to the cloud and compared with a secure database to ensure the authenticity of the product. “It’s random so that people cannot easily replicate it,” says Sun. “People cannot predict it without measuring it.”
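    The enroll-and-verify flow described here can be sketched as follows. The article does not detail the image-processing step, so the feature extraction, thresholding scheme, and matching tolerance below are all hypothetical stand-ins, meant only to show the shape of a PUF-style protocol:

```python
# Hypothetical sketch of a PUF-style enroll/verify flow. The feature values,
# the median-threshold quantization, and the Hamming tolerance are invented
# placeholders, not the method used by the MIT team.

def features_to_code(features):
    """Quantize measured intensities into a 128-bit code by thresholding at
    the median, then padding/truncating to 128 bits (an assumed scheme)."""
    median = sorted(features)[len(features) // 2]
    bits = [1 if f > median else 0 for f in features]
    return (bits + [0] * 128)[:128]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# Enrollment: each tag is read once and its code stored in a secure database.
database = {"seed-bag-001": features_to_code([0.9, 0.1, 0.7, 0.3, 0.8, 0.2])}

# Verification: a noisy re-read matches if it falls within a small
# Hamming distance of the enrolled code.
def verify(tag_id, measured_features, tolerance=10):
    code = features_to_code(measured_features)
    return tag_id in database and hamming(database[tag_id], code) <= tolerance
```

    Allowing a small Hamming-distance tolerance is what lets a cheap, noisy reader such as a phone camera still match the enrolled code, while a counterfeit tag, with an independent random pattern, lands far outside the tolerance.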

    And the number of possible permutations that could result from the way they mix four basic types of colored silk nanoparticles is astronomical. “We were able to show that with a minimal amount of silk, we were able to generate 128 random bits of security,” Maji says. “So this gives rise to 2 to the power 128 possible combinations, which is extremely difficult to crack given the computational capabilities of the state-of-the-art computing systems.”
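    To put “2 to the power 128” in perspective, a back-of-the-envelope calculation, assuming a hypothetical attacker who can test a trillion combinations per second:

```python
# Back-of-the-envelope: exhaustively searching a 128-bit code space at an
# optimistic (assumed) rate of 10^12 guesses per second.
combinations = 2 ** 128
guesses_per_second = 10 ** 12
seconds_per_year = 60 * 60 * 24 * 365

years_to_search = combinations / (guesses_per_second * seconds_per_year)
print(f"{combinations:.3e} combinations")        # ~3.403e+38
print(f"{years_to_search:.3e} years to search")  # ~1.079e+19 years
```

    Even at that implausible guessing rate, the search would take billions of times longer than the age of the universe, which is the practical meaning of “extremely difficult to crack.”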

    Marelli says that “for us, it’s a good test bed in order to think out-of-the-box, and how we can have a path that somehow is more democratic.” In this case, that means “something that you can literally read with your phone, and you can fabricate by simply drop casting a solution, without using any advanced manufacturing technique, without going in a clean room.”

    Some additional work will be needed to make this a practical commercial product, Chandrakasan says. “There will have to be a development for at-scale reading” via smartphones. “So, that’s clearly a future opportunity.” But the principle now shows a clear path to the day when “a farmer could at least, maybe not every seed, but could maybe take some random seeds in a particular batch and verify them,” he says.

    The research was partially supported by the U.S. Office of Naval Research, the National Science Foundation, Analog Devices Inc., an EECS MathWorks fellowship, and a Paul M. Cook Career Development Professorship.

  • MIT-led teams win National Science Foundation grants to research sustainable materials

    Three MIT-led teams are among 16 nationwide to receive funding awards to address sustainable materials for global challenges through the National Science Foundation’s Convergence Accelerator program. Launched in 2019, the program targets solutions to especially compelling societal or scientific challenges at an accelerated pace, by incorporating a multidisciplinary research approach.

    “Solutions for today’s national-scale societal challenges are hard to solve within a single discipline. Instead, these challenges require convergence to merge ideas, approaches, and technologies from a wide range of diverse sectors, disciplines, and experts,” the NSF explains in its description of the Convergence Accelerator program. Phase 1 of the award involves planning to expand initial concepts, identify new team members, participate in an NSF development curriculum, and create an early prototype.

    Sustainable microchips

    One of the funded projects, “Building a Sustainable, Innovative Ecosystem for Microchip Manufacturing,” will be led by Anuradha Murthy Agarwal, a principal research scientist at the MIT Materials Research Laboratory. The aim of this project is to help transition the manufacturing of microchips to more sustainable processes that, for example, can reduce e-waste landfills by allowing repair of chips, or enable users to swap out a rogue chip in a motherboard rather than tossing out the entire laptop or cellphone.

    “Our goal is to help transition microchip manufacturing towards a sustainable industry,” says Agarwal. “We aim to do that by partnering with industry in a multimodal approach that prototypes technology designs to minimize energy consumption and waste generation, retrains the semiconductor workforce, and creates a roadmap for a new industrial ecology to mitigate materials-critical limitations and supply-chain constraints.”

    Agarwal’s co-principal investigators are Samuel Serna, an MIT visiting professor and assistant professor of physics at Bridgewater State University, and two MIT faculty affiliated with the Materials Research Laboratory: Juejun Hu, the John Elliott Professor of Materials Science and Engineering; and Lionel Kimerling, the Thomas Lord Professor of Materials Science and Engineering.

    The training component of the project will also create curricula for multiple audiences. “At Bridgewater State University, we will create a new undergraduate course on microchip manufacturing sustainability, and eventually adapt it for audiences from K-12, as well as incumbent employees,” says Serna.

    Sajan Saini and Erik Verlage of the MIT Department of Materials Science and Engineering (DMSE), and Randolph Kirchain from the MIT Materials Systems Laboratory, who have led MIT initiatives in virtual reality digital education, materials criticality, and roadmapping, are key contributors. The project also includes DMSE graduate students Drew Weninger and Luigi Ranno, and undergraduate Samuel Bechtold from Bridgewater State University’s Department of Physics.

    Sustainable topological materials

    Under the direction of Mingda Li, the Class of 1947 Career Development Professor and an associate professor of nuclear science and engineering, the “Sustainable Topological Energy Materials (STEM) for Energy-efficient Applications” project will accelerate research in sustainable topological quantum materials.

    Topological materials are ones that retain a particular property through all external disturbances. Such materials could potentially be a boon for quantum computing, which has so far been plagued by instability, and would usher in a post-silicon era for microelectronics. Even better, says Li, topological materials can do their job without dissipating energy even at room temperatures.

    Topological materials can find a variety of applications in quantum computing, energy harvesting, and microelectronics. Despite their promise, and a few thousand potential candidates, the discovery and mass production of these materials have been challenging. Topology itself is not a measurable characteristic, so researchers have to first develop ways to find hints of it. Synthesis of materials and related process optimization can take months, if not years, Li adds. Machine learning can accelerate the discovery and vetting stage.

    Given that a best-in-class topological quantum material has the potential to disrupt the semiconductor and computing industries, Li and team are paying special attention to the environmental sustainability of prospective materials. Some potential candidates contain gold, lead, or cadmium, for example; because the scarcity or toxicity of those elements does not lend itself to mass production, such candidates have been disqualified.

    Co-principal investigators on the project include Liang Fu, associate professor of physics at MIT; Tomas Palacios, professor of electrical engineering and computer science at MIT and director of the Microsystems Technology Laboratories; Susanne Stemmer of the University of California at Santa Barbara; and Qiong Ma of Boston College. The $750,000 one-year Phase 1 grant will focus on three priorities: building a topological materials database; identifying the most environmentally sustainable candidates for energy-efficient topological applications; and building the foundation for a Center for Sustainable Topological Energy Materials at MIT that will encourage industry-academia collaborations.

    At a time when the size of silicon-based electronic circuit boards is reaching its lower limit, the promise of topological materials whose conductivity increases with decreasing size is especially attractive, Li says. In addition, topological materials can harvest wasted heat: Imagine using your body heat to power your phone. “There are different types of application scenarios, and we can go much beyond the capabilities of existing materials,” Li says. “The possibilities of topological materials are endlessly exciting.”

    Socioresilient materials design

    Researchers in the MIT Department of Materials Science and Engineering (DMSE) have been awarded $750,000 in a cross-disciplinary project that aims to fundamentally redirect materials research and development toward more environmentally, socially, and economically sustainable and resilient materials. This “socioresilient materials design” will serve as the foundation for a new research and development framework that takes into account technical, environmental, and social factors from the beginning of the materials design and development process.

    Christine Ortiz, the Morris Cohen Professor of Materials Science and Engineering, and Ellan Spero PhD ’14, an instructor in DMSE, are leading this research effort, which includes Cornell University, Swansea University, Citrine Informatics, Station1, and 14 other organizations in academia, industry, venture capital, the social sector, government, and philanthropy.

    The team’s project, “Mind Over Matter: Socioresilient Materials Design,” emphasizes that circular design approaches, which aim to minimize waste and maximize the reuse, repair, and recycling of materials, are often insufficient to address negative repercussions for the planet and for human health and safety.

    Too often, society recognizes the unintended negative consequences only after the materials that make up our homes, cities, and systems have been in production and use for many years. Examples include disparate and negative public health impacts from industrial-scale manufacturing of materials, water and air contamination with harmful materials, and increased risk of fire in lower-income housing due to flawed materials usage and design. Adverse climate events, including drought, flood, extreme temperatures, and hurricanes, have accelerated materials degradation, for example in critical infrastructure, leading to amplified environmental damage and social injustice. While classical materials design and selection approaches are insufficient to address these challenges, the new research project aims to do just that.

    “The imagination and technical expertise that goes into materials design is too often separated from the environmental and social realities of extraction, manufacturing, and end-of-life for materials,” says Ortiz. 

    Drawing on materials science and engineering, chemistry, and computer science, the project will develop a framework for materials design and development. It will incorporate powerful computational capabilities — artificial intelligence and machine learning with physics-based materials models — plus rigorous methodologies from the social sciences and the humanities to understand what impacts any new material put into production could have on society.


    Detailed images from space offer clearer picture of drought effects on plants

    “MIT is a place where dreams come true,” says César Terrer, an assistant professor in the Department of Civil and Environmental Engineering. Here at MIT, Terrer says he’s given the resources needed to explore the ideas he finds most exciting, and at the top of his list is climate science. In particular, he is interested in plant-soil interactions, and how the two can mitigate impacts of climate change. In 2022, Terrer received seed grant funding from the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) to produce drought monitoring systems for farmers. The project is leveraging a new generation of remote sensing devices to provide high-resolution estimates of plant water stress at regional to global scales.

    Growing up in Granada, Spain, Terrer always had an aptitude and passion for science. He studied environmental science at the University of Murcia, where he interned in the Department of Ecology. Using computational analysis tools, he worked on modeling species distribution in response to human development. Early on in his undergraduate experience, Terrer says he regarded his professors as “superheroes” with a kind of scholarly prowess. He knew he wanted to follow in their footsteps by one day working as a faculty member in academia. Of course, there would be many steps along the way before achieving that dream. 

    Upon completing his undergraduate studies, Terrer set his sights on exciting and adventurous research roles. He thought perhaps he would conduct field work in the Amazon, engaging with native communities. But when the opportunity arose to work in Australia on a state-of-the-art climate change experiment that simulates future levels of carbon dioxide, he headed south to study how plants react to CO2 in a biome of native Australian eucalyptus trees. It was during this experience that Terrer started to take a keen interest in the carbon cycle and the capacity of ecosystems to buffer rising levels of CO2 caused by human activity.

    Around 2014, he began to delve deeper into the carbon cycle during his doctoral studies at Imperial College London. The primary question Terrer sought to answer during his PhD was “will plants be able to absorb predicted future levels of CO2 in the atmosphere?” To answer the question, Terrer became an early adopter of artificial intelligence, machine learning, and remote sensing to analyze data from real-life, global climate change experiments. His findings from these “ground truth” values and observations resulted in a paper in the journal Science. In it, he claimed that climate models most likely overestimated how much carbon plants will be able to absorb by the end of the century, by a factor of three. 

    After postdoctoral positions at Stanford University and the Universitat Autonoma de Barcelona, followed by a prestigious Lawrence Fellowship, Terrer says he had “too many ideas and not enough time to accomplish all those ideas.” He knew it was time to lead his own group. Not long after applying for faculty positions, he landed at MIT. 

    New ways to monitor drought

    Terrer is employing similar methods to those he used during his PhD to analyze data from all over the world for his J-WAFS project. He and postdoc Wenzhe Jiao collect data from remote sensing satellites and field experiments and use machine learning to come up with new ways to monitor drought. Terrer says Jiao is a “remote sensing wizard,” who fuses data from different satellite products to understand the water cycle. With Jiao’s hydrology expertise and Terrer’s knowledge of plants, soil, and the carbon cycle, the duo is a formidable team to tackle this project.

    According to the U.N. World Meteorological Organization, the number and duration of droughts have increased by 29 percent since 2000, as compared to the two previous decades. From the Horn of Africa to the Western United States, drought is devastating vegetation and severely stressing water supplies, compromising food production and spiking food insecurity. Drought monitoring can offer fundamental information on drought location, frequency, and severity, but assessing the impact of drought on vegetation is extremely challenging. This is because plants’ sensitivity to water deficits varies across species and ecosystems. 

    Terrer and Jiao are able to obtain a clearer picture of how drought is affecting plants by employing the latest generation of remote sensing observations, which offer images of the planet with remarkable spatial and temporal resolution. Satellite products such as Sentinel, Landsat, and Planet can provide daily images from space with such high resolution that individual trees can be discerned. Along with the images and datasets from satellites, the team is using ground-based observations from meteorological data. They are also using the MIT SuperCloud at MIT Lincoln Laboratory to process and analyze all of the data sets. The J-WAFS project is among the first to leverage high-resolution data to quantitatively measure plant drought impacts in the United States, with the hope of expanding to a global assessment in the future.

    Assisting farmers and resource managers 

    Every week, the U.S. Drought Monitor provides a map of drought conditions in the United States. The map is coarse in resolution, however, and serves more as a drought recap or summary; it cannot predict future drought scenarios. The lack of a comprehensive spatiotemporal evaluation of historic and future drought impacts on global vegetation productivity is detrimental to farmers both in the United States and worldwide.  

    Terrer and Jiao plan to generate metrics for plant water stress at an unprecedented resolution of 10-30 meters. This means that they will be able to provide drought monitoring maps at the scale of a typical U.S. farm, giving farmers more precise, useful data every one to two days. The team will use the information from the satellites to monitor plant growth and soil moisture, as well as the time lag of plant growth response to soil moisture. In this way, Terrer and Jiao say they will eventually be able to create a kind of “plant water stress forecast” that may be able to predict adverse impacts of drought four weeks in advance. “According to the current soil moisture and lagged response time, we hope to predict plant water stress in the future,” says Jiao. 
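    The lag-based idea can be sketched in a few lines of code. The toy example below is our own illustration, not the team’s actual model: given a soil-moisture series and a plant-growth series, it finds the time lag at which moisture best predicts growth, which is the quantity a lagged “plant water stress forecast” would exploit. All variable names and data are made up.

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def best_lag(moisture, growth, max_lag):
    """Lag (in time steps) at which soil moisture best predicts growth."""
    return max(range(1, max_lag + 1),
               key=lambda lag: pearson(moisture[:-lag], growth[lag:]))

# Made-up daily series in which growth follows moisture with a 2-day delay.
moisture = [5, 4, 3, 2, 1, 2, 3, 4, 5, 4]
growth = [0, 0] + moisture[:-2]
print(best_lag(moisture, growth, max_lag=3))  # → 2
```

    Once such a lag is known, a moisture deficit observed today flags likely plant stress that many days ahead, which is the intuition behind the four-week forecast described above.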

    The expected outcomes of this project will give farmers, land and water resource managers, and decision-makers more accurate data at the farm-specific level, allowing for better drought preparation, mitigation, and adaptation. “We expect to make our data open-access online, after we finish the project, so that farmers and other stakeholders can use the maps as tools,” says Jiao. 

    Terrer adds that the project “has the potential to help us better understand the future states of climate systems, and also identify the regional hot spots more likely to experience water crises at the national, state, local, and tribal government scales.” He also expects the project will enhance our understanding of global carbon-water-energy cycle responses to drought, with applications in determining climate change impacts on natural ecosystems as a whole.


    Exploring the nanoworld of biogenic gems

    A new research collaboration with The Bahrain Institute for Pearls and Gemstones (DANAT) will seek to develop advanced characterization tools for analyzing the properties of pearls and to explore technologies for assigning unique identifiers to individual pearls.

    The three-year project will be led by Admir Mašić, associate professor of civil and environmental engineering, in collaboration with Vladimir Bulović, the Fariborz Maseeh Chair in Emerging Technology and professor of electrical engineering and computer science.

    “Pearls are extremely complex and fascinating hierarchically ordered biological materials that are formed by a wide range of different species,” says Mašić. “Working with DANAT provides us a unique opportunity to apply our lab’s multi-scale materials characterization tools to identify potentially species-specific pearl fingerprints, while simultaneously addressing scientific research questions regarding the underlying biomineralization processes that could inform advances in sustainable building materials.”

    DANAT is a gemological laboratory specializing in the testing and study of natural pearls as a reflection of Bahrain’s pearling history and desire to protect and advance Bahrain’s pearling heritage. DANAT’s gemologists support clients and students through pearl, gemstone, and diamond identification services, as well as educational courses.

    Like many other precious gemstones, pearls can now be produced through scientific experimentation, says Noora Jamsheer, chief executive officer at DANAT. Over a century ago, cultured pearls entered markets as a competitive product to natural pearls, similar in appearance but different in value.

    “Gemological labs have been innovating scientific testing methods to differentiate between natural pearls and all other pearls that exist because of direct or indirect human intervention. Today the world knows natural pearls and cultured pearls. However, there are also pearls that fall in between these two categories,” says Jamsheer. “DANAT has the responsibility, as the leading gemological laboratory for pearl testing, to take the initiative necessary to ensure that testing methods keep pace with advances in the science of pearl cultivation.”

    Titled “Exploring the Nanoworld of Biogenic Gems,” the project will aim to improve the process of testing and identifying pearls by pinpointing morphological, micro-structural, optical, and chemical features sufficient to distinguish a pearl’s area of origin, method of growth, or both. MIT.nano, MIT’s open-access center for nanoscience and nanoengineering, will be the organizational home for the project, where Mašić and his team will utilize the facility’s state-of-the-art characterization tools.

    In addition to discovering new methodologies for establishing a pearl’s origin, the project aims to utilize machine learning to automate pearl classification. Furthermore, researchers will investigate techniques to create a unique identifier associated with an individual pearl.

    The initial sponsored research project is expected to last three years, with potential for continued collaboration based on key findings or building upon the project’s success to open new avenues for research into the structure, properties, and growth of pearls.


    Low-cost device can measure air pollution anywhere

    Air pollution is a major public health problem: The World Health Organization has estimated that it leads to over 4 million premature deaths worldwide annually. Still, it is not always extensively measured. But now an MIT research team is rolling out an open-source version of a low-cost, mobile pollution detector that could enable people to track air quality more widely.

    The detector, called Flatburn, can be made by 3D printing or by ordering inexpensive parts. The researchers have now tested and calibrated it in relation to existing state-of-the-art machines, and are publicly releasing all the information about it — how to build it, use it, and interpret the data.

    “The goal is for community groups or individual citizens anywhere to be able to measure local air pollution, identify its sources, and, ideally, create feedback loops with officials and stakeholders to create cleaner conditions,” says Carlo Ratti, director of MIT’s Senseable City Lab. 

    “We’ve been doing several pilots around the world, and we have refined a set of prototypes, with hardware, software, and protocols, to make sure the data we collect are robust from an environmental science point of view,” says Simone Mora, a research scientist at Senseable City Lab and co-author of a newly published paper detailing the scanner’s testing process. The Flatburn device is part of a larger project, known as City Scanner, using mobile devices to better understand urban life.

    “Hopefully with the release of the open-source Flatburn we can get grassroots groups, as well as communities in less developed countries, to follow our approach and build and share knowledge,” says An Wang, a researcher at Senseable City Lab and another of the paper’s co-authors.

    The paper, “Leveraging Machine Learning Algorithms to Advance Low-Cost Air Sensor Calibration in Stationary and Mobile Settings,” appears in the journal Atmospheric Environment.

    In addition to Wang, Mora, and Ratti the study’s authors are: Yuki Machida, a former research fellow at Senseable City Lab; Priyanka deSouza, an assistant professor of urban and regional planning at the University of Colorado at Denver; Tiffany Duhl, a researcher with the Massachusetts Department of Environmental Protection and a Tufts University research associate at the time of the project; Neelakshi Hudda, a research assistant professor at Tufts University; John L. Durant, a professor of civil and environmental engineering at Tufts University; and Fabio Duarte, principal research scientist at Senseable City Lab.

    The Flatburn concept at Senseable City Lab dates back to about 2017, when MIT researchers began prototyping a mobile pollution detector, originally to be deployed on garbage trucks in Cambridge, Massachusetts. The detectors are battery-powered and rechargeable, either from power sources or a solar panel, with data stored on a card in the device that can be accessed remotely.

    The current extension of that project involved testing the devices in New York City and the Boston area, comparing their performance with pollution detection systems already in operation. In New York, the researchers used five detectors to collect 1.6 million data points over four weeks in 2021, working with state officials to compare the results. In Boston, the team used mobile sensors, evaluating the Flatburn devices against a state-of-the-art system deployed by Tufts University along with a state agency.

    In both cases, the detectors were set up to measure concentrations of fine particulate matter as well as nitrogen dioxide, over an area of about 10 meters. Fine particulate matter refers to tiny particles often associated with burning matter, from power plants, automobile engines, fires, and more.

    The research team found that the mobile detectors estimated somewhat lower concentrations of fine particulate matter than the devices already in use, but with a strong enough correlation so that, with adjustments for weather conditions and other factors, the Flatburn devices can produce reliable results.
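    The adjustment described here amounts to fitting a calibration model that maps low-cost readings onto co-located reference readings. The sketch below is a minimal illustration of that idea using a univariate least-squares fit; it is not the Flatburn code, and the function names and numbers are assumptions (the published work uses richer machine-learning models with weather covariates).

```python
def fit_linear_calibration(xs, ys):
    """Ordinary least-squares fit y = a*x + b for paired sensor readings."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

def calibrate(reading, a, b):
    """Map a raw low-cost reading onto the reference-monitor scale."""
    return a * reading + b

# Made-up PM2.5 readings: low-cost sensor vs. co-located reference monitor.
raw = [8.0, 12.0, 20.0, 30.0]
ref = [10.0, 15.0, 25.0, 37.5]
a, b = fit_linear_calibration(raw, ref)
print(round(calibrate(16.0, a, b), 1))  # → 20.0
```

    Here the low-cost sensor underestimates by a consistent factor, so a fitted slope and intercept recover the reference values; in practice, adding weather variables as extra regressors follows the same pattern.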

    “After following their deployment for a few months, we can confidently say our low-cost monitors should behave the same way [as standard detectors],” Wang says. “We have a big vision, but we still have to make sure the data we collect is valid and can be used for regulatory and policy purposes.”

    Duarte adds: “If you follow these procedures with low-cost sensors you can still acquire good enough data to go back to [environmental] agencies with it, and say, ‘Let’s talk.’”

    The researchers did find that using the units in a mobile setting — on top of automobiles — means they will currently have an operating life of six months. They also identified a series of potential issues that people will have to deal with when using the Flatburn detectors generally. These include what the research team calls “drift,” the gradual changing of the detector’s readings over time, as well as “aging,” the more fundamental deterioration in a unit’s physical condition.

    Still, the researchers believe the units will function well, and they are providing complete instructions in their release of Flatburn as an open-source tool. That even includes guidance for working with officials, communities, and stakeholders to process the results and attempt to shape action.

    “It’s very important to engage with communities, to allow them to reflect on sources of pollution,” says Mora. 

    “The original idea of the project was to democratize environmental data, and that’s still the goal,” Duarte adds. “We want people to have the skills to analyze the data and engage with communities and officials.”


    Minimizing electric vehicles’ impact on the grid

    National and global plans to combat climate change include increasing the electrification of vehicles and the percentage of electricity generated from renewable sources. But some projections show that these trends might require costly new power plants to meet peak loads in the evening when cars are plugged in after the workday. What’s more, overproduction of power from solar farms during the daytime can waste valuable electricity-generation capacity.

    In a new study, MIT researchers have found that it’s possible to mitigate or eliminate both these problems without the need for advanced technological systems of connected devices and real-time communications, which could add to costs and energy consumption. Instead, strategically placing charging stations for electric vehicles (EVs), rather than letting them spring up anywhere, and setting up systems to delay the start of car charging could potentially make all the difference.

    The study, published today in the journal Cell Reports Physical Science, is by Zachary Needell PhD ’22, postdoc Wei Wei, and Professor Jessika Trancik of MIT’s Institute for Data, Systems, and Society.

    In their analysis, the researchers used data collected in two sample cities: New York and Dallas. The data were gathered from, among other sources, anonymized records collected via onboard devices in vehicles, and surveys that carefully sampled populations to cover variable travel behaviors. They showed the times of day cars are used and for how long, and how much time the vehicles spend at different kinds of locations — residential, workplace, shopping, entertainment, and so on.

    The findings, Trancik says, “round out the picture on the question of where to strategically locate chargers to support EV adoption and also support the power grid.”

    Better availability of charging stations at workplaces, for example, could help to soak up peak power being produced at midday from solar power installations, which might otherwise go to waste because it is not economical to build enough battery or other storage capacity to save all of it for later in the day. Thus, workplace chargers can provide a double benefit, helping to reduce the evening peak load from EV charging and also making use of the solar electricity output.

    These effects on the electric power system are considerable, especially if the system must meet charging demands for a fully electrified personal vehicle fleet alongside the peaks in other demand for electricity, for example on the hottest days of the year. If unmitigated, the evening peaks in EV charging demand could require installing upwards of 20 percent more power-generation capacity, the researchers say.

    “Slow workplace charging can be more preferable than faster charging technologies for enabling a higher utilization of midday solar resources,” Wei says.

    Meanwhile, with delayed home charging, each EV charger could be accompanied by a simple app to estimate the time to begin its charging cycle so that it charges just before it is needed the next day. Unlike other proposals that require a centralized control of the charging cycle, such a system needs no interdevice communication of information and can be preprogrammed — and can accomplish a major shift in the demand on the grid caused by increasing EV penetration. The reason it works so well, Trancik says, is because of the natural variability in driving behaviors across individuals in a population.
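    The delayed-charging rule can be preprogrammed with simple arithmetic. The sketch below is our own illustration of one plausible such rule, not the authors’ algorithm: start each vehicle’s charge as late as possible while still delivering the needed energy before the driver’s expected departure. All names and numbers are assumptions.

```python
from datetime import datetime, timedelta

def delayed_start(plug_in, departure, energy_needed_kwh, charger_kw):
    """Latest start time that still delivers the needed energy by departure."""
    charge_hours = energy_needed_kwh / charger_kw
    start = departure - timedelta(hours=charge_hours)
    # Never schedule a start before the car is actually plugged in.
    return max(start, plug_in)

plug_in = datetime(2023, 3, 1, 18, 0)    # plugged in at 6:00 p.m.
departure = datetime(2023, 3, 2, 7, 30)  # expected departure, 7:30 a.m.
start = delayed_start(plug_in, departure, energy_needed_kwh=30, charger_kw=6.0)
print(start)  # → 2023-03-02 02:30:00
```

    Because plug-in times, departure times, and energy needs vary naturally across drivers, start times computed this way spread out across the night, smoothing the evening peak without any interdevice communication.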

    By “home charging,” the researchers aren’t only referring to charging equipment in individual garages or parking areas. They say it’s essential to make charging stations available in on-street parking locations and in apartment building parking areas as well.

    Trancik says the findings highlight the value of combining the two measures — workplace charging and delayed home charging — to reduce peak electricity demand, store solar energy, and conveniently meet drivers’ charging needs on all days. As the team showed in earlier research, home charging can be a particularly effective component of a strategic package of charging locations; workplace charging, they have found, is not a good substitute for home charging for meeting drivers’ needs on all days.

    “Given that there’s a lot of public money going into expanding charging infrastructure,” Trancik says, “how do you incentivize the location such that this is going to be efficiently and effectively integrated into the power grid without requiring a lot of additional capacity expansion?” This research offers some guidance to policymakers on where to focus rules and incentives.

    “I think one of the fascinating things about these findings is that by being strategic you can avoid a lot of physical infrastructure that you would otherwise need,” she adds. “Your electric vehicles can displace some of the need for stationary energy storage, and you can also avoid the need to expand the capacity of power plants, by thinking about the location of chargers as a tool for managing demands — where they occur and when they occur.”

    Delayed home charging could make a surprising amount of difference, the team found. “It’s basically incentivizing people to begin charging later. This can be something that is preprogrammed into your chargers. You incentivize people to delay the onset of charging by a bit, so that not everyone is charging at the same time, and that smooths out the peak.”

    Such a program would require some advance commitment on the part of participants. “You would need to have enough people committing to this program in advance to avoid the investment in physical infrastructure,” Trancik says. “So, if you have enough people signing up, then you essentially don’t have to build those extra power plants.”

    It’s not a given that all of this would line up just right, and putting in place the right mix of incentives would be crucial. “If you want electric vehicles to act as an effective storage technology for solar energy, then the [EV] market needs to grow fast enough in order to be able to do that,” Trancik says.

    To best use public funds to help make that happen, she says, “you can incentivize charging installations, which would go through ideally a competitive process — in the private sector, you would have companies bidding for different projects, but you can incentivize installing charging at workplaces, for example, to tap into both of these benefits.” Chargers people can access when they are parked near their residences are also important, Trancik adds, but for other reasons. Home charging is one of the ways to meet charging needs while avoiding inconvenient disruptions to people’s travel activities.

    The study was supported by the European Regional Development Fund Operational Program for Competitiveness and Internationalization, the Lisbon Portugal Regional Operation Program, and the Portuguese Foundation for Science and Technology.


    Study: Smoke particles from wildfires can erode the ozone layer

    A wildfire can pump smoke up into the stratosphere, where the particles drift for over a year. A new MIT study has found that while suspended there, these particles can trigger chemical reactions that erode the protective ozone layer shielding the Earth from the sun’s damaging ultraviolet radiation.

    The study, which appears today in Nature, focuses on the smoke from the “Black Summer” megafire in eastern Australia, which burned from December 2019 into January 2020. The fires — the country’s most devastating on record — scorched tens of millions of acres and pumped more than 1 million tons of smoke into the atmosphere.

    The MIT team identified a new chemical reaction by which smoke particles from the Australian wildfires made ozone depletion worse. By triggering this reaction, the fires likely contributed to a 3-5 percent depletion of total ozone at mid-latitudes in the Southern Hemisphere, in regions overlying Australia, New Zealand, and parts of Africa and South America.

    The researchers’ model also indicates the fires had an effect in the polar regions, eating away at the edges of the ozone hole over Antarctica. By late 2020, smoke particles from the Australian wildfires widened the Antarctic ozone hole by 2.5 million square kilometers — 10 percent of its area compared to the previous year.

    It’s unclear what long-term effect wildfires will have on ozone recovery. The United Nations recently reported that the ozone hole, and ozone depletion around the world, is on a recovery track, thanks to a sustained international effort to phase out ozone-depleting chemicals. But the MIT study suggests that as long as these chemicals persist in the atmosphere, large fires could spark a reaction that temporarily depletes ozone.

    “The Australian fires of 2020 were really a wake-up call for the science community,” says Susan Solomon, the Lee and Geraldine Martin Professor of Environmental Studies at MIT and a leading climate scientist who first identified the chemicals responsible for the Antarctic ozone hole. “The effect of wildfires was not previously accounted for in [projections of] ozone recovery. And I think that effect may depend on whether fires become more frequent and intense as the planet warms.”

    The study is led by Solomon and MIT research scientist Kane Stone, along with collaborators from the Institute for Environmental and Climate Research in Guangzhou, China; the U.S. National Oceanic and Atmospheric Administration; the U.S. National Center for Atmospheric Research; and Colorado State University.

    Chlorine cascade

    The new study expands on a 2022 discovery by Solomon and her colleagues, in which they first identified a chemical link between wildfires and ozone depletion. The researchers found that chlorine-containing compounds, originally emitted by factories in the form of chlorofluorocarbons (CFCs), could react with the surface of fire aerosols. This interaction, they found, set off a chemical cascade that produced chlorine monoxide — the ultimate ozone-depleting molecule. Their results showed that the Australian wildfires likely depleted ozone through this newly identified chemical reaction.

    “But that didn’t explain all the changes that were observed in the stratosphere,” Solomon says. “There was a whole bunch of chlorine-related chemistry that was totally out of whack.”

    In the new study, the team took a closer look at the composition of molecules in the stratosphere following the Australian wildfires. They combed through three independent sets of satellite data and observed that in the months following the fires, concentrations of hydrochloric acid dropped significantly at mid-latitudes, while chlorine monoxide spiked.

    Hydrochloric acid (HCl) is present in the stratosphere as CFCs break down naturally over time. As long as chlorine is bound up in HCl, it doesn’t have a chance to destroy ozone. But if HCl breaks apart, the freed chlorine can react with ozone to form ozone-depleting chlorine monoxide.
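    The chemistry behind that last step is the standard stratospheric chlorine cycle (this summary is ours, for context, not a new result of the study):

        Cl + O3  → ClO + O2
        ClO + O  → Cl + O2
        ---------------------
        net: O3 + O → 2 O2

    Because the chlorine atom is regenerated on each pass, a single atom freed from HCl can catalytically destroy many thousands of ozone molecules before it is locked away again.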

    In the polar regions, HCl can break apart when it interacts with the surface of cloud particles at frigid temperatures of about 155 kelvins. However, this reaction was not expected to occur at mid-latitudes, where temperatures are much warmer.

    “The fact that HCl at mid-latitudes dropped by this unprecedented amount was to me kind of a danger signal,” Solomon says.

    She wondered: What if HCl could also interact with smoke particles, at warmer temperatures and in a way that released chlorine to destroy ozone? If such a reaction were possible, it would explain the imbalance of molecules and much of the ozone depletion observed following the Australian wildfires.

    Smoky drift

    Solomon and her colleagues dug through the chemical literature to see what sort of organic molecules could react with HCl at warmer temperatures to break it apart.

    “Lo and behold, I learned that HCl is extremely soluble in a whole broad range of organic species,” Solomon says. “It likes to glom on to lots of compounds.”

    The question then, was whether the Australian wildfires released any of those compounds that could have triggered HCl’s breakup and any subsequent depletion of ozone. When the team looked at the composition of smoke particles in the first days after the fires, the picture was anything but clear.

    “I looked at that stuff and threw up my hands and thought, there’s so much stuff in there, how am I ever going to figure this out?” Solomon recalls. “But then I realized it had actually taken some weeks before you saw the HCl drop, so you really need to look at the data on aged wildfire particles.”

    When the team expanded their search, they found that smoke particles persisted over months, circulating in the stratosphere at mid-latitudes, in the same regions and times when concentrations of HCl dropped.

    “It’s the aged smoke particles that really take up a lot of the HCl,” Solomon says. “And then you get, amazingly, the same reactions that you get in the ozone hole, but over mid-latitudes, at much warmer temperatures.”

    When the team incorporated this new chemical reaction into a model of atmospheric chemistry, and simulated the conditions of the Australian wildfires, they observed a 5 percent depletion of ozone throughout the stratosphere at mid-latitudes, and a 10 percent widening of the ozone hole over Antarctica.

    The reaction with HCl is likely the main pathway by which wildfires can deplete ozone. But Solomon suspects there may be other chlorine-containing compounds drifting in the stratosphere that wildfires could unlock.

    “There’s now sort of a race against time,” Solomon says. “Hopefully, chlorine-containing compounds will have been destroyed, before the frequency of fires increases with climate change. This is all the more reason to be vigilant about global warming and these chlorine-containing compounds.”

    This research was supported, in part, by NASA and the U.S. National Science Foundation.