More stories

  • J-PAL North America announces five new partnerships with state and local governments

    J-PAL North America, a research center in MIT’s Department of Economics, has announced five new partnerships with state and local governments across the United States after a call for proposals in early February. Over the next year, these partners will work with J-PAL North America’s State and Local Innovation Initiative to evaluate policy-relevant questions critical to alleviating poverty in the United States.

    J-PAL North America will work with the Colorado Department of Higher Education, Ohio’s Franklin County Department of Job and Family Services, the New Mexico Public Education Department, Puerto Rico’s Department of Economic Development and Commerce, and Oregon’s Jackson County Fire District 3. Each partner will leverage support from J-PAL North America to develop randomized evaluations, which have the potential to reveal widely applicable lessons about which programs and policies are most effective. 

    State and local leaders are vital stakeholders in developing rigorous evidence to understand which policies and programs work to reduce poverty, and why. By supporting each government partner in developing these five evaluation projects, J-PAL North America will keep the voices of policymakers and practitioners central to the research process. Each of this year’s selected projects seeks to address policy concerns that state and local governments have identified in J-PAL North America’s State and Local Learning Agenda as key areas for addressing barriers to mobility from poverty, including environment, education, economic security, and housing stability.

    One project looks to mitigate the emission of carbon co-pollutants, which cause disproportionately high rates of health problems among communities experiencing poverty. 

    Oregon’s Jackson County Fire District 3 will investigate the impact of subsidies on the uptake of wildfire risk reduction activities in a county severely affected by wildfires. “Wildfires have become more prevalent, longer lasting, and more destructive in Oregon and across the western United States. We also know that wildfire is disproportionately impacting our most vulnerable populations,” says Bob Horton, fire chief of Jackson County Fire District 3. “With technical support from J-PAL North America’s staff and this grant funding, we will devise the most current and effective strategy, deeply rooted in the evidence, to drive the take-up of home-hardening behaviors — methods to increase a home’s resistance to fire — and lower the risk to homes when faced with wildfire.”

    This project is in alignment with the priorities of J-PAL’s Environment, Energy, and Climate Change sector and its agenda for catalyzing more policy-relevant research on adaptation strategies. 

    Policymakers and researchers have also identified programs aimed at increasing opportunity within education as a key priority for evaluation. In partnering with J-PAL North America, the Colorado Department of Higher Education will assess the impact of My Colorado Journey, an online platform available to all Coloradans that provides information on education, training, and career pathways. 

    “As Colorado builds back stronger from the pandemic, we know that education and workforce development are at the center of Colorado’s recovery agenda,” shares Executive Director Angie Paccione of the Colorado Department of Higher Education. “Platforms like My Colorado Journey are key to supporting the education, training, and workforce exploration of Coloradans of any age. With support from J-PAL North America, we can better understand how to effectively serve Coloradans, further enhance this vital platform, and continue to build a Colorado for all.”

    Similarly, the New Mexico Public Education Department proposes its intervention within the context of New Mexico’s community school state initiative. It will look at the impact of case management and cash transfers on students at risk of multiple school transfers throughout their education, including children who are experiencing homelessness, migrant children, children in the foster care system, and military-connected children, among others. “New Mexico is delighted to partner with J-PAL North America to explore visionary pathways to success for highly mobile students,” says New Mexico Public Education Secretary (Designate) Kurt Steinhaus. “We look forward to implementing and testing innovative solutions, such as cash transfers, that can expand our current nationally recognized community schools strategy. Together, we aim to find solutions that meet the needs of highly mobile students and families who lack stable housing.”

    Another key priority for the intersection of policy and research is economic security — fostering upward mobility by providing individuals with resources to promote stable incomes and increase standards of living. By adjusting caseworker employment services to better align with local needs, Puerto Rico’s Department of Economic Development and Commerce (DEDC) looks to understand how individualized services can impact employment and earnings. 

    “The commitment of the government of Puerto Rico is to develop human resources to the highest quality standards,” says DEDC Secretary Cidre Miranda, whose statement was provided in Spanish and translated. “For the DEDC, it is fundamental to contribute to the development of initiatives like this one, because they have the objective of forging the future professionals that Puerto Rico requires and needs.” J-PAL North America’s partnership with DEDC has the potential to provide valuable lessons for other state and local programs also seeking to promote economic security. 

    Finally, Ohio’s Franklin County Department of Job and Family Services seeks to understand the impact of an eviction prevention workshop in a county with eviction rates that are higher than both the state and national average. “Stable housing should not be a luxury, but for far too many Franklin County families it has become one,” Deputy Franklin County Administrator Joy Bivens says. “We need to view our community’s affordable housing crisis through both a social determinants of health and racial equity lens. We are grateful for the opportunity to partner with J-PAL North America to ensure we are pursuing research-based interventions that, yes, address immediate housing needs, but also provide long-term stability so they can climb the economic ladder.”

    Franklin County Department of Job and Family Services’ evaluation aligns with policymaker and researcher interest in ensuring safe and affordable housing. The partnership has great potential not only to improve resources local to Franklin County but also, along with each of the other four partnerships, to provide a useful model for government agencies facing similar challenges.

    For more information on state and local policy priorities, see J-PAL North America’s State and Local Learning Agenda. To learn more about the State and Local Innovation Initiative, please visit the Initiative webpage or contact Initiative Manager Louise Geraghty.

  • Selective separation could help alleviate critical metals shortage

    New processing methods developed by MIT researchers could help ease looming shortages of the essential metals that power everything from phones to automotive batteries, by making it easier to separate these rare metals from mining ores and recycled materials.

    Selective adjustments within a chemical process called sulfidation allowed professor of metallurgy Antoine Allanore and his graduate student Caspar Stinn to successfully target and separate rare metals, such as the cobalt in a lithium-ion battery, from mixed-metal materials.

    As they report in the journal Nature, their processing techniques allow the metals to remain in solid form and be separated without dissolving the material. This avoids traditional but costly liquid separation methods that require significant energy. The researchers developed processing conditions for 56 elements and tested these conditions on 15 elements.

    Their sulfidation approach, they write in the paper, could reduce the capital costs of separating metals from mixed-metal oxides by 65 to 95 percent. Their selective processing could also reduce greenhouse gas emissions by 60 to 90 percent compared to traditional liquid-based separation.

    “We were excited to find replacements for processes that had really high levels of water usage and greenhouse gas emissions, such as lithium-ion battery recycling, rare-earth magnet recycling, and rare-earth separation,” says Stinn. “Those are processes that make materials for sustainability applications, but the processes themselves are very unsustainable.”

    The findings offer one way to alleviate a growing demand for minor metals like cobalt, lithium, and rare earth elements that are used in “clean” energy products like electric cars, solar cells, and electricity-generating windmills. According to a 2021 report by the International Energy Agency, the average amount of minerals needed for a new unit of power generation capacity has risen by 50 percent since 2010, as renewable energy technologies using these metals expand their reach.

    Opportunity for selectivity

    For more than a decade, the Allanore group has been studying the use of sulfide materials in developing new electrochemical routes for metal production. Sulfides are common materials, but the MIT scientists are experimenting with them under extreme conditions like very high temperatures — from 800 to 3,000 degrees Fahrenheit — that are used in manufacturing plants but not in a typical university lab.

    “We are looking at very well-established materials in conditions that are uncommon compared to what has been done before,” Allanore explains, “and that is why we are finding new applications or new realities.”

    In the process of synthesizing high-temperature sulfide materials to support electrochemical production, Stinn says, “we learned we could be very selective and very controlled about what products we made. And it was with that understanding that we realized, ‘OK, maybe there’s an opportunity for selectivity in separation here.’”

    The chemical reaction the researchers exploit combines a material containing a mix of metal oxides with sulfur to form new metal-sulfur compounds, or sulfides. By altering factors like temperature, gas pressure, and the addition of carbon to the reaction, Stinn and Allanore found that they could selectively create a variety of sulfide solids that can then be physically separated by several methods, including crushing the material and sorting the different-sized sulfides, or using magnets to pull certain sulfides away from the others.

    Current methods of rare metal separation rely on large quantities of energy, water, acids, and organic solvents, which have costly environmental impacts, says Stinn. “We are trying to use materials that are abundant, economical, and readily available for sustainable materials separation, and we have expanded that domain to now include sulfur and sulfides.”

    Stinn and Allanore used selective sulfidation to separate out economically important metals like cobalt in recycled lithium-ion batteries. They also used their techniques to separate dysprosium — a rare-earth element used in applications ranging from data storage devices to optoelectronics — from rare-earth-boron magnets, or from the typical mixture of oxides available from mining minerals such as bastnaesite.

    Leveraging existing technology

    Metals like cobalt and rare earths are only found in small amounts in mined materials, so industries must process large volumes of material to retrieve or recycle enough of these metals to be economically viable, Allanore explains. “It’s quite clear that these processes are not efficient. Most of the emissions come from the lack of selectivity and the low concentration at which they operate.”

    By eliminating the need for liquid separation and the extra steps and materials it requires to dissolve and then reprecipitate individual elements, the MIT researchers’ process significantly reduces the costs incurred and emissions produced during separation.

    “One of the nice things about separating materials using sulfidation is that a lot of existing technology and process infrastructure can be leveraged,” Stinn says. “It’s new conditions and new chemistries in established reactor styles and equipment.”

    The next step is to show that the process can work for large amounts of raw material — separating out 16 elements from rare-earth mining streams, for example. “Now we have shown that we can handle three or four or five of them together, but we have not yet processed an actual stream from an existing mine at a scale to match what’s required for deployment,” Allanore says.

    Stinn and colleagues in the lab have built a reactor that can process about 10 kilograms of raw material per day, and the researchers are starting conversations with several corporations about the possibilities.

    “We are discussing what it would take to demonstrate the performance of this approach with existing mineral and recycling streams,” Allanore says.

    This research was supported by the U.S. Department of Energy and the U.S. National Science Foundation.

  • A tool to speed development of new solar cells

    In the ongoing race to develop ever-better materials and configurations for solar cells, there are many variables that can be adjusted to try to improve performance, including material type, thickness, and geometric arrangement. Developing new solar cells has generally been a tedious process of making small changes to one of these parameters at a time. While computational simulators have made it possible to evaluate such changes without having to actually build each new variation for testing, the process remains slow.

    Now, researchers at MIT and Google Brain have developed a system that makes it possible not just to evaluate one proposed design at a time, but to provide information about which changes will yield the desired improvements. This could greatly increase the rate of discovery of new, improved configurations.

    The new system, called a differentiable solar cell simulator, is described in a paper published today in the journal Computer Physics Communications, written by MIT junior Sean Mann, research scientist Giuseppe Romano of MIT’s Institute for Soldier Nanotechnologies, and four others at MIT and at Google Brain.

    Traditional solar cell simulators, Romano explains, take the details of a solar cell configuration and produce as their output a predicted efficiency — that is, what percentage of the energy of incoming sunlight actually gets converted to an electric current. But this new simulator both predicts the efficiency and shows how much that output is affected by any one of the input parameters. “It tells you directly what happens to the efficiency if we make this layer a little bit thicker, or what happens to the efficiency if we for example change the property of the material,” he says.

    In short, he says, “we didn’t discover a new device, but we developed a tool that will enable others to discover more quickly other higher performance devices.” Using this system, “we are decreasing the number of times that we need to run a simulator to give quicker access to a wider space of optimized structures.” In addition, he says, “our tool can identify a unique set of material parameters that has been hidden so far because it’s very complex to run those simulations.”

    While traditional approaches use essentially a random search of possible variations, Mann says, with his tool “we can follow a trajectory of change because the simulator tells you what direction you want to be changing your device. That makes the process much faster because instead of exploring the entire space of opportunities, you can just follow a single path” that leads directly to improved performance.
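    The gradient-following idea can be sketched with a toy stand-in for the simulator. The function below, its peak at a thickness of 2.0, and the step size are all invented for illustration and are not drawn from the paper’s physics; the point is only that, once the derivative is available, simple gradient ascent walks straight toward the best design instead of sampling the space at random.

```python
# Toy, differentiable stand-in for a solar cell simulator (hypothetical
# model for illustration only): efficiency as a smooth function of one
# layer thickness, peaking at 0.20 when the thickness is 2.0.
def efficiency(t):
    return 0.20 - 0.05 * (t - 2.0) ** 2

def d_efficiency(t):
    # The gradient -- exactly the extra information a differentiable
    # simulator exposes alongside the predicted efficiency.
    return -0.10 * (t - 2.0)

# Gradient ascent: follow the derivative rather than searching at random.
t = 0.5
for _ in range(200):
    t += 0.5 * d_efficiency(t)

print(round(t, 3))  # converges to the optimal thickness, 2.0
```

    In practice the efficiency comes from solving the device physics, but once the simulator supplies derivatives, the optimization loop has exactly this shape.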

    Since advanced solar cells often are composed of multiple layers interlaced with conductive materials to carry electric charge from one to the other, this computational tool reveals how changing the relative thicknesses of these different layers will affect the device’s output. “This is very important because the thickness is critical. There is a strong interplay between light propagation and the thickness of each layer and the absorption of each layer,” Mann explains.

    Other variables that can be evaluated include the amount of doping (the introduction of atoms of another element) that each layer receives, or the dielectric constant of insulating layers, or the bandgap, a measure of the energy levels of photons of light that can be captured by different materials used in the layers.

    This simulator is now available as an open-source tool that can be used immediately to help guide research in this field, Romano says. “It is ready, and can be taken up by industry experts.” To make use of it, researchers would couple this device’s computations with an optimization algorithm, or even a machine learning system, to rapidly assess a wide variety of possible changes and home in quickly on the most promising alternatives.

    At this point, the simulator is based on just a one-dimensional version of the solar cell, so the next step will be to expand its capabilities to include two- and three-dimensional configurations. But even this 1D version “can cover the majority of cells that are currently under production,” Romano says. Certain variations, such as so-called tandem cells using different materials, cannot yet be simulated directly by this tool, but “there are ways to approximate a tandem solar cell by simulating each of the individual cells,” Mann says.

    The simulator is “end-to-end,” Romano says, meaning it computes the sensitivity of the efficiency, also taking into account light absorption. He adds: “An appealing future direction is composing our simulator with advanced existing differentiable light-propagation simulators, to achieve enhanced accuracy.”

    Moving forward, Romano says, because this is an open-source code, “that means that once it’s up there, the community can contribute to it. And that’s why we are really excited.” Although this research group is “just a handful of people,” he says, now anyone working in the field can make their own enhancements and improvements to the code and introduce new capabilities.

    “Differentiable physics is going to provide new capabilities for the simulations of engineered systems,” says Venkat Viswanathan, an associate professor of mechanical engineering at Carnegie Mellon University, who was not associated with this work. “The differentiable solar cell simulator is an incredible example of differentiable physics that can now provide new capabilities to optimize solar cell device performance,” he says, calling the study “an exciting step forward.”

    In addition to Mann and Romano, the team included Eric Fadel and Steven Johnson at MIT, and Samuel Schoenholz and Ekin Cubuk at Google Brain. The work was supported in part by Eni S.p.A. and the MIT Energy Initiative, and the MIT Quest for Intelligence.

  • Q&A: More-sustainable concrete with machine learning

    As a building material, concrete withstands the test of time. Its use dates back to early civilizations, and today it is the most popular composite choice in the world. However, it’s not without its faults. Production of its key ingredient, cement, contributes 8-9 percent of global anthropogenic CO2 emissions and 2-3 percent of energy consumption, figures that are only projected to increase in the coming years. With United States infrastructure aging, the federal government recently passed a milestone bill to revitalize and upgrade it, along with a push to reduce greenhouse gas emissions where possible, putting concrete in the crosshairs for modernization, too.

    Elsa Olivetti, the Esther and Harold E. Edgerton Associate Professor in the MIT Department of Materials Science and Engineering, and Jie Chen, MIT-IBM Watson AI Lab research scientist and manager, think artificial intelligence can help meet this need by designing and formulating new, more sustainable concrete mixtures, with lower costs and carbon dioxide emissions, while improving material performance and reusing manufacturing byproducts in the material itself. Olivetti’s research improves environmental and economic sustainability of materials, and Chen develops and optimizes machine learning and computational techniques, which he can apply to materials reformulation. Olivetti and Chen, along with their collaborators, have recently teamed up for an MIT-IBM Watson AI Lab project to make concrete more sustainable for the benefit of society, the climate, and the economy.

    Q: What applications does concrete have, and what properties make it a preferred building material?

    Olivetti: Concrete is the dominant building material globally with an annual consumption of 30 billion metric tons. That is over 20 times the next most produced material, steel, and the scale of its use leads to considerable environmental impact, approximately 5-8 percent of global greenhouse gas (GHG) emissions. It can be made locally, has a broad range of structural applications, and is cost-effective. Concrete is a mixture of fine and coarse aggregate, water, cement binder (the glue), and other additives.

    Q: Why isn’t it sustainable, and what research problems are you trying to tackle with this project?

    Olivetti: The community is working on several ways to reduce the impact of this material, including the use of alternative fuels for heating the cement mixture, improvements in energy and materials efficiency, and carbon sequestration at production facilities, but one important opportunity is to develop an alternative to the cement binder.

    While cement is 10 percent of the concrete mass, it accounts for 80 percent of the GHG footprint. This impact comes partly from the fuel burned to heat and drive the chemical reaction required in manufacturing, and partly from the reaction itself, which releases CO2 during the calcination of limestone. Therefore, partially replacing the input ingredients to cement (traditionally ordinary Portland cement, or OPC) with alternative materials from waste and byproducts can reduce the GHG footprint. But use of these alternatives is not inherently more sustainable, because wastes might have to travel long distances, which adds to fuel emissions and cost, or might require pretreatment processes. The optimal way to make use of these alternative materials will be situation-dependent. And because of the vast scale, we also need solutions that account for the huge volumes of concrete required. This project is trying to develop novel concrete mixtures that will decrease the GHG impact of cement and concrete, moving away from trial-and-error processes toward ones that are more predictive.

    Chen: If we want to fight climate change and make our environment better, are there alternative ingredients or a reformulation we could use so that less greenhouse gas is emitted? We hope that through this project using machine learning we’ll be able to find a good answer.

    Q: Why is this problem important to address now, at this point in history?

    Olivetti: There is urgent need to address greenhouse gas emissions as aggressively as possible, and the road to doing so isn’t necessarily straightforward for all areas of industry. For transportation and electricity generation, there are paths that have been identified to decarbonize those sectors. We need to move much more aggressively to achieve those in the time needed; further, the technological approaches to achieve that are more clear. However, for tough-to-decarbonize sectors, such as industrial materials production, the pathways to decarbonization are not as mapped out.

    Q: How are you planning to address this problem to produce better concrete?

    Olivetti: The goal is to predict mixtures that will both meet performance criteria, such as strength and durability, with those that also balance economic and environmental impact. A key to this is to use industrial wastes in blended cements and concretes. To do this, we need to understand the glass and mineral reactivity of constituent materials. This reactivity not only determines the limit of the possible use in cement systems but also controls concrete processing, and the development of strength and pore structure, which ultimately control concrete durability and life-cycle CO2 emissions.

    Chen: We investigate using waste materials to replace part of the cement component. This is something that we’ve hypothesized would be more sustainable and economical — waste materials are common, and they cost less. Because of the reduction in the use of cement, the final concrete product would be responsible for much less carbon dioxide production. Figuring out the right concrete mixture proportion that makes durable concretes while achieving other goals is a very challenging problem. Machine learning is giving us an opportunity to explore advances in predictive modeling, uncertainty quantification, and optimization to solve the issue. What we are doing is exploring options using deep learning as well as multi-objective optimization techniques to find an answer. These efforts are now more feasible to carry out, and they will produce results with the reliability estimates we need to understand what makes a good concrete.

    Q: What kinds of AI and computational techniques are you employing for this?

    Olivetti: We use AI techniques to collect data on individual concrete ingredients, mix proportions, and concrete performance from the literature through natural language processing. We also add data obtained from industry and/or high throughput atomistic modeling and experiments to optimize the design of concrete mixtures. Then we use this information to develop insight into the reactivity of possible waste and byproduct materials as alternatives to cement materials for low-CO2 concrete. By incorporating generic information on concrete ingredients, the resulting concrete performance predictors are expected to be more reliable and transformative than existing AI models.
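    As a loose illustration of the literature-mining step (the group’s actual pipeline uses full natural-language-processing models, and the sentence and patterns below are invented for this example), even a simple regular expression shows the shape of the task: turning free text into structured quantities.

```python
import re

# Invented example sentence; real inputs come from papers and patents.
sentence = ("The mix used 350 kg/m3 of cement, a water-cement ratio of 0.45, "
            "and 20 percent fly ash replacement.")

# Number/unit pairs, plus a targeted pattern for the water-cement ratio.
quantities = re.findall(r"(\d+(?:\.\d+)?)\s*(kg/m3|percent)", sentence)
ratio = re.search(r"water-cement ratio of (\d+(?:\.\d+)?)", sentence)

print(quantities)      # [('350', 'kg/m3'), ('20', 'percent')]
print(ratio.group(1))  # 0.45
```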

    Chen: The final objective is to figure out what constituents, and how much of each, to put into the recipe for producing the concrete that optimizes the various factors: strength, cost, environmental impact, performance, etc. For each of the objectives, we need certain models: We need a model to predict the performance of the concrete (like, how long does it last and how much weight does it sustain?), a model to estimate the cost, and a model to estimate how much carbon dioxide is generated. We will need to build these models by using data from literature, from industry, and from lab experiments.

    We are exploring Gaussian process models to predict the concrete strength days and weeks into the future. This model can give us an uncertainty estimate of the prediction as well. Such a model requires parameters to be specified, and we will use another model to calculate them. At the same time, we also explore neural network models, because we can inject domain knowledge from human experience into them. Some models are as simple as multilayer perceptrons, while some are more complex, like graph neural networks. The goal here is that we want to have a model that is not only accurate but also robust — the input data is noisy, and the model must embrace the noise so that its prediction is still accurate and reliable for the multi-objective optimization.
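    For intuition, here is a minimal from-scratch one-dimensional Gaussian process regression with an RBF kernel, on made-up curing-age/strength data (the numbers are invented, and real models are far richer). It shows the key property Chen describes: a prediction together with an uncertainty that is small near the data and grows far from it.

```python
import math

def rbf(a, b, length=1.0):
    # Squared-exponential (RBF) kernel.
    return math.exp(-0.5 * ((a - b) / length) ** 2)

def solve(A, b):
    # Gaussian elimination with partial pivoting; fine for tiny systems.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gp_predict(xs, ys, x_star, noise=1e-6):
    # Posterior mean and variance of a zero-mean GP at x_star.
    n = len(xs)
    K = [[rbf(xs[i], xs[j]) + (noise if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    k_star = [rbf(x, x_star) for x in xs]
    mean = sum(ks * a for ks, a in zip(k_star, solve(K, ys)))
    v = solve(K, k_star)
    var = rbf(x_star, x_star) - sum(ks * vi for ks, vi in zip(k_star, v))
    return mean, var

# Hypothetical measurements: strength (MPa) at curing ages (days).
days = [1.0, 3.0, 7.0]
strength = [10.0, 22.0, 30.0]

m3, v3 = gp_predict(days, strength, 3.0)     # at a data point
m14, v14 = gp_predict(days, strength, 14.0)  # far from the data
print(round(m3, 1))   # confident prediction near the data
print(v3 < 1e-3, v14 > 0.9)  # variance is tiny near data, large far away
```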

    Once we have built models that we are confident with, we will inject their predictions and uncertainty estimates into the optimization of multiple objectives, under constraints and under uncertainties.

    Q: How do you balance cost-benefit trade-offs?

    Chen: The multiple objectives we consider are not necessarily consistent, and sometimes they are at odds with each other. The goal is to identify scenarios where the values for our objectives cannot be further improved simultaneously without compromising one or a few. For example, if you want to further reduce the cost, you probably have to sacrifice performance or accept greater environmental impact. Eventually, we will give the results to policymakers, and they will look into the results and weigh the options. For example, they may be able to tolerate a slightly higher cost for a significant reduction in greenhouse gas. Alternatively, if the cost varies little but the concrete performance changes drastically, say, doubles or triples, then this is definitely a favorable outcome.
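    The trade-off logic Chen describes can be made concrete with a tiny Pareto-front computation over hypothetical mixes (mix names and numbers below are invented): a mix stays on the front only if no other mix is at least as good on every objective and strictly better on one.

```python
# Hypothetical candidate concrete mixes: (cost per m^3, kg CO2 per m^3).
# Both objectives are minimized.
mixes = {
    "A": (100.0, 300.0),
    "B": (120.0, 220.0),
    "C": (150.0, 210.0),
    "D": (130.0, 260.0),   # dominated by B: B is cheaper AND lower-emission
    "E": (160.0, 205.0),
}

def dominates(p, q):
    # p dominates q if p is no worse on every objective and better on one.
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

pareto = sorted(
    name for name, p in mixes.items()
    if not any(dominates(q, p) for other, q in mixes.items() if other != name)
)
print(pareto)  # ['A', 'B', 'C', 'E'] -- the options left for policymakers
```

    Everything on the front is a defensible choice; picking among them is exactly the policy judgment described above.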

    Q: What kinds of challenges do you face in this work?

    Chen: The data we get either from industry or from literature are very noisy; the concrete measurements can vary a lot, depending on where and when they are taken. There are also substantial missing data when we integrate sources, so we need to spend a lot of effort to organize the data and make them usable for building and training machine learning models. We also explore imputation techniques that substitute missing features, as well as models that tolerate missing features, in our predictive modeling and uncertainty estimation.
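    As a baseline for the imputation step Chen mentions, here is a sketch of simple mean imputation on hypothetical records (the values are invented; real pipelines would use richer methods, such as iterative or model-based imputation):

```python
# Hypothetical lab records with missing entries (None): each row is
# [water-cement ratio, 28-day strength in MPa]. Values are invented.
records = [
    [0.40, 35.0],
    [0.50, None],
    [None, 28.0],
    [0.45, 30.0],
]

def impute_mean(rows):
    # Replace each missing value with the mean of that column's
    # observed entries -- the simplest imputation baseline.
    cols = len(rows[0])
    means = []
    for c in range(cols):
        observed = [r[c] for r in rows if r[c] is not None]
        means.append(sum(observed) / len(observed))
    return [[means[c] if r[c] is None else r[c] for c in range(cols)]
            for r in rows]

filled = impute_mean(records)
print(filled[1][1])            # imputed strength: 31.0 (mean of 35, 28, 30)
print(round(filled[2][0], 2))  # imputed ratio: 0.45
```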

    Q: What do you hope to achieve through this work?

    Chen: In the end, we are suggesting either one or a few concrete recipes, or a continuum of recipes, to manufacturers and policymakers. We hope that this will provide invaluable information for both the construction industry and for the effort of protecting our beloved Earth.

    Olivetti: We’d like to develop a robust way to design cements that make use of waste materials to lower their CO2 footprint. Nobody is trying to make waste, so we can’t rely on one stream as a feedstock if we want this to be massively scalable. We have to be flexible and robust enough to shift with changes in feedstocks, and for that we need improved understanding. Our approach to developing local, dynamic, and flexible alternatives is to learn what makes these wastes reactive, so we know how to optimize their use and do so as broadly as possible. We do that through predictive model development, using software we have developed in my group to automatically extract data from over 5 million texts and patents on various topics. We link this to the creative capabilities of our IBM collaborators to design methods that predict the final impact of new cements. If we are successful, we can lower the emissions of this ubiquitous material and play our part in achieving carbon emissions mitigation goals.

    Other researchers involved with this project include Stefanie Jegelka, the X-Window Consortium Career Development Associate Professor in the MIT Department of Electrical Engineering and Computer Science; Richard Goodwin, IBM principal researcher; Soumya Ghosh, MIT-IBM Watson AI Lab research staff member; and Kristen Severson, former research staff member. Collaborators included Nghia Hoang, former research staff member with MIT-IBM Watson AI Lab and IBM Research; and Jeremy Gregory, research scientist in the MIT Department of Civil and Environmental Engineering and executive director of the MIT Concrete Sustainability Hub.

    This research is supported by the MIT-IBM Watson AI Lab.

  • Climate modeling confirms historical records showing rise in hurricane activity

    When forecasting how storms may change in the future, it helps to know something about their past. Judging from historical records dating back to the 1850s, hurricanes in the North Atlantic have become more frequent over the last 150 years.

    However, scientists have questioned whether this upward trend is a reflection of reality, or simply an artifact of lopsided record-keeping. If 19th-century storm trackers had access to 21st-century technology, would they have recorded more storms? This inherent uncertainty has kept scientists from relying on storm records, and the patterns within them, for clues to how climate influences storms.

    A new MIT study published today in Nature Communications has used climate modeling, rather than storm records, to reconstruct the history of hurricanes and tropical cyclones around the world. The study finds that North Atlantic hurricanes have indeed increased in frequency over the last 150 years, similar to what historical records have shown.

    In particular, major hurricanes, and hurricanes in general, are more frequent today than in the past. And those that make landfall appear to have grown more powerful, carrying more destructive potential.

    Curiously, while the North Atlantic has seen an overall increase in storm activity, the same trend was not observed in the rest of the world. The study found that the frequency of tropical cyclones globally has not changed significantly in the last 150 years.

    “The evidence does point, as the original historical record did, to long-term increases in North Atlantic hurricane activity, but no significant changes in global hurricane activity,” says study author Kerry Emanuel, the Cecil and Ida Green Professor of Atmospheric Science in MIT’s Department of Earth, Atmospheric, and Planetary Sciences. “It certainly will change the interpretation of climate’s effects on hurricanes — that it’s really the regionality of the climate, and that something happened to the North Atlantic that’s different from the rest of the globe. It may have been caused by global warming, which is not necessarily globally uniform.”

    Chance encounters

    The most comprehensive record of tropical cyclones is compiled in a database known as the International Best Track Archive for Climate Stewardship (IBTrACS). This historical record includes modern measurements from satellites and aircraft that date back to the 1940s. The database’s older records are based on reports from ships and islands that happened to be in a storm’s path. These earlier records date back to 1851, and overall the database shows an increase in North Atlantic storm activity over the last 150 years.

    “Nobody disagrees that that’s what the historical record shows,” Emanuel says. “On the other hand, most sensible people don’t really trust the historical record that far back in time.”

    Recently, scientists have used a statistical approach to identify storms that the historical record may have missed. To do so, they consulted all the digitally reconstructed shipping routes in the Atlantic over the last 150 years and mapped these routes over modern-day hurricane tracks. They then estimated the chance that a ship would encounter or entirely miss a hurricane’s presence. This analysis found a significant number of early storms were likely missed in the historical record. Accounting for these missed storms, they concluded that there was a chance that storm activity had not changed over the last 150 years.
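    The logic of that statistical correction can be sketched in a few lines: treat the ocean as a grid, mark the cells crossed by digitized shipping routes, and count a storm as "observed" only if its track passes through a ship-visited cell. Everything here (grid size, route density, the random-walk storm tracks) is a toy stand-in for the actual reconstruction.

```python
import random

random.seed(0)

GRID = 20  # toy 20 x 20 ocean grid
# Cells crossed by digitized shipping routes (illustrative random sample).
ship_cells = {(random.randrange(GRID), random.randrange(GRID))
              for _ in range(80)}

def storm_track(length=10):
    """Random-walk stand-in for a hurricane track across the grid."""
    x, y = random.randrange(GRID), random.randrange(GRID)
    track = [(x, y)]
    for _ in range(length - 1):
        x = max(0, min(GRID - 1, x + random.choice((-1, 0, 1))))
        y = max(0, min(GRID - 1, y + random.choice((-1, 0, 1))))
        track.append((x, y))
    return track

# A storm is "detected" only if some point on its track meets a ship cell.
n_storms = 1000
detected = sum(any(cell in ship_cells for cell in storm_track())
               for _ in range(n_storms))
print(f"detection probability ~ {detected / n_storms:.2f}")
```

    The fraction of undetected storms is then used to inflate the early counts, which is how the earlier studies concluded the apparent rise might be a record-keeping artifact.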

    But Emanuel points out that hurricane paths in the 19th century may have looked different from today’s tracks. What’s more, the scientists may have missed key shipping routes in their analysis, as older routes have not yet been digitized.

    “All we know is, if there had been a change (in storm activity), it would not have been detectable, using digitized ship records,” Emanuel says. “So I thought, there’s an opportunity to do better, by not using historical data at all.”

    Seeding storms

    Instead, he estimated past hurricane activity using dynamical downscaling — a technique that his group developed and has applied over the last 15 years to study climate’s effect on hurricanes. The technique starts with a coarse global climate simulation and embeds within this model a finer-resolution model that simulates features as small as hurricanes. The combined models are then fed with real-world measurements of atmospheric and ocean conditions. Emanuel then scatters the realistic simulation with hurricane “seeds” and runs the simulation forward in time to see which seeds bloom into full-blown storms.
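    A stripped-down caricature of the seeding idea: scatter many weak proto-storms into a simulated environment and count how many intensify, with warmer conditions making survival more likely. The survival rule below is a stand-in invented for illustration, not the actual downscaling physics.

```python
import random

random.seed(1)

def run_season(n_seeds, sst_anomaly):
    """Count seeds that 'bloom' into storms in one simulated season.

    The survival probability is a toy function of sea surface temperature
    anomaly (degrees C), chosen only to illustrate the counting approach.
    """
    storms = 0
    for _ in range(n_seeds):
        survival_prob = min(1.0, 0.05 + 0.1 * max(0.0, sst_anomaly))
        if random.random() < survival_prob:
            storms += 1
    return storms

for anomaly in (0.0, 0.5, 1.0):
    print(f"SST anomaly {anomaly:+.1f} C -> {run_season(2000, anomaly)} storms")
```

    In the real technique the "environment" is a high-resolution model embedded in a climate reanalysis, and each seed either decays or deepens according to the simulated atmospheric and ocean state rather than a single probability.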

    For the new study, Emanuel embedded a hurricane model into a climate “reanalysis” — a type of climate model that combines observations from the past with climate simulations to generate accurate reconstructions of past weather patterns and climate conditions. He used a particular subset of climate reanalyses that only accounts for observations collected from the surface — for instance from ships, which have recorded weather conditions and sea surface temperatures consistently since the 1850s, as opposed to from satellites, which only began systematic monitoring in the 1970s.

    “We chose to use this approach to avoid any artificial trends brought about by the introduction of progressively different observations,” Emanuel explains.

    He ran an embedded hurricane model on three different climate reanalyses, simulating tropical cyclones around the world over the past 150 years. Across all three models, he observed “unequivocal increases” in North Atlantic hurricane activity.

    “There’s been this quite large increase in activity in the Atlantic since the mid-19th century, which I didn’t expect to see,” Emanuel says.

    Within this overall rise in storm activity, he also observed a “hurricane drought” — a period during the 1970s and 80s when the number of yearly hurricanes momentarily dropped. This pause in storm activity can also be seen in historical records, and Emanuel’s group proposes a cause: sulfate aerosols, which were byproducts of fossil fuel combustion, likely set off a cascade of climate effects that cooled the North Atlantic and temporarily suppressed hurricane formation.

    “The general trend over the last 150 years was increasing storm activity, interrupted by this hurricane drought,” Emanuel notes. “And at this point, we’re more confident of why there was a hurricane drought than why there is an ongoing, long-term increase in activity that began in the 19th century. That is still a mystery, and it bears on the question of how global warming might affect future Atlantic hurricanes.”

    This research was supported, in part, by the National Science Foundation.

    SMART researchers develop method for early detection of bacterial infection in crops

    Researchers from the Disruptive and Sustainable Technologies for Agricultural Precision (DiSTAP) Interdisciplinary Research Group (IRG) of the Singapore-MIT Alliance for Research and Technology (SMART), MIT’s research enterprise in Singapore, and their local collaborators from Temasek Life Sciences Laboratory (TLL) have developed a rapid Raman spectroscopy-based method for detecting and quantifying early bacterial infection in crops. The Raman spectral biomarkers and diagnostic algorithm enable noninvasive and early diagnosis of bacterial infections in crop plants, which can be critical for plant disease management and agricultural productivity.

    Due to the increasing demand for global food supply and security, there is a growing need to improve agricultural production systems and increase crop productivity. Globally, bacterial pathogen infection in crop plants is one of the major contributors to agricultural yield losses. Climate change also adds to the problem by accelerating the spread of plant diseases. Hence, developing methods for rapid and early detection of pathogen-infected crops is important to improve plant disease management and reduce crop loss.

    The breakthrough by SMART and TLL researchers offers a faster and more accurate method to detect bacterial infection in crop plants at an earlier stage, as compared to existing techniques. The new results appear in a paper titled “Rapid detection and quantification of plant innate immunity response using Raman spectroscopy” published in the journal Frontiers in Plant Science.

    “The early detection of pathogen-infected crop plants is a significant step to improve plant disease management,” says Chua Nam Hai, DiSTAP co-lead principal investigator, professor, TLL deputy chair, and co-corresponding author. “It will allow the fast and selective removal of pathogen load and curb the further spread of disease to other neighboring crops.”

    Traditionally, plant disease diagnosis involves a simple visual inspection of plants for disease symptoms and severity. “Visual inspection methods are often ineffective, as disease symptoms usually manifest only at relatively later stages of infection, when the pathogen load is already high and reparative measures are limited. Hence, new methods are required for rapid and early detection of bacterial infection. The idea would be akin to having medical tests to identify human diseases at an early stage, instead of waiting for visual symptoms to show, so that early intervention or treatment can be applied,” says MIT Professor Rajeev Ram, who is a DiSTAP principal investigator and co-corresponding author on the paper.

    While existing techniques, such as current molecular detection methods, can detect bacterial infection in plants, they are often limited in their use. Molecular detection methods largely depend on the availability of pathogen-specific gene sequences or antibodies to identify bacterial infection in crops; the implementation is also time-consuming and nonadaptable for on-site field application due to the high cost and bulky equipment required, making it impractical for use in agricultural farms.

    “At DiSTAP, we have developed a quantitative Raman spectroscopy-based algorithm that can help farmers to identify bacterial infection rapidly. The developed diagnostic algorithm makes use of Raman spectral biomarkers and can be easily implemented in cloud-based computing and prediction platforms. It is more effective than existing techniques as it enables accurate identification and early detection of bacterial infection, both of which are crucial to saving crop plants that would otherwise be destroyed,” explains Gajendra Pratap Singh, scientific director and principal investigator at DiSTAP and co-lead author.

    A portable Raman system can be used on farms and provides farmers with an accurate and simple yes-or-no response when used to test for the presence of bacterial infections in crops. The development of this rapid and noninvasive method could improve plant disease management and have a transformative impact on agricultural farms by efficiently reducing agricultural yield loss and increasing productivity.
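    The yes-or-no call described above can be illustrated with a minimal biomarker-threshold classifier: compare the intensity of a stress-associated Raman band against a stable reference band. The band positions, the direction of the change, and the cutoff below are hypothetical placeholders, not the published biomarkers or algorithm.

```python
def infected(spectrum, stress_band=1525, reference_band=1000, cutoff=0.8):
    """Toy yes/no infection call from a Raman spectrum.

    spectrum: dict mapping wavenumber (cm^-1) to measured intensity.
    In this sketch, infection is assumed to deplete the stress-associated
    band relative to the reference band (an illustrative assumption).
    """
    ratio = spectrum[stress_band] / spectrum[reference_band]
    return ratio < cutoff

healthy = {1525: 950.0, 1000: 1000.0}
stressed = {1525: 600.0, 1000: 1000.0}
print(infected(healthy), infected(stressed))  # False True
```

    A rule this simple is what makes a field-portable instrument practical: the spectrometer measures a handful of band intensities and the decision reduces to a ratio and a threshold that can run on a cloud platform or on the device itself.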

    “Using the diagnostic algorithm method, we experimented on several edible plants such as choy sum,” says DiSTAP and TLL principal investigator and co-corresponding author Rajani Sarojam. “The results showed that the Raman spectroscopy-based method can swiftly detect and quantify innate immunity response in plants infected with bacterial pathogens. We believe that this technology will be beneficial for agricultural farms to increase their productivity by reducing their yield loss due to plant diseases.”

    The researchers are currently working on the development of high-throughput, custom-made portable or hand-held Raman spectrometers that will allow Raman spectral analysis to be quickly and easily performed on field-grown crops.

    SMART and TLL developed and discovered the diagnostic algorithm and Raman spectral biomarkers. TLL also confirmed and validated the detection method through mutant plants. The research is carried out by SMART and supported by the National Research Foundation of Singapore under its Campus for Research Excellence And Technological Enterprise (CREATE) program.

    SMART was established by MIT and the NRF in 2007. The first entity in CREATE developed by NRF, SMART serves as an intellectual and innovation hub for research interactions between MIT and Singapore, undertaking cutting-edge research projects in areas of interest to both Singapore and MIT. SMART currently comprises an Innovation Center and five IRGs: Antimicrobial Resistance, Critical Analytics for Manufacturing Personalized-Medicine, DiSTAP, Future Urban Mobility, and Low Energy Electronic Systems. SMART research is funded by the NRF under the CREATE program.

    Led by Professor Michael Strano of MIT and Professor Chua Nam Hai of Temasek Life Sciences Laboratory, the DiSTAP program addresses deep problems in food production in Singapore and the world by developing a suite of impactful and novel analytical, genetic, and biomaterial technologies. The goal is to fundamentally change how plant biosynthetic pathways are discovered, monitored, engineered, and ultimately translated to meet the global demand for food and nutrients. Scientists from MIT, TLL, Nanyang Technological University, and the National University of Singapore are collaboratively developing new tools for the continuous measurement of important plant metabolites and hormones for novel discovery, deeper understanding, and control of plant biosynthetic pathways in ways not yet possible, especially in the context of green leafy vegetables; leveraging these new techniques to engineer plants with highly desirable properties for global food security, including high-yield density production, and drought and pathogen resistance; and applying these technologies to improve urban farming.

    An energy-storage solution that flows like soft-serve ice cream

    Batteries made from an electrically conductive mixture the consistency of molasses could help solve a critical piece of the decarbonization puzzle. An interdisciplinary team from MIT has found that an electrochemical technology called a semisolid flow battery can be a cost-competitive form of energy storage and backup for variable renewable energy (VRE) sources such as wind and solar. The group’s research is described in a paper published in Joule.

    “The transition to clean energy requires energy storage systems of different durations for when the sun isn’t shining and the wind isn’t blowing,” says Emre Gençer, a research scientist with the MIT Energy Initiative (MITEI) and a member of the team. “Our work demonstrates that a semisolid flow battery could be a lifesaving as well as economical option when these VRE sources can’t generate power for a day or longer — in the case of natural disasters, for instance.”

    The rechargeable zinc-manganese dioxide (Zn-MnO2) battery the researchers created beat out other long-duration energy storage contenders. “We performed a comprehensive, bottom-up analysis to understand how the battery’s composition affects performance and cost, looking at all the trade-offs,” says Thaneer Malai Narayanan SM ’18, PhD ’21. “We showed that our system can be cheaper than others, and can be scaled up.”

    Narayanan, who conducted this work at MIT as part of his doctorate in mechanical engineering, is the lead author of the paper. Additional authors include Gençer, Yunguang Zhu, a postdoc in the MIT Electrochemical Energy Lab; Gareth McKinley, the School of Engineering Professor of Teaching Innovation and professor of mechanical engineering at MIT; and Yang Shao-Horn, the JR East Professor of Engineering, a professor of mechanical engineering and of materials science and engineering, and a member of the Research Laboratory of Electronics (RLE), who directs the MIT Electrochemical Energy Lab.

    Going with the flow

    In 2016, Narayanan began his graduate studies, joining the Electrochemical Energy Lab, a hotbed of research and exploration of solutions to mitigate climate change, which is centered on innovative battery chemistry and decarbonizing fuels and chemicals. One exciting opportunity for the lab: developing low- and no-carbon backup energy systems suitable for grid-scale needs when VRE generation flags.                                                  

    While the lab cast a wide net, investigating energy conversion and storage using solid oxide fuel cells, lithium-ion batteries, and metal-air batteries, among others, Narayanan took a particular interest in flow batteries. In these systems, two different chemical (electrolyte) solutions with either negative or positive ions are pumped from separate tanks, meeting across a membrane (called the stack). Here, the ion streams react, converting electrical energy to chemical energy — in effect, charging the battery. When there is demand for this stored energy, the solution gets pumped back to the stack to convert chemical energy into electrical energy again.

    The duration of time that flow batteries can discharge, releasing the stored electricity, is determined by the volume of positively and negatively charged electrolyte solutions streaming through the stack. In theory, as long as these solutions keep flowing, reacting, and converting the chemical energy to electrical energy, the battery systems can provide electricity.
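    The scaling described above reduces to back-of-envelope arithmetic: stored energy grows with tank volume, so discharge duration at a fixed power grows with it too. The energy density and power figures below are illustrative round numbers, not the Zn-MnO2 system's actual specifications.

```python
def discharge_hours(tank_volume_l, energy_density_wh_per_l, power_kw):
    """Hours of discharge from a given electrolyte tank volume.

    All inputs are illustrative; real systems also lose capacity to
    pumping and conversion inefficiencies, ignored here.
    """
    stored_kwh = tank_volume_l * energy_density_wh_per_l / 1000
    return stored_kwh / power_kw

for volume in (1_000, 5_000, 20_000):
    print(f"{volume:>6} L -> {discharge_hours(volume, 25, 10):.1f} h at 10 kW")
```

    The point of the architecture is visible in the arithmetic: to double the backup duration you double the (cheap) tanks and electrolyte, not the (expensive) stack.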

    “For backup lasting more than a day, the architecture of flow batteries suggests they can be a cheap option,” says Narayanan. “You recharge the solution in the tanks from sun and wind power sources.” This renders the entire system carbon free.

    But while the promise of flow battery technologies has beckoned for at least a decade, the uneven performance and expense of materials required for these battery systems has slowed their implementation. So, Narayanan set out on an ambitious journey: to design and build a flow battery that could back up VRE systems for a day or more, storing and discharging energy with the same or greater efficiency than backup rivals; and to determine, through rigorous cost analysis, whether such a system could prove economically viable as a long-duration energy option.

    Multidisciplinary collaborators

    To attack this multipronged challenge, Narayanan’s project brought together, in his words, “three giants, scientists all well-known in their fields”:  Shao-Horn, who specializes in chemical physics and electrochemical science, and design of materials; Gençer, who creates detailed economic models of emergent energy systems at MITEI; and McKinley, an expert in rheology, the physics of flow. These three also served as his thesis advisors.

    “I was excited to work in such an interdisciplinary team, which offered a unique opportunity to create a novel battery architecture by designing charge transfer and ion transport within flowable semi-solid electrodes, and to guide battery engineering using techno-economics of such flowable batteries,” says Shao-Horn.

    While other flow battery systems in contention, such as the vanadium redox flow battery, offer the storage capacity and energy density to back up megawatt and larger power systems, they depend on expensive chemical ingredients that make them bad bets for long duration purposes. Narayanan was on the hunt for less-pricey chemical components that also feature rich energy potential.

    Through a series of bench experiments, the researchers came up with a novel electrode (electrical conductor) for the battery system: a mixture containing dispersed manganese dioxide (MnO2) particles, shot through with an electrically conductive additive, carbon black. This compound reacts with a conductive zinc solution or zinc plate at the stack, enabling efficient electrochemical energy conversion. The fluid properties of this battery are far removed from the watery solutions used by other flow batteries.

    “It’s a semisolid — a slurry,” says Narayanan. “Like thick, black paint, or perhaps a soft-serve ice cream,” suggests McKinley. The carbon black adds the pigment and the electric punch. To arrive at the optimal electrochemical mix, the researchers tweaked their formula many times.

    “These systems have to be able to flow under reasonable pressures, but also have a weak yield stress so that the active MnO2 particles don’t sink to the bottom of the flow tanks when the system isn’t being used, as well as not separate into a watery/oily clear fluid phase and a dense paste of carbon particles and MnO2,” says McKinley.

    This series of experiments informed the technoeconomic analysis. By “connecting the dots between composition, performance, and cost,” says Narayanan, he and Gençer were able to make system-level cost and efficiency calculations for the Zn-MnO2 battery.

    “Assessing the cost and performance of early technologies is very difficult, and this was an example of how to develop a standard method to help researchers at MIT and elsewhere,” says Gençer. “One message here is that when you include the cost analysis at the development stage of your experimental work, you get an important early understanding of your project’s cost implications.”

    In their final round of studies, Gençer and Narayanan compared the Zn-MnO2 battery to a set of equivalent electrochemical battery and hydrogen backup systems, looking at the capital costs of running them at durations of eight, 24, and 72 hours. Their findings surprised them: For battery discharges longer than a day, their semisolid flow battery beat out lithium-ion batteries and vanadium redox flow batteries. This was true even when factoring in the heavy expense of pumping the MnO2 slurry from tank to stack. “I was skeptical, and not expecting this battery would be competitive, but once I did the cost calculation, it was plausible,” says Gençer.
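    The structure of that comparison can be sketched as a two-term cost model: a power-related cost for the stack or converter plus an energy-related cost that grows with discharge duration. The dollar figures below are invented placeholders chosen only to reproduce the qualitative crossover the study describes, not the paper's numbers.

```python
# (power cost $/kW, energy cost $/kWh) -- illustrative placeholders only.
systems = {
    "lithium-ion":       (50, 200),   # cheap power, expensive energy
    "semisolid Zn-MnO2": (1500, 60),  # expensive stack, cheap tanks
}

def capital_cost(power_cost, energy_cost, duration_h):
    """Total capital cost in $ per kW of backup capacity."""
    return power_cost + energy_cost * duration_h

for hours in (8, 24, 72):
    costs = {name: capital_cost(p, e, hours)
             for name, (p, e) in systems.items()}
    cheapest = min(costs, key=costs.get)
    print(f"{hours:>2} h: " +
          ", ".join(f"{n} ${c}/kW" for n, c in costs.items()) +
          f" -> cheapest: {cheapest}")
```

    With any numbers of this shape, the system with the lower energy cost eventually wins as duration grows, which is why the flow battery overtakes lithium-ion beyond roughly a day of discharge.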

    But carbon-free battery backup is a very Goldilocks-like business: Different situations require different-duration solutions, whether an anticipated overnight loss of solar power, or a longer-term, climate-based disruption in the grid. “Lithium-ion is great for backup of eight hours and under, but the materials are too expensive for longer periods,” says Gençer. “Hydrogen is super expensive for very short durations, and good for very long durations, and we will need all of them.” This means it makes sense to continue working on the Zn-MnO2 system to see where it might fit in.

    “The next step is to take our battery system and build it up,” says Narayanan, who is working now as a battery engineer. “Our research also points the way to other chemistries that could be developed under the semi-solid flow battery platform, so we could be seeing this kind of technology used for energy storage in our lifetimes.”

    This research was supported by Eni S.p.A. through MITEI. Thaneer Malai Narayanan received an Eni-sponsored MIT Energy Fellowship during his work on the project.

    Timber or steel? Study helps builders reduce carbon footprint of truss structures

    Buildings are a big contributor to global warming, not just in their ongoing operations but in the materials used in their construction. Truss structures — those crisscross arrays of diagonal struts used throughout modern construction, in everything from antenna towers to support beams for large buildings — are typically made of steel or wood or a combination of both. But little quantitative research has been done on how to pick the right materials to minimize these structures’ contribution to global warming.

    The “embodied carbon” in a construction material includes the fuel used in the material’s production (for mining and smelting steel, for example, or for felling and processing trees) and in transporting the materials to a site. It also includes the equipment used for the construction itself.

    Now, researchers at MIT have done a detailed analysis and created a set of computational tools to enable architects and engineers to design truss structures in a way that can minimize their embodied carbon while maintaining all needed properties for a given building application. While wood in general produces a much lower carbon footprint, using steel in places where its properties offer maximum benefit can yield an optimized result, they say.

    The analysis is described in a paper published today in the journal Engineering Structures, by graduate student Ernest Ching and MIT assistant professor of civil and environmental engineering Josephine Carstensen.

    “Construction is a huge greenhouse gas emitter that has kind of been flying under the radar for the past decades,” says Carstensen. But in recent years building designers “are starting to be more focused on how to not just reduce the operating energy associated with building use, but also the important carbon associated with the structure itself.” And that’s where this new analysis comes in.

    The two main options in reducing the carbon emissions associated with truss structures, she says, are substituting materials or changing the structure. However, there has been “surprisingly little work” on tools to help designers figure out emissions-minimizing strategies for a given situation, she says.

    The new system makes use of a technique called topology optimization, which allows for the input of basic parameters, such as the amount of load to be supported and the dimensions of the structure, and can be used to produce designs optimized for different characteristics, such as weight, cost, or, in this case, global warming impact.

    Wood performs very well under forces of compression, but not as well as steel when it comes to tension — that is, a tendency to pull the structure apart. Carstensen says that in general, wood is far better than steel in terms of embedded carbon, so “especially if you have a structure that doesn’t have any tension, then you should definitely only use timber” in order to minimize emissions. One tradeoff is that “the weight of the structure is going to be bigger than it would be with steel,” she says.
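    The material-assignment logic above lends itself to a toy calculation: put timber on compression members, steel on tension members, and compare embodied carbon against an all-steel design. The member forces, masses, and carbon factors below are invented for illustration and are not the paper's data (a real assignment would also resize each member for its material).

```python
# Embodied carbon factors, kg CO2e per kg of member mass (illustrative).
CARBON = {"timber": 0.1, "steel": 1.5}

# (member name, axial force in kN: + tension / - compression, mass in kg)
members = [("top chord",    -40.0, 120.0),
           ("bottom chord",  35.0, 110.0),
           ("diagonal",     -12.0,  60.0)]

def embodied_carbon(assignment):
    """Sum embodied carbon for one material per member."""
    return sum(CARBON[mat] * mass
               for (_, _, mass), mat in zip(members, assignment))

# Timber where compressed, steel where tensioned.
mixed = ["steel" if force > 0 else "timber" for _, force, _ in members]
all_steel = ["steel"] * len(members)
print(f"mixed: {embodied_carbon(mixed):.0f} kg CO2e, "
      f"all steel: {embodied_carbon(all_steel):.0f} kg CO2e")
```

    The actual tool folds this material choice into topology optimization, so the layout of the truss and the assignment of timber and steel are optimized together against the embodied-carbon objective.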

    The tools they developed, which were the basis for Ching’s master’s thesis, can be applied at different stages, either in the early planning phase of a structure, or later on in the final stages of a design.

    As an exercise, the team developed a proposal for reengineering several trusses using these optimization tools, and demonstrated that a significant savings in embodied greenhouse gas emissions could be achieved with no loss of performance. While they have shown improvements of at least 10 percent can be achieved, she says those estimates are “not exactly apples to apples” and likely savings could actually be two to three times that.

    “It’s about choosing materials more smartly,” she says, for the specifics of a given application. Often in existing buildings “you will have timber where there’s compression, and where that makes sense, and then it will have really skinny steel members, in tension, where that makes sense. And that’s also what we see in our design solutions that are suggested, but perhaps we can see it even more clearly.” The tools are not ready for commercial use though, she says, because they haven’t yet added a user interface.

    Carstensen sees a trend to increasing use of timber in large construction, which represents an important potential for reducing the world’s overall carbon emissions. “There’s a big interest in the construction industry in mass timber structures, and this speaks right into that area. So, the hope is that this would make inroads into the construction business and actually make a dent in that very large contribution to greenhouse gas emissions.”