More stories

  • A new concept for low-cost batteries

    As the world builds out ever larger installations of wind and solar power systems, the need is growing fast for economical, large-scale backup systems to provide power when the sun is down and the air is calm. Today’s lithium-ion batteries are still too expensive for most such applications, and other options such as pumped hydro require specific topography that’s not always available.

    Now, researchers at MIT and elsewhere have developed a new kind of battery, made entirely from abundant and inexpensive materials, that could help to fill that gap.

    The new battery architecture, which uses aluminum and sulfur as its two electrode materials, with a molten salt electrolyte in between, is described today in the journal Nature, in a paper by MIT Professor Donald Sadoway, along with 15 others at MIT and in China, Canada, Kentucky, and Tennessee.

    “I wanted to invent something that was better, much better, than lithium-ion batteries for small-scale stationary storage, and ultimately for automotive [uses],” explains Sadoway, who is the John F. Elliott Professor Emeritus of Materials Chemistry.

    In addition to being expensive, lithium-ion batteries contain a flammable electrolyte, making them less than ideal for transportation. So, Sadoway started studying the periodic table, looking for cheap, Earth-abundant metals that might be able to substitute for lithium. The commercially dominant metal, iron, doesn’t have the right electrochemical properties for an efficient battery, he says. But the second-most-abundant metal in the marketplace — and actually the most abundant metal on Earth — is aluminum. “So, I said, well, let’s just make that a bookend. It’s gonna be aluminum,” he says.

    Then came deciding what to pair the aluminum with for the other electrode, and what kind of electrolyte to put in between to carry ions back and forth during charging and discharging. The cheapest of all the non-metals is sulfur, so that became the second electrode material. As for the electrolyte, “we were not going to use the volatile, flammable organic liquids” that have sometimes led to dangerous fires in cars and other applications of lithium-ion batteries, Sadoway says. They tried some polymers but ended up looking at a variety of molten salts that have relatively low melting points — close to the boiling point of water, as opposed to nearly 1,000 degrees Fahrenheit for many salts. “Once you get down to near body temperature, it becomes practical” to make batteries that don’t require special insulation and anticorrosion measures, he says.

    The three ingredients they ended up with are cheap and readily available — aluminum, no different from the foil at the supermarket; sulfur, which is often a waste product from processes such as petroleum refining; and widely available salts. “The ingredients are cheap, and the thing is safe — it cannot burn,” Sadoway says.

    In their experiments, the team showed that the battery cells could endure hundreds of cycles at exceptionally high charging rates, with a projected cost per cell of about one-sixth that of comparable lithium-ion cells. They also showed that the charging rate was highly dependent on the working temperature: at 110 degrees Celsius (230 degrees Fahrenheit), the cells charged 25 times faster than at 25 C (77 F).
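    As a rough illustration (not a calculation from the paper), if the charging kinetics follow Arrhenius behavior — an assumption on our part — the reported 25-fold speedup between 25 C and 110 C implies an effective activation energy of roughly 36 kJ/mol:

```python
import math

def arrhenius_activation_energy(rate_ratio, t_low_c, t_high_c):
    """Effective activation energy (J/mol) implied by a rate ratio
    between two temperatures, assuming Arrhenius kinetics."""
    R = 8.314  # gas constant, J/(mol*K)
    t_low = t_low_c + 273.15   # convert to kelvin
    t_high = t_high_c + 273.15
    # from k_high / k_low = exp(-(Ea/R) * (1/t_high - 1/t_low))
    return R * math.log(rate_ratio) / (1.0 / t_low - 1.0 / t_high)

ea = arrhenius_activation_energy(25.0, 25.0, 110.0)
print(round(ea / 1000))  # → 36 (kJ/mol)
```

    This is only a consistency sketch; the actual rate-limiting processes in the cell may not be described by a single activation energy.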

    Surprisingly, the molten salt the team chose as an electrolyte simply because of its low melting point turned out to have a fortuitous advantage. One of the biggest problems in battery reliability is the formation of dendrites, which are narrow spikes of metal that build up on one electrode and eventually grow across to contact the other electrode, causing a short-circuit and hampering efficiency. But this particular salt, it happens, is very good at preventing that malfunction.

    The chloro-aluminate salt they chose “essentially retired these runaway dendrites, while also allowing for very rapid charging,” Sadoway says. “We did experiments at very high charging rates, charging in less than a minute, and we never lost cells due to dendrite shorting.”

    “It’s funny,” he says, because the whole focus was on finding a salt with the lowest melting point, but the catenated chloro-aluminates they ended up with turned out to be resistant to the shorting problem. “If we had started off with trying to prevent dendritic shorting, I’m not sure I would’ve known how to pursue that,” Sadoway says. “I guess it was serendipity for us.”

    What’s more, the battery requires no external heat source to maintain its operating temperature. The heat is naturally produced electrochemically by the charging and discharging of the battery. “As you charge, you generate heat, and that keeps the salt from freezing. And then, when you discharge, it also generates heat,” Sadoway says. In a typical installation used for load-leveling at a solar generation facility, for example, “you’d store electricity when the sun is shining, and then you’d draw electricity after dark, and you’d do this every day. And that charge-idle-discharge-idle is enough to generate enough heat to keep the thing at temperature.”

    This new battery formulation, he says, would be ideal for installations about the size needed to power a single home or a small to medium business, providing on the order of a few tens of kilowatt-hours of storage capacity.

    For larger installations, up to utility scale of tens to hundreds of megawatt hours, other technologies might be more effective, including the liquid metal batteries Sadoway and his students developed several years ago and which formed the basis for a spinoff company called Ambri, which hopes to deliver its first products within the next year. For that invention, Sadoway was recently awarded this year’s European Inventor Award.

    The smaller scale of the aluminum-sulfur batteries would also make them practical for uses such as electric vehicle charging stations, Sadoway says. He points out that when electric vehicles become common enough on the roads that several cars want to charge up at once, as happens today with gasoline fuel pumps, “if you try to do that with batteries and you want rapid charging, the amperages are just so high that we don’t have that amount of amperage in the line that feeds the facility.” So having a battery system such as this to store power and then release it quickly when needed could eliminate the need for installing expensive new power lines to serve these chargers.

    The new technology is already the basis for Avanti, a new spinoff company co-founded by Sadoway and Luis Ortiz ’96, ScD ’00 (who was also a co-founder of Ambri), which has licensed the patents to the system. “The first order of business for the company is to demonstrate that it works at scale,” Sadoway says, and then to subject it to a series of stress tests, including running through hundreds of charging cycles.

    Would a battery based on sulfur run the risk of producing the foul odors associated with some forms of sulfur? Not a chance, Sadoway says. “The rotten-egg smell is in the gas, hydrogen sulfide. This is elemental sulfur, and it’s going to be enclosed inside the cells.” If you were to try to open up a lithium-ion cell in your kitchen, he says (and please don’t try this at home!), “the moisture in the air would react and you’d start generating all sorts of foul gases as well. These are legitimate questions, but the battery is sealed, it’s not an open vessel. So I wouldn’t be concerned about that.”

    The research team included members from Peking University, Yunnan University and the Wuhan University of Technology, in China; the University of Louisville, in Kentucky; the University of Waterloo, in Canada; Oak Ridge National Laboratory, in Tennessee; and MIT. The work was supported by the MIT Energy Initiative, the MIT Deshpande Center for Technological Innovation, and ENN Group.

  • Stranded assets could exact steep costs on fossil energy producers and investors

    A 2021 study in the journal Nature found that in order to avert the worst impacts of climate change, most of the world’s known fossil fuel reserves must remain untapped. According to the study, 90 percent of coal and nearly 60 percent of oil and natural gas must be kept in the ground in order to maintain a 50 percent chance that global warming will not exceed 1.5 degrees Celsius above preindustrial levels.

    As the world transitions away from greenhouse-gas-emitting activities to keep global warming well below 2 C (and ideally 1.5 C) in alignment with the Paris Agreement on climate change, fossil fuel companies and their investors face growing financial risks (known as transition risks), including the prospect of ending up with massive stranded assets. This ongoing transition is likely to significantly scale back fossil fuel extraction and coal-fired power plant operations, exacting steep costs — most notably asset value losses — on fossil-energy producers and shareholders.

    Now, a new study in the journal Climate Change Economics led by researchers at the MIT Joint Program on the Science and Policy of Global Change estimates the current global asset value of untapped fossil fuels through 2050 under four increasingly ambitious climate-policy scenarios. The least-ambitious scenario (“Paris Forever”) assumes that initial Paris Agreement greenhouse gas emissions-reduction pledges are upheld in perpetuity; the most stringent scenario (“Net Zero 2050”) adds coordinated international policy instruments aimed at achieving global net-zero emissions by 2050.

    Powered by the MIT Joint Program’s model of the world economy with detailed representation of the energy sector and energy industry assets over time, the study finds that the global net present value of untapped fossil fuel output through 2050 relative to a reference “No Policy” scenario ranges from $21.5 trillion (Paris Forever) to $30.6 trillion (Net Zero 2050). The estimated global net present value of stranded assets in coal power generation through 2050 ranges from $1.3 to $2.3 trillion.
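    The "net present value" framing can be illustrated with a minimal discounting sketch. The numbers below are hypothetical, chosen only to show the scale of the mechanism; they are not the study's inputs or its model:

```python
def net_present_value(cashflows, discount_rate):
    """Discount a list of annual cashflows (years 1..n) back to today."""
    return sum(cf / (1.0 + discount_rate) ** t
               for t, cf in enumerate(cashflows, start=1))

# Hypothetical: $1 trillion/yr of forgone fossil fuel output from
# 2023 through 2050 (28 years), discounted at 4 percent.
flows = [1.0] * 28  # trillions of dollars per year
print(round(net_present_value(flows, 0.04), 1))  # → 16.7 (trillion dollars)
```

    The point of the sketch is that losses far in the future count for much less today, which is why the choice of discount rate matters so much in stranded-asset estimates.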

    “The more stringent the climate policy, the greater the volume of untapped fossil fuels, and hence the higher the potential asset value loss for fossil-fuel owners and investors,” says Henry Chen, a research scientist at the MIT Joint Program and the study’s lead author.

    The global economy-wide analysis presented in the study provides a more fine-grained assessment of stranded assets than those performed in previous studies. Firms and financial institutions may combine the MIT analysis with details on their own investment portfolios to assess their exposure to climate-related transition risk.

  • A new method boosts wind farms’ energy output, without new equipment

    Virtually all wind turbines, which produce more than 5 percent of the world’s electricity, are controlled as if they were individual, free-standing units. In fact, the vast majority are part of larger wind farm installations involving dozens or even hundreds of turbines, whose wakes can affect each other.

    Now, engineers at MIT and elsewhere have found that, with no need for any new investment in equipment, the energy output of such wind farm installations can be increased by modeling the wind flow of the entire collection of turbines and optimizing the control of individual units accordingly.

    The increase in energy output from a given installation may seem modest — it’s about 1.2 percent overall, and 3 percent for optimal wind speeds. But the algorithm can be deployed at any wind farm, and the number of wind farms is rapidly growing to meet accelerated climate goals. If that 1.2 percent energy increase were applied to all the world’s existing wind farms, it would be the equivalent of adding more than 3,600 new wind turbines, or enough to power about 3 million homes, and a total gain to power producers of almost a billion dollars per year, the researchers say. And all of this for essentially no cost.

    The research is published today in the journal Nature Energy, in a study led by Michael F. Howland, the Esther and Harold E. Edgerton Assistant Professor of Civil and Environmental Engineering at MIT.

    “Essentially all existing utility-scale turbines are controlled ‘greedily’ and independently,” says Howland. The term “greedily,” he explains, refers to the fact that they are controlled to maximize only their own power production, as if they were isolated units with no detrimental impact on neighboring turbines.

    But in the real world, turbines are deliberately spaced close together in wind farms to achieve economic benefits related to land use (on- or offshore) and to infrastructure such as access roads and transmission lines. This proximity means that turbines are often strongly affected by the turbulent wakes produced by others that are upwind from them — a factor that individual turbine-control systems do not currently take into account.

    “From a flow-physics standpoint, putting wind turbines close together in wind farms is often the worst thing you could do,” Howland says. “The ideal approach to maximize total energy production would be to put them as far apart as possible,” but that would increase the associated costs.

    That’s where the work of Howland and his collaborators comes in. They developed a new flow model that predicts the power production of each turbine in the farm, depending on the incident winds in the atmosphere and the control strategy of each turbine. While based on flow physics, the model learns from operational wind farm data to reduce predictive error and uncertainty. Without changing anything about the physical turbine locations or hardware of existing wind farms, they used this physics-based, data-assisted modeling of the flow within the wind farm, and of the resulting power production of each turbine under different wind conditions, to find the optimal orientation for each turbine at a given moment. This allows them to maximize the output of the whole farm, not just of the individual turbines.

    Today, each turbine constantly senses the incoming wind direction and speed and uses its internal control software to adjust its yaw angle (its orientation about the vertical axis) to align as closely as possible with the wind. But in the new system, the team found that by turning one turbine just slightly away from its own maximum-output position, perhaps 20 degrees from its individual peak angle, the resulting increase in power output from one or more downwind units can more than make up for the slight reduction in output from the first unit. By using a centralized control system that takes all of these interactions into account, the collection of turbines was operated at power output levels that were as much as 32 percent higher under some conditions.
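    The trade-off can be sketched with a toy two-turbine model. The cosine-cubed power loss with yaw is a standard approximation, but the linear wake-recovery term below is invented purely for illustration and is not the researchers' validated flow model:

```python
import math

def farm_power(yaw_deg, wake_deficit=0.4, deflection_gain=0.02):
    """Toy total power (normalized) of two aligned turbines.
    Yawing the upstream turbine costs it roughly cos^3(yaw) of its
    output but steers its wake away from the downstream turbine."""
    p_upstream = math.cos(math.radians(yaw_deg)) ** 3
    # invented recovery term: the wake deficit shrinks linearly as
    # the wake is deflected (illustrative, not a validated model)
    deficit = wake_deficit * max(0.0, 1.0 - deflection_gain * abs(yaw_deg))
    return p_upstream + (1.0 - deficit)

greedy = farm_power(0)  # each turbine aligned with the wind
best_yaw = max(range(0, 41), key=farm_power)
cooperative = farm_power(best_yaw)
print(best_yaw, round(100 * (cooperative / greedy - 1), 1))  # yaw, % gain
```

    Even in this crude sketch, a modest upstream yaw beats the greedy setting, which is the qualitative effect the centralized controller exploits.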

    In a months-long experiment in a real utility-scale wind farm in India, the predictive model was first validated by testing a wide range of yaw orientation strategies, most of which were intentionally suboptimal. By testing many control strategies, including suboptimal ones, in both the real farm and the model, the researchers could identify the true optimal strategy. Importantly, the model was able to predict the farm power production and the optimal control strategy for most wind conditions tested, giving confidence that the predictions of the model would track the true optimal operational strategy for the farm. This enables the use of the model to design the optimal control strategies for new wind conditions and new wind farms without needing to perform fresh calculations from scratch.

    Then, a second months-long experiment at the same farm, which implemented only the optimal control predictions from the model, proved that the algorithm’s real-world effects could match the overall energy improvements seen in simulations. Averaged over the entire test period, the system achieved a 1.2 percent increase in energy output at all wind speeds, and a 3 percent increase at speeds between 6 and 8 meters per second (about 13 to 18 miles per hour).

    While the test was run at one wind farm, the researchers say the model and cooperative control strategy can be implemented at any existing or future wind farm. Howland estimates that, translated to the world’s existing fleet of wind turbines, a 1.2 percent overall energy improvement would produce more than 31 terawatt-hours of additional electricity per year, approximately equivalent to installing an extra 3,600 wind turbines at no cost. This would translate into some $950 million in extra revenue for the wind farm operators per year, he says.
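    Those fleet-wide figures can be sanity-checked with back-of-envelope arithmetic. The inputs below are our rough assumptions (global wind generation of about 2,600 TWh/yr, a wholesale price near $30/MWh, and a typical ~3 MW turbine at ~33 percent capacity factor), not numbers from the study:

```python
# Back-of-envelope check of the fleet-wide projections (assumed inputs).
fleet_twh = 2600        # assumed annual global wind generation, TWh
improvement = 0.012     # the reported 1.2 percent overall gain
price_per_mwh = 30      # assumed wholesale price, USD

extra_twh = fleet_twh * improvement               # extra energy, TWh/yr
extra_revenue = extra_twh * 1e6 * price_per_mwh   # TWh -> MWh -> USD

# equivalent turbine count: a ~3 MW machine at ~33% capacity factor
per_turbine_mwh = 3 * 8760 * 0.33                 # ~8,700 MWh/yr each
equivalent_turbines = extra_twh * 1e6 / per_turbine_mwh

print(round(extra_twh), round(extra_revenue / 1e6), round(equivalent_turbines))
# ~31 TWh, ~$940M, ~3,600 turbines -- consistent with the article's figures
```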

    The amount of energy to be gained will vary widely from one wind farm to another, depending on an array of factors including the spacing of the units, the geometry of their arrangement, and the variations in wind patterns at that location over the course of a year. But in all cases, the model developed by this team can provide a clear prediction of exactly what the potential gains are for a given site, Howland says. “The optimal control strategy and the potential gain in energy will be different at every wind farm, which motivated us to develop a predictive wind farm model which can be used widely, for optimization across the wind energy fleet,” he adds.

    But the new system can potentially be adopted quickly and easily, he says. “We don’t require any additional hardware installation. We’re really just making a software change, and there’s a significant potential energy increase associated with it.” Even a 1 percent improvement, he points out, means that in a typical wind farm of about 100 units, operators could get the same output with one fewer turbine, thus saving the costs, usually millions of dollars, associated with purchasing, building, and installing that unit.

    Further, he notes, by reducing wake losses the algorithm could make it possible to place turbines more closely together within future wind farms, therefore increasing the power density of wind energy, saving on land (or sea) footprints. This power density increase and footprint reduction could help to achieve pressing greenhouse gas emission reduction goals, which call for a substantial expansion of wind energy deployment, both on and offshore.

    What’s more, he says, the biggest new area of wind farm development is offshore, and “the impact of wake losses is often much higher in offshore wind farms.” That means the impact of this new approach to controlling those wind farms could be significantly greater.

    The Howland Lab and the international team are continuing to refine the models and to improve the operational instructions derived from them, moving toward autonomous, cooperative control and striving for the greatest possible power output from a given set of conditions, Howland says.

    The research team includes Jesús Bas Quesada, Juan José Pena Martinez, and Felipe Palou Larrañaga of Siemens Gamesa Renewable Energy Innovation and Technology in Navarra, Spain; Neeraj Yadav and Jasvipul Chawla at ReNew Power Private Limited in Haryana, India; Varun Sivaram, formerly at ReNew Power Private Limited in Haryana, India, and presently at the Office of the U.S. Special Presidential Envoy for Climate, United States Department of State; and John Dabiri at California Institute of Technology. The work was supported by the MIT Energy Initiative and Siemens Gamesa Renewable Energy.

  • Solving a longstanding conundrum in heat transfer

    It is a problem that has beguiled scientists for a century. But, buoyed by a $625,000 Distinguished Early Career Award from the U.S. Department of Energy (DoE), Matteo Bucci, an associate professor in the Department of Nuclear Science and Engineering (NSE), hopes to be close to an answer.

    Tackling the boiling crisis

    Whether you’re heating a pot of water for pasta or designing a nuclear reactor, one phenomenon — boiling — is vital to the efficient execution of both processes.

    “Boiling is a very effective heat transfer mechanism; it’s the way to remove large amounts of heat from the surface, which is why it is used in many high-power density applications,” Bucci says. An example use case: nuclear reactors.

    To the layperson, boiling appears simple — bubbles form and burst, removing heat. But if too many bubbles form and coalesce, they can create a band of vapor that prevents further heat transfer. This phenomenon is known as the boiling crisis. It can lead to runaway heating and, in a nuclear reactor, failure of the fuel rods. So “understanding and determining under which conditions the boiling crisis is likely to happen is critical to designing more efficient and cost-competitive nuclear reactors,” Bucci says.

    Early work on the boiling crisis dates back nearly a century, to 1926. And while much work has been done since, “it is clear that we haven’t found an answer,” Bucci says. The boiling crisis remains a challenge because, while models abound, measurements of the related phenomena needed to prove or disprove those models have been difficult to obtain. “[Boiling] is a process that happens on a very, very small length scale and over very, very short times,” Bucci says. “We are not able to observe it at the level of detail necessary to understand what really happens and validate hypotheses.”
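    One example of the kind of model that abounds is the classical Zuber (1959) hydrodynamic correlation, which estimates the critical heat flux at which the boiling crisis occurs on a flat surface. A minimal sketch, using standard saturation properties of water at atmospheric pressure:

```python
import math

def zuber_chf(h_fg, rho_l, rho_v, sigma, g=9.81):
    """Critical heat flux (W/m^2) from the classical Zuber
    hydrodynamic correlation for pool boiling on a flat surface."""
    return (0.131 * h_fg * math.sqrt(rho_v)
            * (sigma * g * (rho_l - rho_v)) ** 0.25)

# Saturated water at atmospheric pressure (standard property values)
chf = zuber_chf(h_fg=2.257e6,   # latent heat of vaporization, J/kg
                rho_l=958.0,    # liquid density, kg/m^3
                rho_v=0.60,     # vapor density, kg/m^3
                sigma=0.0589)   # surface tension, N/m
print(round(chf / 1e6, 2))      # critical heat flux, MW/m^2 (~1.1)
```

    Correlations like this predict roughly when the vapor blanket forms, but — as Bucci notes — validating the underlying mechanisms requires measurements at scales the field has historically lacked.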

    But over the past few years, Bucci and his team have been developing diagnostics that can measure the phenomena related to boiling and thereby provide much-needed answers to a classic problem. The diagnostics are anchored in infrared thermometry and a technique using visible light. “By combining these two techniques I think we’re going to be ready to answer standing questions related to heat transfer; we can make our way out of the rabbit hole,” Bucci says. The grant from the U.S. DoE for Nuclear Energy Projects will aid in this and Bucci’s other research efforts.

    An idyllic Italian childhood

    Tackling difficult problems is not new territory for Bucci, who grew up in the small town of Città di Castello near Florence, Italy. His mother was an elementary school teacher, and his father ran a machine shop, which helped develop Bucci’s scientific bent. “I liked LEGOs a lot when I was a kid. It was a passion,” he adds.

    Despite Italy going through a severe pullback from nuclear engineering during his formative years, the subject fascinated Bucci. Job opportunities in the field were uncertain but Bucci decided to dig in. “If I have to do something for the rest of my life, it might as well be something I like,” he jokes. Bucci attended the University of Pisa for undergraduate and graduate studies in nuclear engineering.

    His interest in heat transfer mechanisms took root during his doctoral studies, which he pursued in Paris at the French Alternative Energies and Atomic Energy Commission (CEA). It was there that a colleague suggested working on the boiling crisis. This time Bucci set his sights on NSE at MIT and reached out to Professor Jacopo Buongiorno to inquire about research at the institution. Bucci had to fundraise at CEA to conduct research at MIT. He arrived just a couple of days before the Boston Marathon bombing in 2013, with a round-trip ticket. But he has stayed ever since, becoming a research scientist and then associate professor at NSE.

    Bucci admits he struggled to adapt to the environment when he first arrived at MIT, but work and friendships with colleagues — he counts NSE’s Guanyu Su and Reza Azizian as among his best friends — helped conquer early worries.

    The integration of artificial intelligence

    In addition to diagnostics for boiling, Bucci and his team are working on ways of integrating artificial intelligence and experimental research. He is convinced that “the integration of advanced diagnostics, machine learning, and advanced modeling tools will blossom in a decade.”

    Bucci’s team is developing an autonomous laboratory for boiling heat transfer experiments. Running on machine learning, the setup decides which experiments to run based on a learning objective the team assigns. “We formulate a question and the machine will answer by optimizing the kinds of experiments that are necessary to answer those questions,” Bucci says. “I honestly think this is the next frontier for boiling,” he adds.

    “It’s when you climb a tree and you reach the top, that you realize that the horizon is much more vast and also more beautiful,” Bucci says of his zeal to pursue more research in the field.

    Even as he seeks new heights, Bucci has not forgotten his origins. Commemorating Italy’s hosting of the World Cup in 1990, a series of posters showcasing a soccer field fitted into the Roman Colosseum occupies pride of place in his home and office. Created by Alberto Burri, the posters are of sentimental value: The (now deceased) Italian artist also hailed from Bucci’s hometown — Città di Castello.

  • New J-WAFS-led project combats food insecurity

    Today the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) at MIT announced a new research project, supported by Community Jameel, to tackle one of the most urgent crises facing the planet: food insecurity. Approximately 276 million people worldwide are severely food insecure, and more than half a million face famine conditions.

    To better understand and analyze food security, this three-year research project will develop a comprehensive index assessing countries’ food security vulnerability, called the Jameel Index for Food Trade and Vulnerability. Global changes spurred by social and economic transitions, energy and environmental policy, regional geopolitics, conflict, and of course climate change can impact food demand and supply. The Jameel Index will measure countries’ dependence on global food trade and imports, and how these regional-scale threats might affect the ability to trade food goods across diverse geographic regions. A main outcome of the research will be a model to project global food demand, supply balance, and bilateral trade under different likely future scenarios, with a focus on climate change. The work will help guide policymakers over the next 25 years, as the global population is expected to grow and the climate crisis is predicted to worsen.

    The work will be the foundational project for the J-WAFS-led Food and Climate Systems Transformation Alliance, or FACT Alliance. Formally launched at the COP26 climate conference last November, the FACT Alliance is a global network of 20 leading research institutions and stakeholder organizations that are driving research and innovation and informing better decision-making for healthy, resilient, equitable, and sustainable food systems in a rapidly changing climate. The initiative is co-directed by Greg Sixt, research manager for climate and food systems at J-WAFS, and Professor Kenneth Strzepek, climate, water, and food specialist at J-WAFS.

    The dire state of our food systems

    The need for this project is evidenced by the hundreds of millions of people around the globe currently experiencing food shortages. While several factors contribute to food insecurity, climate change is one of the most notable. Devastating extreme weather events are increasingly crippling crop and livestock production around the globe. From Southwest Asia to the Arabian Peninsula to the Horn of Africa, communities are migrating in search of food. In the United States, extreme heat and lack of rainfall in the Southwest have drastically lowered Lake Mead’s water levels, restricting water access and drying out farmlands. 

    Social, political, and economic issues also disrupt food systems. The effects of the Covid-19 pandemic, supply chain disruptions, and inflation continue to exacerbate food insecurity. Russia’s invasion of Ukraine is dramatically worsening the situation, disrupting agricultural exports from both Russia and Ukraine — two of the world’s largest producers of wheat, sunflower seed oil, and corn. Other countries like Lebanon, Sri Lanka, and Cuba are confronting food insecurity due to domestic financial crises.

    Few countries are immune to threats to food security from sudden disruptions in food production or trade. When an enormous container ship became lodged in the Suez Canal in March 2021, the vital international trade route was blocked for six days. The resulting delays in international shipping affected food supplies around the world. These situations demonstrate the importance of food trade in achieving food security: a disaster in one part of the world can drastically affect the availability of food in another. This puts into perspective just how interconnected the Earth’s food systems are and how vulnerable they remain to external shocks.

    An index to prepare for the future of food

    Despite the need for more secure food systems, significant knowledge gaps exist when it comes to understanding how different climate scenarios may affect both agricultural productivity and global food supply chains and security. The Global Trade Analysis Project database from Purdue University, and the current IMPACT modeling system from the International Food Policy Research Institute (IFPRI), enable assessments of existing conditions but cannot project or model changes in the future.

    In 2021, Strzepek and Sixt developed an initial Food Import Vulnerability Index (FIVI) as part of a regional assessment of the threat of climate change to food security in the Gulf Cooperation Council states and West Asia. FIVI is also limited in that it can only assess current trade conditions and climate change threats to food production. Additionally, FIVI is a national aggregate index and does not address issues of hunger, poverty, or equity that stem from regional variations within a country.
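    A national import-dependence measure of this general kind can be sketched as a toy ratio. This is purely illustrative and is not the actual FIVI or Jameel Index formula: it takes the share of a country's food supply that comes from imports and scales it by how concentrated those imports are among supplier countries (using a Herfindahl-Hirschman concentration index):

```python
def import_vulnerability(production, imports_by_partner, exports=0.0):
    """Toy index in (0, 1]: import share of domestic food supply,
    scaled by supplier concentration (Herfindahl-Hirschman index)."""
    imports = sum(imports_by_partner.values())
    supply = production + imports - exports
    import_share = imports / supply
    # HHI: 1.0 means a single supplier (maximum concentration risk)
    hhi = sum((v / imports) ** 2 for v in imports_by_partner.values())
    return import_share * hhi

# Hypothetical country: half its supply is imported, mostly from one partner
score = import_vulnerability(
    production=50.0,
    imports_by_partner={"A": 40.0, "B": 10.0},
)
print(round(score, 3))  # → 0.34
```

    The sketch captures the intuition in the quote above: vulnerability rises both with dependence on imports and with reliance on a small number of exporting countries.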

    “Current models are really good at showing global food trade flows, but we don’t have systems for looking at food trade between individual countries and how different food systems stressors such as climate change and conflict disrupt that trade,” says Greg Sixt of J-WAFS and the FACT Alliance. “This timely index will be a valuable tool for policymakers to understand the vulnerabilities to their food security from different shocks in the countries they import their food from. The project will also illustrate the stakeholder-guided, transdisciplinary approach that is central to the FACT Alliance,” Sixt adds.

    Phase 1 of the project will support a collaboration between four FACT Alliance members: MIT J-WAFS, Ethiopian Institute of Agricultural Research, IFPRI (which is also part of the CGIAR network), and the Martin School at the University of Oxford. An external partner, United Arab Emirates University, will also assist with the project work. This first phase will build on Strzepek and Sixt’s previous work on FIVI by developing a comprehensive Global Food System Modeling Framework that takes into consideration climate and global changes projected out to 2050, and assesses their impacts on domestic production, world market prices, and national balance of payments and bilateral trade. The framework will also utilize a mixed-modeling approach that includes the assessment of bilateral trade and macroeconomic data associated with varying agricultural productivity under the different climate and economic policy scenarios. In this way, consistent and harmonized projections of global food demand and supply balance, and bilateral trade under climate and global change can be achieved. 

    “Just like in the global response to Covid-19, using data and modeling are critical to understanding and tackling vulnerabilities in the global supply of food,” says George Richards, director of Community Jameel. “The Jameel Index for Food Trade and Vulnerability will help inform decision-making to manage shocks and long-term disruptions to food systems, with the aim of ensuring food security for all.”

    On a national level, the researchers will enrich the Jameel Index through country-level food security analyses of regions within countries and across various socioeconomic groups, allowing for a better understanding of specific impacts on key populations. The research will present vulnerability scores for a variety of food security metrics for 126 countries. Case studies of food security and food import vulnerability in Ethiopia and Sudan will help to refine the applicability of the Jameel Index with on-the-ground information. The case studies will use an IFPRI-developed tool called the Rural Investment and Policy Analysis model, which allows for analysis of urban and rural populations and different income groups. Local capacity building and stakeholder engagement will be critical to enable the use of the tools developed by this research for national-level planning in priority countries, and ultimately to inform policy.

    Phase 2 of the project will build on phase 1 and the lessons learned from the Ethiopian and Sudanese case studies. It will entail a number of deeper, country-level analyses to assess the role of food imports in future hunger, poverty, and equity across various regional and socioeconomic groups within the modeled countries. This work will link the geospatial national models with the global analysis. The researchers expect to submit a scholarly paper presenting the findings and to launch a website where interested stakeholders and organizations can learn more.

  • Silk offers an alternative to some microplastics

    Microplastics, tiny particles of plastic that are now found worldwide in the air, water, and soil, are increasingly recognized as a serious pollution threat, and have been found in the bloodstream of animals and people around the world.

    Some of these microplastics are intentionally added to a variety of products, including agricultural chemicals, paints, cosmetics, and detergents — amounting to an estimated 50,000 tons a year in the European Union alone, according to the European Chemicals Agency. The EU has already declared that these added, nonbiodegradable microplastics must be eliminated by 2025, so the search is on for suitable replacements, which do not currently exist.

    Now, a team of scientists at MIT and elsewhere has developed a system based on silk that could provide an inexpensive and easily manufactured substitute. The new process is described in a paper in the journal Small, written by MIT postdoc Muchun Liu, MIT professor of civil and environmental engineering Benedetto Marelli, and five others at the chemical company BASF in Germany and the U.S.

    The microplastics widely used in industrial products generally protect a specific active ingredient (or ingredients) from being degraded by exposure to air or moisture until it is needed. They provide a slow release of the active ingredient over a targeted period and minimize adverse effects on the surroundings. For example, vitamins are often delivered in the form of microcapsules packed into a pill or capsule, and pesticides and herbicides are similarly enveloped. But the materials used today for such microencapsulation are plastics that persist in the environment for a long time. Until now, there has been no practical, economical substitute that biodegrades naturally.

    Much of the burden of environmental microplastics comes from other sources, such as the gradual degradation of larger plastic objects like bottles and packaging, and the wear of car tires. Each of these sources may require its own kind of solution to reduce its spread, Marelli says. The European Chemicals Agency has estimated that intentionally added microplastics represent approximately 10-15 percent of the total amount in the environment, but this source may be relatively easy to address using this nature-based biodegradable replacement, he says.

    “We cannot solve the whole microplastics problem with one solution that fits them all,” he says. “Ten percent of a big number is still a big number. … We’ll solve climate change and pollution of the world one percent at a time.”

    Unlike the high-quality silk threads used for fine fabrics, the silk protein used in the new alternative material is widely available and less expensive, Liu says. While silkworm cocoons must be painstakingly unwound to produce the fine threads needed for fabric, for this use, non-textile-quality cocoons can be used, and the silk fibers can simply be dissolved using a scalable water-based process. The processing is so simple and tunable that the resulting material can be adapted to work on existing manufacturing equipment, potentially providing a simple “drop in” solution using existing factories.

    Silk is recognized as safe for food or medical use, as it is nontoxic and degrades naturally in the body. In lab tests, the researchers demonstrated that the silk-based coating material could be used in existing, standard spray-based manufacturing equipment to make a standard water-soluble microencapsulated herbicide product, which was then tested in a greenhouse on a corn crop. The test showed it worked even better than an existing commercial product, inflicting less damage to the plants, Liu says.

    While other groups have proposed degradable encapsulation materials that may work at a small laboratory scale, Marelli says, “there is a strong need to achieve encapsulation of high-content actives to open the door to commercial use. The only way to have an impact is where we can not only replace a synthetic polymer with a biodegradable counterpart, but also achieve performance that is the same, if not better.”

    The secret to making the material compatible with existing equipment, Liu explains, is the tunability of the silk material. By precisely adjusting the arrangement of silk's polymer chains and adding a surfactant, it is possible to fine-tune the properties of the resulting coatings once they dry out and harden. The material can be hydrophobic (water-repelling) even though it is made and processed in a water solution, or it can be hydrophilic (water-attracting), or anywhere in between, and for a given application it can be made to match the characteristics of the material it is being used to replace.

    In order to arrive at a practical solution, Liu had to develop a way of freezing the droplets of encapsulated material as they formed, to study the formation process in detail. She did this using a special spray-freezing system, which allowed her to observe exactly how the encapsulation works in order to control it better. Some of the encapsulated “payload” materials, whether pesticides, nutrients, or enzymes, are water-soluble and some are not, and they interact in different ways with the coating material.

    “To encapsulate different materials, we have to study how the polymer chains interact and whether they are compatible with different active materials in suspension,” she says. The payload material and the coating material are mixed together in a solution and then sprayed. As droplets form, the payload tends to be embedded in a shell of the coating material, whether that’s the original synthetic plastic or the new silk material.

    The new method can make use of low-grade silk that is unusable for fabrics, large quantities of which are currently discarded because they have no significant uses, Liu says. It can also reuse discarded silk fabric, diverting that material from landfills.

    Currently, 90 percent of the world’s silk production takes place in China, Marelli says, but that’s largely because China has perfected the production of the high-quality silk threads needed for fabrics. But because this process uses bulk silk and has no need for that level of quality, production could easily be ramped up in other parts of the world to meet local demand if this process becomes widely used, he says.

    “This elegant and clever study describes a sustainable and biodegradable silk-based replacement for microplastic encapsulants, which are a pressing environmental challenge,” says Alon Gorodetsky, an associate professor of chemical and biomolecular engineering at the University of California at Irvine, who was not associated with this research. “The modularity of the described materials and the scalability of the manufacturing processes are key advantages that portend well for translation to real-world applications.”

    This process “represents a potentially highly significant advance in active ingredient delivery for a range of industries, particularly agriculture,” says Jason White, director of the Connecticut Agricultural Experiment Station, who also was not associated with this work. “Given the current and future challenges related to food insecurity, agricultural production, and a changing climate, novel strategies such as this are greatly needed.”

    The research team also included Pierre-Eric Millard, Ophelie Zeyons, Henning Urch, Douglas Findley and Rupert Konradi from the BASF corporation, in Germany and in the U.S. The work was supported by BASF through the Northeast Research Alliance (NORA).

  • Explained: Why perovskites could take solar cells to new heights

    Perovskites hold promise for creating solar panels that could be easily deposited onto most surfaces, including flexible and textured ones. These materials would also be lightweight, cheap to produce, and as efficient as today’s leading photovoltaic materials, which are mainly silicon. They’re the subject of increasing research and investment, but companies looking to harness their potential do have to address some remaining hurdles before perovskite-based solar cells can be commercially competitive.

    The term perovskite refers not to a specific material, like silicon or cadmium telluride, other leading contenders in the photovoltaic realm, but to a whole family of compounds. The perovskite family of solar materials is named for its structural similarity to a mineral called perovskite, which was discovered in 1839 and named after Russian mineralogist L.A. Perovski.

    The original mineral perovskite, which is calcium titanium oxide (CaTiO3), has a distinctive crystal configuration. It has a three-part structure, whose components have come to be labeled A, B and X, in which lattices of the different components are interlaced. The family of perovskites consists of the many possible combinations of elements or molecules that can occupy each of the three components and form a structure similar to that of the original perovskite itself. (Some researchers even bend the rules a little by naming other crystal structures with similar elements “perovskites,” although this is frowned upon by crystallographers.)

    “You can mix and match atoms and molecules into the structure, with some limits. For instance, if you try to stuff a molecule that’s too big into the structure, you’ll distort it. Eventually you might cause the 3D crystal to separate into a 2D layered structure, or lose ordered structure entirely,” says Tonio Buonassisi, professor of mechanical engineering at MIT and director of the Photovoltaics Research Laboratory. “Perovskites are highly tunable, like a build-your-own-adventure type of crystal structure,” he says.

    That structure of interlaced lattices consists of ions or charged molecules, two of them (A and B) positively charged and the other one (X) negatively charged. The A and B ions are typically of quite different sizes, with the A being larger. 
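
    The size limits Buonassisi describes are often captured by the Goldschmidt tolerance factor, a classic rule of thumb relating the ionic radii of the A, B, and X components: t = (r_A + r_X) / (√2 (r_B + r_X)), with values roughly between 0.8 and 1.0 favoring a stable 3D perovskite lattice. A minimal sketch of the screening arithmetic — the Shannon radii below are illustrative literature values, not taken from this article:

```python
from math import sqrt

def tolerance_factor(r_a, r_b, r_x):
    """Goldschmidt tolerance factor t = (r_A + r_X) / (sqrt(2) * (r_B + r_X))."""
    return (r_a + r_x) / (sqrt(2) * (r_b + r_x))

# Illustrative Shannon ionic radii in angstroms (values vary with coordination number).
radii = {
    "Ca2+": 1.34, "Ti4+": 0.605, "O2-": 1.40,  # the original mineral, CaTiO3
    "Cs+": 1.88, "Pb2+": 1.19, "I-": 2.20,     # CsPbI3, a lead halide perovskite
}

t_catio3 = tolerance_factor(radii["Ca2+"], radii["Ti4+"], radii["O2-"])
t_cspbi3 = tolerance_factor(radii["Cs+"], radii["Pb2+"], radii["I-"])

for name, t in [("CaTiO3", t_catio3), ("CsPbI3", t_cspbi3)]:
    # Empirically, t between roughly 0.8 and 1.0 favors a 3D perovskite lattice;
    # oversized A-site ions (t well above 1) distort or break up the structure.
    verdict = "likely perovskite" if 0.8 < t < 1.0 else "distorted / non-perovskite"
    print(f"{name}: t = {t:.2f} ({verdict})")
```

    Oversized A-site ions push t above 1 and distort the lattice toward the 2D layered structures Buonassisi mentions; undersized ones drop t below 0.8 and favor non-perovskite phases.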

    Within the overall category of perovskites, there are a number of types, including metal oxide perovskites, which have found applications in catalysis and in energy storage and conversion, such as in fuel cells and metal-air batteries. But a main focus of research activity for more than a decade has been on lead halide perovskites, Buonassisi says.

    Within that category, there is still a legion of possibilities, and labs around the world are racing through the tedious work of trying to find the variations that show the best performance in efficiency, cost, and durability — which has so far been the most challenging of the three.

    Many teams have also focused on variations that eliminate the use of lead, to avoid its environmental impact. Buonassisi notes, however, that “consistently over time, the lead-based devices continue to improve in their performance, and none of the other compositions got close in terms of electronic performance.” Work continues on exploring alternatives, but for now none can compete with the lead halide versions.

    One of the great advantages perovskites offer is their high tolerance of defects in the structure, he says. Unlike silicon, which requires extremely high purity to function well in electronic devices, perovskites can function well even with numerous imperfections and impurities.

    Searching for promising new candidate compositions for perovskites is a bit like looking for a needle in a haystack, but recently researchers have come up with a machine-learning system that can greatly streamline this process. This new approach could lead to a much faster development of new alternatives, says Buonassisi, who was a co-author of that research.
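
    The article doesn't detail that machine-learning system, but the general idea of surrogate-model screening can be sketched in a few lines: fit a cheap model to a handful of measured compositions, then rank untested candidates by predicted merit instead of synthesizing and testing each one. Everything below — the descriptor, the measurements, the candidate names — is hypothetical, for illustration only:

```python
# Surrogate-model screening sketch: predict the merit of untested compositions
# from their similarity to a few measured ones, then rank them.

def predict(x, measured):
    """1-nearest-neighbor surrogate: return the measured score of the
    composition whose descriptor value is closest to x."""
    return min(measured, key=lambda p: abs(p[0] - x))[1]

# Hypothetical training data: (structural descriptor, measured stability score).
measured = [(0.85, 0.40), (0.90, 0.70), (0.95, 0.90), (1.00, 0.80)]

# Hypothetical untested candidates, keyed by the same descriptor.
candidates = {"candidate-1": 0.88, "candidate-2": 0.93, "candidate-3": 1.05}

# Rank candidates by predicted stability; only the top few would be synthesized.
ranked = sorted(candidates, key=lambda c: predict(candidates[c], measured),
                reverse=True)
print(ranked)
```

    The payoff is that the expensive step — actually making and measuring a composition — is reserved for the highest-ranked candidates, which is what lets such systems race through a combinatorial space far faster than exhaustive synthesis.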

    While perovskites continue to show great promise, and several companies are already gearing up to begin some commercial production, durability remains the biggest obstacle they face. While silicon solar panels retain up to 90 percent of their power output after 25 years, perovskites degrade much faster. Great progress has been made — initial samples lasted only a few hours, then weeks or months, but newer formulations have usable lifetimes of up to a few years, suitable for some applications where longevity is not essential.

    From a research perspective, Buonassisi says, one advantage of perovskites is that they are relatively easy to make in the lab — the chemical constituents assemble readily. But that’s also their downside: “The material goes together very easily at room temperature,” he says, “but it also comes apart very easily at room temperature. Easy come, easy go!”

    To deal with that issue, most researchers are focused on using various kinds of protective materials to encapsulate the perovskite, protecting it from exposure to air and moisture. But others are studying the exact mechanisms that lead to that degradation, in hopes of finding formulations or treatments that are more inherently robust. A key finding is that a process called autocatalysis is largely to blame for the breakdown.

    In autocatalysis, as soon as one part of the material starts to degrade, its reaction products act as catalysts to start degrading the neighboring parts of the structure, and a runaway reaction gets underway. A similar problem existed in the early research on some other electronic materials, such as organic light-emitting diodes (OLEDs), and was eventually solved by adding additional purification steps to the raw materials, so a similar solution may be found in the case of perovskites, Buonassisi suggests.

    Buonassisi and his co-researchers recently completed a study showing that once perovskites reach a usable lifetime of at least a decade, their much lower initial cost would be sufficient to make them economically viable as a substitute for silicon in large, utility-scale solar farms.

    Overall, progress in the development of perovskites has been impressive and encouraging, he says. With just a few years of work, it has already achieved efficiencies comparable to levels that cadmium telluride (CdTe), “which has been around for much longer, is still struggling to achieve,” he says. “The ease with which these higher performances are reached in this new material are almost stupefying.” Comparing the amount of research time spent to achieve a 1 percent improvement in efficiency, he says, the progress on perovskites has been somewhere between 100 and 1000 times faster than that on CdTe. “That’s one of the reasons it’s so exciting,” he says.

  • MIT engineers design surfaces that make water boil more efficiently

    The boiling of water or other fluids is an energy-intensive step at the heart of a wide range of industrial processes, including most electricity generating plants, many chemical production systems, and even cooling systems for electronics.

    Improving the efficiency of systems that heat and evaporate water could significantly reduce their energy use. Now, researchers at MIT have found a way to do just that, with a specially tailored surface treatment for the materials used in these systems.

    The improved efficiency comes from a combination of three different kinds of surface modifications, at different size scales. The new findings are described in the journal Advanced Materials in a paper by recent MIT graduate Youngsup Song PhD ’21, Ford Professor of Engineering Evelyn Wang, and four others at MIT. The researchers note that this initial finding is still at a laboratory scale, and more work is needed to develop a practical, industrial-scale process.

    There are two key parameters that describe the boiling process: the heat transfer coefficient (HTC) and the critical heat flux (CHF). In materials design, there’s generally a tradeoff between the two, so anything that improves one of these parameters tends to make the other worse. But both are important for the efficiency of the system, and now, after years of work, the team has achieved a way of significantly improving both properties at the same time, through their combination of different textures added to a material’s surface.

    “Both parameters are important,” Song says, “but enhancing both parameters together is kind of tricky because they have intrinsic trade off.” The reason for that, he explains, is “because if we have lots of bubbles on the boiling surface, that means boiling is very efficient, but if we have too many bubbles on the surface, they can coalesce together, which can form a vapor film over the boiling surface.” That film introduces resistance to the heat transfer from the hot surface to the water. “If we have vapor in between the surface and water, that prevents the heat transfer efficiency and lowers the CHF value,” he says.
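
    The tradeoff Song describes can be made concrete with the standard boiling relation q = HTC × ΔT, where ΔT is the superheat of the surface above the liquid's boiling point; the relation holds only below the CHF, beyond which a vapor film blankets the surface. A toy calculation with illustrative numbers, not taken from the paper:

```python
def boiling_heat_flux(htc, superheat, chf):
    """Heat flux q = HTC * dT in W/m^2, valid only below the critical heat flux."""
    q = htc * superheat
    if q >= chf:
        raise ValueError("beyond CHF: a vapor film forms and the surface overheats")
    return q

# Illustrative numbers: HTC = 50 kW/m^2/K, CHF = 1.5 MW/m^2.
htc, chf = 50_000.0, 1.5e6
print(boiling_heat_flux(htc, 10.0, chf))  # 10 K of superheat: 0.5 MW/m^2, below CHF

# The maximum usable superheat is CHF / HTC, so raising the HTC at the
# expense of the CHF (or vice versa) narrows the safe operating window --
# which is why improving both together matters.
print(chf / htc)  # 30 K
```

    In this framing, the MIT surface treatment widens the safe operating window from both ends at once: more efficient heat transfer per degree of superheat, and a higher flux ceiling before film formation.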

    Song, who is now a postdoc at Lawrence Berkeley National Laboratory, carried out much of the research as part of his doctoral thesis work at MIT. While the various components of the new surface treatment he developed had been previously studied, the researchers say this work is the first to show that these methods could be combined to overcome the tradeoff between the two competing parameters.

    Adding a series of microscale cavities, or dents, to a surface is a way of controlling the way bubbles form on that surface, keeping them effectively pinned to the locations of the dents and preventing them from spreading out into a heat-resisting film. In this work, the researchers created an array of 10-micrometer-wide dents separated by about 2 millimeters to prevent film formation. But that separation also reduces the concentration of bubbles at the surface, which can reduce the boiling efficiency. To compensate for that, the team introduced a much smaller-scale surface treatment, creating tiny bumps and ridges at the nanometer scale, which increases the surface area and promotes the rate of evaporation under the bubbles.

    In these experiments, the cavities were made in the centers of a series of pillars on the material’s surface. These pillars, combined with nanostructures, promote wicking of liquid from the base to their tops, and this enhances the boiling process by providing more surface area exposed to the water. In combination, the three “tiers” of the surface texture — the cavity separation, the posts, and the nanoscale texturing — provide a greatly enhanced efficiency for the boiling process, Song says.

    “Those micro cavities define the position where bubbles come up,” he says. “But by separating those cavities by 2 millimeters, we separate the bubbles and minimize the coalescence of bubbles.” At the same time, the nanostructures promote evaporation under the bubbles, and the capillary action induced by the pillars supplies liquid to the bubble base. That maintains a layer of liquid water between the boiling surface and the bubbles of vapor, which enhances the maximum heat flux.

    Although their work has confirmed that the combination of these kinds of surface treatments can work and achieve the desired effects, this work was done under small-scale laboratory conditions that could not easily be scaled up to practical devices, Wang says. “These kinds of structures we’re making are not meant to be scaled in its current form,” she says, but rather were used to prove that such a system can work. One next step will be to find alternative ways of creating these kinds of surface textures so these methods could more easily be scaled up to practical dimensions.

    “Showing that we can control the surface in this way to get enhancement is a first step,” she says. “Then the next step is to think about more scalable approaches.” For example, though the pillars on the surface in these experiments were created using clean-room methods commonly used to produce semiconductor chips, there are other, less demanding ways of creating such structures, such as electrodeposition. There are also a number of different ways to produce the surface nanostructure textures, some of which may be more easily scalable.

    There may be some significant small-scale applications that could use this process in its present form, such as the thermal management of electronic devices, an area that is becoming more important as semiconductor devices get smaller and managing their heat output becomes ever more important. “There’s definitely a space there where this is really important,” Wang says.

    Even those kinds of applications will take some time to develop because typically thermal management systems for electronics use liquids other than water, known as dielectric liquids. These liquids have different surface tension and other properties than water, so the dimensions of the surface features would have to be adjusted accordingly. Work on these differences is one of the next steps for the ongoing research, Wang says.

    This same multiscale structuring technique could also be applied to different liquids, Song says, by adjusting the dimensions to account for the different properties of the liquids. “Those kinds of details can be changed, and that can be our next step,” he says.

    The team also included Carlos Diaz-Martin, Lenan Zhang, Hyeongyun Cha, and Yajing Zhao, all at MIT. The work was supported by the Advanced Research Projects Agency-Energy (ARPA-E), the Air Force Office of Scientific Research, and the Singapore-MIT Alliance for Research and Technology, and made use of the MIT.nano facilities.