More stories

  • Using liquid air for grid-scale energy storage

    As the world moves to reduce carbon emissions, solar and wind power will play an increasing role on electricity grids. But those renewable sources only generate electricity when it’s sunny or windy. So to ensure a reliable power grid — one that can deliver electricity 24/7 — it’s crucial to have a means of storing electricity when supplies are abundant and delivering it later, when they’re not. And sometimes large amounts of electricity will need to be stored not just for hours, but for days, or even longer.

    Some methods of achieving “long-duration energy storage” are promising. For example, with pumped hydro energy storage, water is pumped from a lake to another, higher lake when there’s extra electricity and released back down through power-generating turbines when more electricity is needed. But that approach is limited by geography, and most potential sites in the United States have already been used. Lithium-ion batteries could provide grid-scale storage, but only for about four hours. Longer than that and battery systems get prohibitively expensive.

    A team of researchers from MIT and the Norwegian University of Science and Technology (NTNU) has been investigating a less-familiar option based on an unlikely-sounding concept: liquid air, or air that is drawn in from the surroundings, cleaned and dried, and then cooled to the point that it liquefies. “Liquid air energy storage” (LAES) systems have been built, so the technology is technically feasible. Moreover, LAES systems are totally clean and can be sited nearly anywhere, storing vast amounts of electricity for days or longer and delivering it when it’s needed. But there haven’t been conclusive studies of its economic viability. Would the income over time warrant the initial investment and ongoing costs? With funding from the MIT Energy Initiative’s Future Energy Systems Center, the researchers developed a model that takes detailed information on LAES systems and calculates when and where those systems would be economically viable, assuming future scenarios in line with selected decarbonization targets as well as other conditions that may prevail on future energy grids.

    They found that under some of the scenarios they modeled, LAES could be economically viable in certain locations. Sensitivity analyses showed that policies providing a subsidy on capital expenses could make LAES systems economically viable in many locations. Further calculations showed that the cost of storing a given amount of electricity with LAES would be lower than with more familiar systems such as pumped hydro and lithium-ion batteries. They conclude that LAES holds promise as a means of providing critically needed long-duration storage when future power grids are decarbonized and dominated by intermittent renewable sources of electricity.

    The researchers — Shaylin A. Cetegen, a PhD candidate in the MIT Department of Chemical Engineering (ChemE); Professor Emeritus Truls Gundersen of the NTNU Department of Energy and Process Engineering; and MIT Professor Emeritus Paul I. Barton of ChemE — describe their model and their findings in a new paper published in the journal Energy.

    The LAES technology and its benefits

    LAES systems consist of three steps: charging, storing, and discharging. When supply on the grid exceeds demand and prices are low, the LAES system is charged. Air is then drawn in and liquefied, a step that consumes a large amount of electricity. The liquid air is then sent to highly insulated storage tanks, where it’s held at a very low temperature and atmospheric pressure. When the power grid needs added electricity to meet demand, the liquid air is first pumped to a higher pressure and then heated, and it turns back into a gas. This high-pressure, high-temperature, vapor-phase air expands in a turbine that generates electricity to be sent back to the grid.

    According to Cetegen, a primary advantage of LAES is that it’s clean. “There are no contaminants involved,” she says. “It takes in and releases only ambient air and electricity, so it’s as clean as the electricity that’s used to run it.” In addition, a LAES system can be built largely from commercially available components and does not rely on expensive or rare materials. And the system can be sited almost anywhere, including near other industrial processes that produce waste heat or cold that can be used by the LAES system to increase its energy efficiency.

    Economic viability

    In considering the potential role of LAES on future power grids, the first question is: Will LAES systems be attractive to investors? Answering that question requires calculating the technology’s net present value (NPV), which represents the sum of all discounted cash flows — including revenues, capital expenditures, operating costs, and other financial factors — over the project’s lifetime. (The study assumed a cash flow discount rate of 7 percent.)

    To calculate the NPV, the researchers needed to determine how LAES systems will perform in future energy markets. In those markets, various sources of electricity are brought online to meet the current demand, typically following a process called “economic dispatch”: the lowest-cost source that’s available is always deployed next. Determining the NPV of liquid air storage therefore requires predicting how that technology will fare in future markets competing with other sources of electricity when demand exceeds supply — and also accounting for prices when supply exceeds demand, so excess electricity is available to recharge the LAES systems.

    For their study, the MIT and NTNU researchers designed a model that starts with a description of an LAES system, including details such as the sizes of the units where the air is liquefied and the power is recovered, and also capital expenses based on estimates reported in the literature. The model then draws on state-of-the-art pricing data that’s released every year by the National Renewable Energy Laboratory (NREL) and is widely used by energy modelers worldwide. The NREL dataset forecasts prices, construction and retirement of specific types of electricity generation and storage facilities, and more, assuming eight decarbonization scenarios for 18 regions of the United States out to 2050.

    The new model then tracks buying and selling in energy markets for every hour of every day in a year, repeating the same schedule for five-year intervals. Based on the NREL dataset and details of the LAES system — plus constraints such as the system’s physical storage capacity and how often it can switch between charging and discharging — the model calculates how much money LAES operators would make selling power to the grid when it’s needed and how much they would spend buying electricity when it’s available to recharge their LAES system. In line with the NREL dataset, the model generates results for 18 U.S. regions and eight decarbonization scenarios, including 100 percent decarbonization by 2035 and 95 percent decarbonization by 2050, and other assumptions about future energy grids, including high-demand growth plus high and low costs for renewable energy and for natural gas.

    Cetegen describes some of their results: “Assuming a 100-megawatt (MW) system — a standard sort of size — we saw economic viability pop up under the decarbonization scenario calling for 100 percent decarbonization by 2035.” So, positive NPVs (indicating economic viability) occurred only under the most aggressive — therefore the least realistic — scenario, and they occurred in only a few southern states, including Texas and Florida, likely because of how those energy markets are structured and operate.

    The researchers also tested the sensitivity of NPVs to different storage capacities, that is, how long the system could continuously deliver power to the grid. They calculated the NPVs of a 100 MW system that could provide electricity supply for one day, one week, and one month. “That analysis showed that under aggressive decarbonization, weekly storage is more economically viable than monthly storage, because [in the latter case] we’re paying for more storage capacity than we need,” explains Cetegen.

    Improving the NPV of the LAES system

    The researchers next analyzed two possible ways to improve the NPV of liquid air storage: by increasing the system’s energy efficiency and by providing financial incentives. Their analyses showed that increasing the energy efficiency, even up to the theoretical limit of the process, would not change the economic viability of LAES under the most realistic decarbonization scenarios. On the other hand, a major improvement resulted when they assumed policies providing subsidies on capital expenditures on new installations. Indeed, assuming subsidies of between 40 percent and 60 percent made the NPVs for a 100 MW system become positive under all the realistic scenarios.

    Thus, their analysis showed that financial incentives could be far more effective than technical improvements in making LAES economically viable. While engineers may find that outcome disappointing, Cetegen notes that from a broader perspective, it’s good news. “You could spend your whole life trying to optimize the efficiency of this process, and it wouldn’t translate to securing the investment needed to scale the technology,” she says. “Policies can take a long time to implement as well. But theoretically you could do it overnight. So if storage is needed [on a future decarbonized grid], then this is one way to encourage adoption of LAES right away.”

    Cost comparison with other energy storage technologies

    Calculating the economic viability of a storage technology is highly dependent on the assumptions used. As a result, a different measure — the “levelized cost of storage” (LCOS) — is typically used to compare the costs of different storage technologies. In simple terms, the LCOS is the cost of storing each unit of energy over the lifetime of a project, not accounting for any income that results.

    On that measure, the LAES technology excels. The researchers’ model yielded an LCOS for liquid air storage of about $60 per megawatt-hour, regardless of the decarbonization scenario. That LCOS is about a third that of lithium-ion battery storage and half that of pumped hydro. Cetegen cites another interesting finding: the LCOS of their assumed LAES system varied depending on where it is used. The standard practice of reporting a single LCOS for a given energy storage technology may not provide the full picture.

    Cetegen has adapted the model and is now calculating the NPV and LCOS for energy storage using lithium-ion batteries. But she’s already encouraged by the LCOS of liquid air storage. “While LAES systems may not be economically viable from an investment perspective today, that doesn’t mean they won’t be implemented in the future,” she concludes. “With limited options for grid-scale storage expansion and the growing need for storage technologies to ensure energy security, if we can’t find economically viable alternatives, we’ll likely have to turn to least-cost solutions to meet storage needs. This is why the story of liquid air storage is far from over. We believe our findings justify the continued exploration of LAES as a key energy storage solution for the future.”
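    The economics described above rest on two figures of merit: a net present value built from discounted hourly buy/sell cash flows (at the study’s 7 percent discount rate) and a levelized cost of storage. The short Python sketch below shows how those quantities are typically computed; the capital cost, revenues, and lifetime are illustrative placeholders, not values from the MIT/NTNU model.

    ```python
    # Illustrative NPV and LCOS calculations for a storage project.
    # All numeric inputs are placeholder values, not figures from the study.

    def npv(annual_cash_flows, capex, discount_rate=0.07):
        """Net present value: discounted cash flows minus upfront capital cost."""
        return -capex + sum(
            cf / (1 + discount_rate) ** year
            for year, cf in enumerate(annual_cash_flows, start=1)
        )

    def lcos(capex, annual_opex, annual_mwh_discharged, lifetime_years,
             discount_rate=0.07):
        """Levelized cost of storage: discounted lifetime costs per discounted MWh delivered."""
        disc_costs = capex + sum(
            annual_opex / (1 + discount_rate) ** y
            for y in range(1, lifetime_years + 1)
        )
        disc_energy = sum(
            annual_mwh_discharged / (1 + discount_rate) ** y
            for y in range(1, lifetime_years + 1)
        )
        return disc_costs / disc_energy

    # Hypothetical 100 MW system with made-up economics, for illustration only.
    capex = 300e6                      # upfront capital cost, $
    annual_net_revenue = 25e6          # net yearly revenue from buying low / selling high, $
    project_npv = npv([annual_net_revenue] * 30, capex)
    cost_per_mwh = lcos(capex, annual_opex=5e6,
                        annual_mwh_discharged=200_000, lifetime_years=30)
    print(f"NPV: ${project_npv / 1e6:.1f}M, LCOS: ${cost_per_mwh:.0f}/MWh")
    ```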

  • Study: Burning heavy fuel oil with scrubbers is the best available option for bulk maritime shipping

    When the International Maritime Organization enacted a mandatory cap on the sulfur content of marine fuels in 2020, with an eye toward reducing harmful environmental and health impacts, it left shipping companies with three main options.

    They could burn low-sulfur fossil fuels, like marine gas oil, or install cleaning systems to remove sulfur from the exhaust gas produced by burning heavy fuel oil. Biofuels with lower sulfur content offer another alternative, though their limited availability makes them a less feasible option.

    While installing exhaust gas cleaning systems, known as scrubbers, is the most feasible and cost-effective option, there has been a great deal of uncertainty among firms, policymakers, and scientists as to how “green” these scrubbers are.

    Through a novel lifecycle assessment, researchers from MIT, Georgia Tech, and elsewhere have now found that burning heavy fuel oil with scrubbers in the open ocean can match or surpass using low-sulfur fuels, when a wide variety of environmental factors is considered.

    The scientists combined data on the production and operation of scrubbers and fuels with emissions measurements taken onboard an oceangoing cargo ship.

    They found that, when the entire supply chain is considered, burning heavy fuel oil with scrubbers was the least harmful option in terms of nearly all 10 environmental impact factors they studied, such as greenhouse gas emissions, terrestrial acidification, and ozone formation.

    “In our collaboration with Oldendorff Carriers to broadly explore reducing the environmental impact of shipping, this study of scrubbers turned out to be an unexpectedly deep and important transitional issue,” says Neil Gershenfeld, an MIT professor, director of the Center for Bits and Atoms (CBA), and senior author of the study.

    “Claims about environmental hazards and policies to mitigate them should be backed by science. You need to see the data, be objective, and design studies that take into account the full picture to be able to compare different options from an apples-to-apples perspective,” adds lead author Patricia Stathatou, an assistant professor at Georgia Tech, who began this study as a postdoc in the CBA.

    Stathatou is joined on the paper by Michael Triantafyllou, the Henry L. and Grace Doherty Professor in Ocean Science and Engineering, and others at the National Technical University of Athens in Greece and the maritime shipping firm Oldendorff Carriers. The research appears today in Environmental Science and Technology.

    Slashing sulfur emissions

    Heavy fuel oil, traditionally burned by bulk carriers that make up about 30 percent of the global maritime fleet, usually has a sulfur content around 2 to 3 percent. This is far higher than the International Maritime Organization’s 2020 cap of 0.5 percent in most areas of the ocean and 0.1 percent in areas near population centers or environmentally sensitive regions.

    Sulfur oxide emissions contribute to air pollution and acid rain, and can damage the human respiratory system.

    In 2018, fewer than 1,000 vessels employed scrubbers. After the cap went into place, higher prices of low-sulfur fossil fuels and limited availability of alternative fuels led many firms to install scrubbers so they could keep burning heavy fuel oil.

    Today, more than 5,800 vessels utilize scrubbers, the majority of which are wet, open-loop scrubbers.

    “Scrubbers are a very mature technology. They have traditionally been used for decades in land-based applications like power plants to remove pollutants,” Stathatou says.

    A wet, open-loop marine scrubber is a huge, metal, vertical tank installed in a ship’s exhaust stack, above the engines. Inside, seawater drawn from the ocean is sprayed through a series of nozzles downward to wash the hot exhaust gases as they exit the engines.

    The seawater interacts with sulfur dioxide in the exhaust, converting it to sulfates — water-soluble, environmentally benign compounds that naturally occur in seawater. The washwater is released back into the ocean, while the cleaned exhaust escapes to the atmosphere with little to no sulfur dioxide emissions.

    But the acidic washwater can contain other combustion byproducts like heavy metals, so scientists wondered if scrubbers were comparable, from a holistic environmental point of view, to burning low-sulfur fuels.

    Several studies explored the toxicity of washwater and fuel system pollution, but none painted a full picture.

    The researchers set out to fill that scientific gap.

    A “well-to-wake” analysis

    The team conducted a lifecycle assessment using a global environmental database on production and transport of fossil fuels, such as heavy fuel oil, marine gas oil, and very-low sulfur fuel oil. Considering the entire lifecycle of each fuel is key, since producing low-sulfur fuel requires extra processing steps in the refinery, causing additional emissions of greenhouse gases and particulate matter.

    “If we just look at everything that happens before the fuel is bunkered onboard the vessel, heavy fuel oil is significantly more low-impact, environmentally, than low-sulfur fuels,” she says.

    The researchers also collaborated with a scrubber manufacturer to obtain detailed information on all materials, production processes, and transportation steps involved in marine scrubber fabrication and installation.

    “If you consider that the scrubber has a lifetime of about 20 years, the environmental impacts of producing the scrubber over its lifetime are negligible compared to producing heavy fuel oil,” she adds.

    For the final piece, Stathatou spent a week onboard a bulk carrier vessel in China to measure emissions and gather seawater and washwater samples. The ship burned heavy fuel oil with a scrubber and low-sulfur fuels under similar ocean conditions and engine settings.

    Collecting these onboard data was the most challenging part of the study.

    “All the safety gear, combined with the heat and the noise from the engines on a moving ship, was very overwhelming,” she says.

    Their results showed that scrubbers reduce sulfur dioxide emissions by 97 percent, putting heavy fuel oil on par with low-sulfur fuels according to that measure. The researchers saw similar trends for emissions of other pollutants like carbon monoxide and nitrous oxide.

    In addition, they tested washwater samples for more than 60 chemical parameters, including nitrogen, phosphorus, polycyclic aromatic hydrocarbons, and 23 metals.

    The concentrations of chemicals regulated by the IMO were far below the organization’s requirements. For unregulated chemicals, the researchers compared the concentrations to the strictest limits for industrial effluents from the U.S. Environmental Protection Agency and European Union.

    Most chemical concentrations were at least an order of magnitude below these requirements.

    In addition, since washwater is diluted thousands of times as it is dispersed by a moving vessel, the concentrations of such chemicals would be even lower in the open ocean.

    These findings suggest that the use of scrubbers with heavy fuel oil can be considered as equal to or more environmentally friendly than low-sulfur fuels across many of the impact categories the researchers studied.

    “This study demonstrates the scientific complexity of the waste stream of scrubbers. Having finally conducted a multiyear, comprehensive, and peer-reviewed study, commonly held fears and assumptions are now put to rest,” says Scott Bergeron, managing director at Oldendorff Carriers and co-author of the study.

    “This first-of-its-kind study on a well-to-wake basis provides very valuable input to ongoing discussion at the IMO,” adds Thomas Klenum, executive vice president of innovation and regulatory affairs at the Liberian Registry, emphasizing the need “for regulatory decisions to be made based on scientific studies providing factual data and conclusions.”

    Ultimately, this study shows the importance of incorporating lifecycle assessments into future environmental impact reduction policies, Stathatou says.

    “There is all this discussion about switching to alternative fuels in the future, but how green are these fuels? We must do our due diligence to compare them equally with existing solutions to see the costs and benefits,” she adds.

    This study was supported, in part, by Oldendorff Carriers.
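    The core of a “well-to-wake” comparison like the one described above is bookkeeping: each option’s impacts are summed over every lifecycle phase, fuel production, equipment manufacturing, and onboard combustion, for each impact category before the options are compared. The sketch below illustrates that aggregation only; the option names echo the article, but every impact score is a made-up placeholder, not a measurement from the study.

    ```python
    # Sketch of aggregating lifecycle impacts across phases and categories.
    # Scores are arbitrary placeholder numbers for illustration.

    OPTIONS = {
        # option: {lifecycle phase: {impact category: score, arbitrary units}}
        "heavy fuel oil + scrubber": {
            "fuel production": {"climate": 3.0, "acidification": 0.40},
            "scrubber manufacturing (amortized)": {"climate": 0.1, "acidification": 0.01},
            "onboard combustion": {"climate": 9.0, "acidification": 0.50},
        },
        "low-sulfur fuel": {
            "fuel production": {"climate": 4.5, "acidification": 0.50},
            "onboard combustion": {"climate": 9.0, "acidification": 0.40},
        },
    }

    def well_to_wake_totals(option):
        """Sum each impact category over all lifecycle phases of one option."""
        totals = {}
        for phase_impacts in OPTIONS[option].values():
            for category, score in phase_impacts.items():
                totals[category] = totals.get(category, 0.0) + score
        return totals

    for name in OPTIONS:
        print(name, well_to_wake_totals(name))
    ```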

  • Surprise discovery could lead to improved catalysts for industrial reactions

    The process of catalysis — in which a material speeds up a chemical reaction — is crucial to the production of many of the chemicals used in our everyday lives. But even though these catalytic processes are widespread, researchers often lack a clear understanding of exactly how they work.

    A new analysis by researchers at MIT has shown that an important industrial synthesis process, the production of vinyl acetate, requires a catalyst to take two different forms, which cycle back and forth from one to the other as the chemical process unfolds.

    Previously, it had been thought that only one of the two forms was needed. The new findings are published today in the journal Science, in a paper by MIT graduate students Deiaa Harraz and Kunal Lodaya, Bryan Tang PhD ’23, and MIT professor of chemistry and chemical engineering Yogesh Surendranath.

    There are two broad classes of catalysts: homogeneous catalysts, which consist of dissolved molecules, and heterogeneous catalysts, which are solid materials whose surface provides the site for the chemical reaction. “For the longest time,” Surendranath says, “there’s been a general view that you either have catalysis happening on these surfaces, or you have them happening on these soluble molecules.” But the new research shows that in the case of vinyl acetate — an important material that goes into many polymer products such as the rubber in the soles of your shoes — there is an interplay between both classes of catalysis.

    “What we discovered,” Surendranath explains, “is that you actually have these solid metal materials converting into molecules, and then converting back into materials, in a cyclic dance.”

    He adds: “This work calls into question this paradigm where there’s either one flavor of catalysis or another. Really, there could be an interplay between both of them in certain cases, and that could be really advantageous for having a process that’s selective and efficient.”

    The synthesis of vinyl acetate has been a large-scale industrial reaction since the 1960s, and it has been well-researched and refined over the years to improve efficiency. This has happened largely through a trial-and-error approach, without a precise understanding of the underlying mechanisms, the researchers say.

    While chemists are often more familiar with homogeneous catalysis mechanisms, and chemical engineers are often more familiar with surface catalysis mechanisms, fewer researchers study both. This is perhaps part of the reason that the full complexity of this reaction was not previously captured. But Harraz says he and his colleagues are working at the interface between disciplines. “We’ve been able to appreciate both sides of this reaction and find that both types of catalysis are critical,” he says.

    The reaction that produces vinyl acetate requires something to activate the oxygen molecules that are one of the constituents of the reaction, and something else to activate the other ingredients, acetic acid and ethylene. The researchers found that the form of the catalyst that worked best for one part of the process was not the best for the other. It turns out that the molecular form of the catalyst does the key chemistry with the ethylene and the acetic acid, while it’s the surface that ends up doing the activation of the oxygen.

    They found that the underlying process involved in interconverting the two forms of the catalyst is actually corrosion, similar to the process of rusting. “It turns out that in rusting, you actually go through a soluble molecular species somewhere in the sequence,” Surendranath says.

    The team borrowed techniques traditionally used in corrosion research to study the process. They used electrochemical tools to study the reaction, even though the overall reaction does not require a supply of electricity. By making potential measurements, the researchers determined that the corrosion of the palladium catalyst material to soluble palladium ions is driven by an electrochemical reaction with the oxygen, converting it to water. Corrosion is “one of the oldest topics in electrochemistry,” says Lodaya, “but applying the science of corrosion to understand catalysis is much newer, and was essential to our findings.”

    By correlating measurements of catalyst corrosion with other measurements of the chemical reaction taking place, the researchers proposed that it was the corrosion rate that was limiting the overall reaction. “That’s the choke point that’s controlling the rate of the overall process,” Surendranath says.

    The interplay between the two types of catalysis works efficiently and selectively “because it actually uses the synergy of a material surface doing what it’s good at and a molecule doing what it’s good at,” Surendranath says. The finding suggests that, when designing new catalysts, rather than focusing on either solid materials or soluble molecules alone, researchers should think about how the interplay of both may open up new approaches.

    “Now, with an improved understanding of what makes this catalyst so effective, you can try to design specific materials or specific interfaces that promote the desired chemistry,” Harraz says. Since this process has been worked on for so long, these findings may not necessarily lead to improvements in this specific process of making vinyl acetate, but they do provide a better understanding of why the materials work as they do, and could lead to improvements in other catalytic processes.

    Understanding that “catalysts can transit between molecule and material and back, and the role that electrochemistry plays in those transformations, is a concept that we are really excited to expand on,” Lodaya says.

    Harraz adds: “With this new understanding that both types of catalysis could play a role, what other catalytic processes are out there that actually involve both? Maybe those have a lot of room for improvement that could benefit from this understanding.”

    This work is “illuminating, something that will be worth teaching at the undergraduate level,” says Christophe Coperet, a professor of inorganic chemistry at ETH Zurich, who was not associated with the research. “The work highlights new ways of thinking. … [It] is notable in the sense that it not only reconciles homogeneous and heterogeneous catalysis, but it describes these complex processes as half reactions, where electron transfers can cycle between distinct entities.”

    The research was supported, in part, by the National Science Foundation as a Phase I Center for Chemical Innovation; the Center for Interfacial Ionics; and the Gordon and Betty Moore Foundation.
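    The idea that corrosion is the rate-limiting, or “choke point,” step can be pictured with a generic steady-state argument: in a sequential catalytic cycle, the overall turnover cannot exceed the capacity of its slowest step. The toy sketch below illustrates only that bookkeeping; the step names paraphrase the article, and the rate values are invented for illustration, not kinetics from the paper.

    ```python
    # Toy illustration of a rate-limiting step in a cyclic process.
    # Each step gets a hypothetical maximum rate (arbitrary units); at steady
    # state the cycle cannot turn over faster than its slowest step.
    steps = {
        "surface Pd corrodes to dissolved Pd ions (driven by oxygen reduction)": 1.0,
        "molecular Pd couples ethylene and acetic acid into vinyl acetate": 8.0,
        "dissolved Pd redeposits back onto the surface": 5.0,
    }

    bottleneck = min(steps, key=steps.get)
    print(f"cycle throughput limited to ~{steps[bottleneck]} units/s by:")
    print(f"  {bottleneck}")
    ```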

  • For plants, urban heat islands don’t mimic global warming

    It’s tricky to predict precisely what the impacts of climate change will be, given the many variables involved. To predict the impacts of a warmer world on plant life, some researchers look at urban “heat islands,” where, because of the effects of urban structures, temperatures consistently run a few degrees higher than those of the surrounding rural areas. This enables side-by-side comparisons of plant responses.

    But a new study by researchers at MIT and Harvard University has found that, at least for forests, urban heat islands are a poor proxy for global warming, and this may have led researchers to underestimate the impacts of warming in some cases. The discrepancy, they found, has a lot to do with the limited genetic diversity of urban tree species.

    The findings appear in the journal PNAS, in a paper by MIT postdoc Meghan Blumstein, professor of civil and environmental engineering David Des Marais, and four others.

    “The appeal of these urban temperature gradients is, well, it’s already there,” says Des Marais. “We can’t look into the future, so why don’t we look across space, comparing rural and urban areas?” Because such data is easily obtainable, methods comparing the growth of plants in cities with similar plants outside them have been widely used, he says, and have been quite useful. Researchers did recognize some shortcomings to this approach, including significant differences in availability of some nutrients such as nitrogen. Still, “a lot of ecologists recognized that they weren’t perfect, but it was what we had,” he says.

    Most of the research by Des Marais’ group is lab-based, under conditions tightly controlled for temperature, humidity, and carbon dioxide concentration. While there are a handful of experimental sites where conditions are modified out in the field, for example using heaters around one or a few trees, “those are super small-scale,” he says. “When you’re looking at these longer-term trends that are occurring over space that’s quite a bit larger than you could reasonably manipulate, an important question is, how do you control the variables?”

    Temperature gradients have offered one approach to this problem, but Des Marais and his students have also been focusing on the genetics of the tree species involved, comparing those sampled in cities to the same species sampled in a natural forest nearby. And it turned out there were differences, even between trees that appeared similar.

    “So, lo and behold, you think you’re only letting one variable change in your model, which is the temperature difference from an urban to a rural setting,” he says, “but in fact, it looks like there was also a genotypic diversity that was not being accounted for.”

    The genetic differences meant that the plants being studied were not representative of those in the natural environment, and the researchers found that the difference was actually masking the impact of warming. The urban trees, they found, were less affected than their natural counterparts in terms of when the plants’ leaves grew and unfurled, or “leafed out,” in the spring.

    The project began during the pandemic lockdown, when Blumstein was a graduate student. She had a grant to study red oak genotypes across New England, but was unable to travel because of lockdowns. So, she concentrated on trees that were within reach in Cambridge, Massachusetts. She then collaborated with people doing research at the Harvard Forest, a research forest in rural central Massachusetts. They collected three years of data from both locations, including the temperature profiles, the leafing-out timing, and the genetic profiles of the trees. Though the study was looking at red oaks specifically, the researchers say the findings are likely to apply to trees broadly.

    At the time, researchers had just sequenced the oak tree genome, and that allowed Blumstein and her colleagues to look for subtle differences among the red oaks in the two locations. The differences they found showed that the urban trees were more resistant to the effects of warmer temperatures than were those in the natural environment.

    “Initially, we saw these results and we were sort of like, oh, this is a bad thing,” Des Marais says. “Ecologists are getting this heat island effect wrong, which is true.” Fortunately, this can be easily corrected by factoring in genomic data. “It’s not that much more work, because sequencing genomes is so cheap and so straightforward. Now, if someone wants to look at an urban-rural gradient and make these kinds of predictions, well, that’s fine. You just have to add some information about the genomes.”

    It’s not surprising that this genetic variation exists, he says, since growers have learned by trial and error over the decades which varieties of trees tend to thrive in the difficult urban environment, with typically poor soil, poor drainage, and pollution. “As a result, there’s just not much genetic diversity in our trees within cities.”

    The implications could be significant, Des Marais says. When the Intergovernmental Panel on Climate Change (IPCC) releases its regular reports on the status of the climate, “one of the tools the IPCC has to predict future responses to climate change with respect to temperature are these urban-to-rural gradients.” He hopes that these new findings will be incorporated into their next report, which is just being drafted. “If these results are generally true beyond red oaks, this suggests that the urban heat island approach to studying plant response to temperature is underpredicting how strong that response is.”

    The research team included Sophie Webster, Robin Hopkins, and David Basler from Harvard University and Jie Yun from MIT. The work was supported by the National Science Foundation, the Bullard Fellowship at the Harvard Forest, and MIT.
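    The statistical issue described above is a classic confounder: urban sites are both warmer and planted with heat-tolerant genotypes, so a comparison that ignores genotype understates how strongly the wild population responds to temperature. The Python sketch below illustrates that effect on synthetic data (not the study’s measurements), using an ordinary least-squares fit with and without a genotype term; the numbers of days per degree are invented for the illustration.

    ```python
    # Toy illustration of genotype confounding in an urban-rural comparison.
    # Synthetic data: warming advances leaf-out by 3 days/degree C in the wild
    # genotype, but the heat-tolerant urban genotype responds by only 1 day/degree C.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    urban = rng.integers(0, 2, n)                       # 1 = urban site, 0 = rural forest
    temp = 10 + 2.0 * urban + rng.normal(0, 0.5, n)     # urban sites run ~2 C warmer
    tolerant = urban                                    # genotype tracks planting history
    temp_anom = temp - 10.0
    leafout = 110 - 3.0 * temp_anom + 2.0 * tolerant * temp_anom + rng.normal(0, 1, n)

    def ols(X, y):
        """Ordinary least-squares coefficients."""
        return np.linalg.lstsq(X, y, rcond=None)[0]

    naive = ols(np.column_stack([np.ones(n), temp_anom]), leafout)
    adjusted = ols(np.column_stack([np.ones(n), temp_anom, tolerant * temp_anom]), leafout)

    print(f"temperature effect, ignoring genotype:  {naive[1]:.2f} days per degree C")
    print(f"temperature effect, genotype-adjusted:  {adjusted[1]:.2f} days per degree C")
    # The naive slope is much weaker than the true -3 days/degree C response of
    # the wild genotype, mirroring the underprediction described in the article.
    ```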

  • MIT Maritime Consortium sets sail

    Around 11 billion tons of goods, or about 1.5 tons per person worldwide, are transported by sea each year, representing about 90 percent of global trade by volume. Internationally, the merchant shipping fleet numbers around 110,000 vessels. These ships, and the ports that service them, are significant contributors to the local and global economy — and they’re significant contributors to greenhouse gas emissions.

    A new consortium, formalized in a signing ceremony at MIT last week, aims to address climate-harming emissions in the maritime shipping industry, while supporting efforts for environmentally friendly operation in compliance with the decarbonization goals set by the International Maritime Organization.

    “This is a timely collaboration with key stakeholders from the maritime industry with a very bold and interdisciplinary research agenda that will establish new technologies and evidence-based standards,” says Themis Sapsis, the William Koch Professor of Marine Technology at MIT and the director of MIT’s Center for Ocean Engineering. “It aims to bring the best from MIT in key areas for commercial shipping, such as nuclear technology for commercial settings, autonomous operation and AI methods, improved hydrodynamics and ship design, cybersecurity, and manufacturing.”

    Co-led by Sapsis and Fotini Christia, the Ford International Professor of the Social Sciences; director of the Institute for Data, Systems, and Society (IDSS); and director of the MIT Sociotechnical Systems Research Center, the newly launched MIT Maritime Consortium (MC) brings together MIT collaborators from across campus, including the Center for Ocean Engineering, which is housed in the Department of Mechanical Engineering; IDSS, which is housed in the MIT Schwarzman College of Computing; the departments of Nuclear Science and Engineering and Civil and Environmental Engineering; MIT Sea Grant; and others, with a national and an international community of industry experts.

    The Maritime Consortium’s founding members are the American Bureau of Shipping (ABS), Capital Clean Energy Carriers Corp., and HD Korea Shipbuilding and Offshore Engineering. Innovation members are Foresight-Group, Navios Maritime Partners L.P., Singapore Maritime Institute, and Dorian LPG.

    “The challenges the maritime industry faces are challenges that no individual company or organization can address alone,” says Christia. “The solution involves almost every discipline from the School of Engineering, as well as AI and data-driven algorithms, and policy and regulation — it’s a true MIT problem.”

    Researchers will explore new designs for nuclear systems consistent with the techno-economic needs and constraints of commercial shipping, the economic and environmental feasibility of alternative fuels, new data-driven algorithms and rigorous evaluation criteria for autonomous platforms in the maritime space, cyber-physical situational awareness and anomaly detection, as well as 3D printing technologies for onboard manufacturing. Collaborators will also advise on research priorities toward evidence-based standards related to MIT presidential priorities around climate, sustainability, and AI.

    MIT has been a leading center of ship research and design for over a century, and is widely recognized for contributions to hydrodynamics, ship structural mechanics and dynamics, propeller design, and overall ship design, and for its unique educational program for U.S. Navy officers, the Naval Construction and Engineering Program. Research today is at the forefront of ocean science and engineering, with significant efforts in fluid mechanics and hydrodynamics, acoustics, offshore mechanics, marine robotics and sensors, and ocean sensing and forecasting. The consortium’s academic home at MIT also opens the door to cross-departmental collaboration across the Institute.

    The MC will launch multiple research projects designed to tackle challenges from a variety of angles, all united by cutting-edge data analysis and computation techniques. Collaborators will research new designs and methods that improve efficiency and reduce greenhouse gas emissions, explore the feasibility of alternative fuels, and advance data-driven decision-making, manufacturing and materials, hydrodynamic performance, and cybersecurity.

    “This consortium brings a powerful collection of significant companies that, together, has the potential to be a global shipping shaper in itself,” says Christopher J. Wiernicki SM ’85, chair and chief executive officer of ABS. “The strength and uniqueness of this consortium is the members, which are all world-class organizations and real difference makers. The ability to harness the members’ experience and know-how, along with MIT’s technology reach, creates real jet fuel to drive progress,” Wiernicki says. “As well as researching key barriers, bottlenecks, and knowledge gaps in the emissions challenge, the consortium looks to enable development of the novel technology and policy innovation that will be key. Long term, the consortium hopes to provide the gravity we will need to bend the curve.”

  • Technology developed by MIT engineers makes pesticides stick to plant leaves

    Reducing the amount of agricultural sprays used by farmers — including fertilizers, pesticides, and herbicides — could cut down the amount of polluting runoff that ends up in the environment while at the same time reducing farmers’ costs and perhaps even enhancing their productivity. A classic win-win-win.

    A team of researchers at MIT and a spinoff company they launched has developed a system to do just that. Their technology adds a thin coating around droplets as they are being sprayed onto a field, greatly reducing their tendency to bounce off leaves and end up wasted on the ground. Instead, the coated droplets stick to the leaves as intended.

    The research is described today in the journal Soft Matter, in a paper by recent MIT alumni Vishnu Jayaprakash PhD ’22 and Sreedath Panat PhD ’23, graduate student Simon Rufer, and MIT professor of mechanical engineering Kripa Varanasi.

    A recent study found that if farmers didn’t use pesticides, they would lose 78 percent of fruit, 54 percent of vegetable, and 32 percent of cereal production. Despite their importance, a lack of technology that monitors and optimizes sprays has forced farmers to rely on personal experience and rules of thumb to decide how to apply these chemicals. As a result, these chemicals tend to be over-sprayed, leading to runoff and chemicals ending up in waterways or building up in the soil.

    Pesticides take a significant toll on global health and the environment, the researchers point out. A recent study found that 31 percent of agricultural soils around the world were at high risk from pesticide pollution. And agricultural chemicals are a major expense for farmers: In the U.S., they spend $16 billion a year just on pesticides.

    Making spraying more efficient is one of the best ways to make food production more sustainable and economical. Agricultural spraying essentially boils down to mixing chemicals into water and spraying water droplets onto plant leaves, which are often inherently water-repellent. “Over more than a decade of research in my lab at MIT, we have developed fundamental understandings of spraying and the interaction between droplets and plants — studying when they bounce and all the ways we have to make them stick better and enhance coverage,” Varanasi says.

    The team had previously found a way to reduce the amount of sprayed liquid that bounces away from the leaves it strikes, which involved using two spray nozzles instead of one and spraying mixtures with opposite electrical charges. But they found that farmers were reluctant to take on the expense and effort of converting their spraying equipment to a two-nozzle system. So, the team looked for a simpler alternative.

    They discovered they could achieve the same improvement in droplet retention using a single-nozzle system that can be easily adapted to existing sprayers. Instead of giving the droplets of pesticide an electric charge, they coat each droplet with a vanishingly thin layer of an oily material.

    In their new study, they conducted lab experiments with high-speed cameras. When they sprayed droplets with no special treatment onto a water-repelling (hydrophobic) surface similar to that of many plant leaves, the droplets initially spread out into a pancake-like disk, then rebounded back into a ball and bounced away. But when the researchers coated the surface of the droplets with a tiny amount of oil — making up less than 1 percent of the droplet’s liquid — the droplets spread out and then stayed put. The treatment improved the droplets’ “stickiness” by as much as a hundredfold.

    “When these droplets are hitting the surface and as they expand, they form this oil ring that essentially pins the droplet to the surface,” Rufer says. The researchers tried a wide variety of conditions, he says, explaining that they conducted hundreds of experiments, “with different impact velocities, different droplet sizes, different angles of inclination, all the things that fully characterize this phenomenon.” Though different oils varied in their effectiveness, all of them were effective. “Regardless of the impact velocity and the oils, we saw that the rebound height was significantly lower,” he says.

    The effect works with remarkably small amounts of oil. In their initial tests they used 1 percent oil relative to the water, then they tried 0.1 percent, and even 0.01 percent. The improvement in droplets sticking to the surface continued at 0.1 percent, but began to break down beyond that. “Basically, this oil film acts as a way to trap that droplet on the surface, because oil is very attracted to the surface and sort of holds the water in place,” Rufer says.

    In the researchers’ initial tests they used soybean oil for the coating, figuring this would be a familiar material for the farmers they were working with, many of whom were growing soybeans. But it turned out that though they were producing the beans, the oil was not part of their usual supply chain for use on the farm. In further tests, the researchers found that several chemicals that farmers were already routinely using in their spraying, called surfactants and adjuvants, could be used instead, and that some of these provided the same benefits in keeping the droplets stuck on the leaves.

    “That way,” Varanasi says, “we’re not introducing a new chemical or changed chemistries into their field, but they’re using things they’ve known for a long time.”

    Varanasi and Jayaprakash formed a company called AgZen to commercialize the system. In order to prove how much their coating system improves the amount of spray that stays on the plant, they first had to develop a system to monitor spraying in real time. That system, which they call RealCoverage, has been deployed on farms ranging in size from a few dozen acres to hundreds of thousands of acres, and across many different crop types, and has saved farmers 30 to 50 percent on their pesticide expenditures, just by improving the controls on the existing sprays. That system is being deployed to 920,000 acres of crops in 2025, the company says, including some in California, Texas, the Midwest, France, and Italy. Adding the cloaking system using new nozzles, the researchers say, should yield at least another doubling of efficiency.

    “You could give back a billion dollars to U.S. growers if you just saved 6 percent of their pesticide budget,” says Jayaprakash, lead author of the research paper and CEO of AgZen. “In the lab we got 300 percent of extra product on the plant. So that means we could get orders of magnitude reductions in the amount of pesticides that farmers are spraying.”

    Farmers had already been using these surfactant and adjuvant chemicals as a way to enhance spraying effectiveness, but they were mixing them into a water solution. For the chemicals to have any effect, farmers had to use much more of these materials, risking causing burns to the plants. The new coating system reduces the amount of these materials needed, while improving their effectiveness.

    In field tests conducted by AgZen, “we doubled the amount of product on kale and soybeans just by changing where the adjuvant was,” from mixed in to being a coating, Jayaprakash says. It’s convenient for farmers because “all they’re doing is changing their nozzle. They’re getting all their existing chemicals to work better, and they’re getting more product on the plant.”

    And it’s not just for pesticides. “The really cool thing is this is useful for every chemistry that’s going on the leaf, be it an insecticide, a herbicide, a fungicide, or foliar nutrition,” Varanasi says. This year, they plan to introduce the new spray system on about 30,000 acres of cropland.

    Varanasi says that with projected world population growth, “the amount of food production has got to double, and we are limited in so many resources, for example we cannot double the arable land. … This means that every acre we currently farm must become more efficient and able to do more with less.” These improved spraying technologies, for both monitoring the spraying and coating the droplets, Varanasi says, “I think is fundamentally changing agriculture.”

    AgZen has recently raised $10 million in venture financing to support rapid commercial deployment of these technologies that can improve the control of chemical inputs into agriculture. “The knowledge we are gathering from every leaf, combined with our expertise in interfacial science and fluid mechanics, is giving us unparalleled insights into how chemicals are used and developed — and it’s clear that we can deliver value across the entire agrochemical supply chain,” Varanasi says. “Our mission is to use these technologies to deliver improved outcomes and reduced costs for the ag industry.”
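    The lab characterization described above boils down to comparing how far droplets rebound at different oil fractions. The sketch below shows the kind of summary such an analysis produces, grouping trials by oil fraction and reporting mean rebound height relative to uncoated droplets; every number in it is a made-up placeholder, not data from the Soft Matter paper.

    ```python
    # Hypothetical summary of droplet-rebound trials: normalized rebound height
    # (1.0 = uncoated control) versus the oil fraction used for the coating.
    # All values are invented for illustration.
    from statistics import mean

    # (oil fraction of droplet volume, rebound height relative to uncoated) per trial
    trials = [
        (0.0,    1.00), (0.0,    0.97), (0.0,    1.03),   # uncoated controls
        (0.01,   0.05), (0.01,   0.08),                   # 1 percent oil
        (0.001,  0.07), (0.001,  0.10),                   # 0.1 percent oil
        (0.0001, 0.55), (0.0001, 0.62),                   # 0.01 percent: effect fades
    ]

    by_fraction = {}
    for fraction, rebound in trials:
        by_fraction.setdefault(fraction, []).append(rebound)

    for fraction in sorted(by_fraction, reverse=True):
        avg = mean(by_fraction[fraction])
        print(f"oil fraction {fraction:.2%}: mean rebound {avg:.2f}x uncoated")
    ```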

  • Study: Climate change will reduce the number of satellites that can safely orbit in space

    MIT aerospace engineers have found that greenhouse gas emissions are changing the environment of near-Earth space in ways that, over time, will reduce the number of satellites that can sustainably operate there.

    In a study appearing today in Nature Sustainability, the researchers report that carbon dioxide and other greenhouse gases can cause the upper atmosphere to shrink. An atmospheric layer of special interest is the thermosphere, where the International Space Station and most satellites orbit today. When the thermosphere contracts, the decreasing density reduces atmospheric drag — a force that pulls old satellites and other debris down to altitudes where they will encounter air molecules and burn up.

    Less drag therefore means extended lifetimes for space junk, which will litter sought-after regions for decades and increase the potential for collisions in orbit.

    The team carried out simulations of how carbon emissions affect the upper atmosphere and orbital dynamics, in order to estimate the “satellite carrying capacity” of low Earth orbit. These simulations predict that by the year 2100, the carrying capacity of the most popular regions could be reduced by 50 to 66 percent due to the effects of greenhouse gases.

    “Our behavior with greenhouse gases here on Earth over the past 100 years is having an effect on how we operate satellites over the next 100 years,” says study author Richard Linares, associate professor in MIT’s Department of Aeronautics and Astronautics (AeroAstro).

    “The upper atmosphere is in a fragile state as climate change disrupts the status quo,” adds lead author William Parker, a graduate student in AeroAstro. “At the same time, there’s been a massive increase in the number of satellites launched, especially for delivering broadband internet from space. If we don’t manage this activity carefully and work to reduce our emissions, space could become too crowded, leading to more collisions and debris.”

    The study includes co-author Matthew Brown of the University of Birmingham.

    Sky fall

    The thermosphere naturally contracts and expands every 11 years in response to the sun’s regular activity cycle. When the sun’s activity is low, the Earth receives less radiation, and its outermost atmosphere temporarily cools and contracts before expanding again during solar maximum.

    In the 1990s, scientists wondered what response the thermosphere might have to greenhouse gases. Their preliminary modeling showed that, while the gases trap heat in the lower atmosphere, where we experience global warming and weather, the same gases radiate heat at much higher altitudes, effectively cooling the thermosphere. With this cooling, the researchers predicted that the thermosphere should shrink, reducing atmospheric density at high altitudes.

    In the last decade, scientists have been able to measure changes in drag on satellites, which has provided some evidence that the thermosphere is contracting in response to something more than the sun’s natural, 11-year cycle.

    “The sky is quite literally falling — just at a rate that’s on the scale of decades,” Parker says. “And we can see this by how the drag on our satellites is changing.”

    The MIT team wondered how that response will affect the number of satellites that can safely operate in Earth’s orbit. Today, there are over 10,000 satellites drifting through low Earth orbit, which describes the region of space up to 1,200 miles (2,000 kilometers) from Earth’s surface. These satellites deliver essential services, including internet, communications, navigation, weather forecasting, and banking. The satellite population has ballooned in recent years, requiring operators to perform regular collision-avoidance maneuvers to keep safe. Any collisions that do occur can generate debris that remains in orbit for decades or centuries, increasing the chance for follow-on collisions with satellites, both old and new.

    “More satellites have been launched in the last five years than in the preceding 60 years combined,” Parker says. “One of the key things we’re trying to understand is whether the path we’re on today is sustainable.”

    Crowded shells

    In their new study, the researchers simulated different greenhouse gas emissions scenarios over the next century to investigate impacts on atmospheric density and drag. For each “shell,” or altitude range of interest, they then modeled the orbital dynamics and the risk of satellite collisions based on the number of objects within the shell. They used this approach to identify each shell’s “carrying capacity” — a term that is typically used in studies of ecology to describe the number of individuals that an ecosystem can support.

    “We’re taking that carrying capacity idea and translating it to this space sustainability problem, to understand how many satellites low Earth orbit can sustain,” Parker explains.

    The team compared several scenarios: one in which greenhouse gas concentrations remain at their level from the year 2000 and others where emissions change according to the Intergovernmental Panel on Climate Change (IPCC) Shared Socioeconomic Pathways (SSPs). They found that scenarios with continuing increases in emissions would lead to a significantly reduced carrying capacity throughout low Earth orbit.

    In particular, the team estimates that by the end of this century, the number of satellites safely accommodated between the altitudes of 200 and 1,000 kilometers could be reduced by 50 to 66 percent compared with a scenario in which emissions remain at year-2000 levels. If satellite capacity is exceeded, even in a local region, the researchers predict that the region will experience a “runaway instability,” or a cascade of collisions that would create so much debris that satellites could no longer safely operate there.

    Their predictions forecast out to the year 2100, but the team says that certain shells in the atmosphere today are already crowding up with satellites, particularly from recent “megaconstellations” such as SpaceX’s Starlink, which comprises fleets of thousands of small internet satellites.

    “The megaconstellation is a new trend, and we’re showing that because of climate change, we’re going to have a reduced capacity in orbit,” Linares says. “And in local regions, we’re close to approaching this capacity value today.”

    “We rely on the atmosphere to clean up our debris. If the atmosphere is changing, then the debris environment will change too,” Parker adds. “We show the long-term outlook on orbital debris is critically dependent on curbing our greenhouse gas emissions.”

    This research is supported, in part, by the U.S. National Science Foundation, the U.S. Air Force, and the U.K. Natural Environment Research Council.
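    The physical chain of reasoning above, thermospheric cooling reduces density, which reduces drag, which lengthens debris lifetimes, can be made concrete with the standard drag relation, in which deceleration is proportional to local air density. The sketch below uses that textbook scaling with placeholder density values; it is not the orbital-dynamics model used in the Nature Sustainability study.

    ```python
    # Rough scaling sketch: drag on a satellite is proportional to local air
    # density, so a cooler, contracted thermosphere means less drag and, to first
    # order, proportionally longer orbital-decay lifetimes for debris.
    # Density values below are illustrative placeholders.
    import math

    MU = 3.986e14          # Earth's gravitational parameter, m^3/s^2
    R_EARTH = 6_371e3      # mean Earth radius, m

    def drag_acceleration(density, altitude_m, cd=2.2, area_m2=1.0, mass_kg=100.0):
        """Drag deceleration (m/s^2) on a satellite in a circular orbit."""
        v = math.sqrt(MU / (R_EARTH + altitude_m))   # circular orbital speed
        return 0.5 * density * v**2 * cd * area_m2 / mass_kg

    altitude = 500e3
    rho_today = 1e-12       # kg/m^3, placeholder thermospheric density
    rho_cooled = 0.5e-12    # placeholder: density after greenhouse-gas cooling

    a_now = drag_acceleration(rho_today, altitude)
    a_cooled = drag_acceleration(rho_cooled, altitude)
    print(f"drag today:        {a_now:.2e} m/s^2")
    print(f"drag after cooling: {a_cooled:.2e} m/s^2")
    # Decay lifetime scales roughly with 1/density, so halving the density
    # roughly doubles how long uncontrolled debris stays in orbit.
    print(f"approximate debris-lifetime ratio: {rho_today / rho_cooled:.1f}x longer")
    ```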

  • Study: The ozone hole is healing, thanks to global reduction of CFCs

    A new MIT-led study confirms that the Antarctic ozone layer is healing, as a direct result of global efforts to reduce ozone-depleting substances.

    Scientists including the MIT team have observed signs of ozone recovery in the past. But the new study is the first to show, with high statistical confidence, that this recovery is due primarily to the reduction of ozone-depleting substances, versus other influences such as natural weather variability or increased greenhouse gas emissions to the stratosphere.

    “There’s been a lot of qualitative evidence showing that the Antarctic ozone hole is getting better. This is really the first study that has quantified confidence in the recovery of the ozone hole,” says study author Susan Solomon, the Lee and Geraldine Martin Professor of Environmental Studies and Chemistry. “The conclusion is, with 95 percent confidence, it is recovering. Which is awesome. And it shows we can actually solve environmental problems.”

    The new study appears today in the journal Nature. Graduate student Peidong Wang from the Solomon group in the Department of Earth, Atmospheric and Planetary Sciences (EAPS) is the lead author. His co-authors include Solomon and EAPS Research Scientist Kane Stone, along with collaborators from multiple other institutions.

    Roots of ozone recovery

    Within the Earth’s stratosphere, ozone is a naturally occurring gas that acts as a sort of sunscreen, protecting the planet from the sun’s harmful ultraviolet radiation. In 1985, scientists discovered a “hole” in the ozone layer over Antarctica that opened up during the austral spring, between September and December. This seasonal ozone depletion was suddenly allowing UV rays to filter down to the surface, leading to skin cancer and other adverse health effects.

    In 1986, Solomon, who was then working at the National Oceanic and Atmospheric Administration (NOAA), led expeditions to the Antarctic, where she and her colleagues gathered evidence that quickly confirmed the ozone hole’s cause: chlorofluorocarbons, or CFCs — chemicals that were then used in refrigeration, air conditioning, insulation, and aerosol propellants. When CFCs drift up into the stratosphere, they can break down ozone under certain seasonal conditions.

    The following year, those revelations led to the drafting of the Montreal Protocol — an international treaty that aimed to phase out the production of CFCs and other ozone-depleting substances, in hopes of healing the ozone hole.

    In 2016, Solomon led a study reporting key signs of ozone recovery. The ozone hole seemed to be shrinking with each year, especially in September, the time of year when it opens up. Still, these observations were qualitative. The study showed large uncertainties regarding how much of this recovery was due to concerted efforts to reduce ozone-depleting substances, or whether the shrinking ozone hole was a result of other “forcings,” such as year-to-year weather variability from El Niño, La Niña, and the polar vortex.

    “While detecting a statistically significant increase in ozone is relatively straightforward, attributing these changes to specific forcings is more challenging,” says Wang.

    Anthropogenic healing

    In their new study, the MIT team took a quantitative approach to identify the cause of Antarctic ozone recovery. The researchers borrowed a method from the climate change community, known as “fingerprinting,” which was pioneered by Klaus Hasselmann, who was awarded the Nobel Prize in Physics in 2021 for the technique. In the context of climate, fingerprinting refers to a method that isolates the influence of specific climate factors, apart from natural, meteorological noise. Hasselmann applied fingerprinting to identify, confirm, and quantify the anthropogenic fingerprint of climate change.

    Solomon and Wang looked to apply the fingerprinting method to identify another anthropogenic signal: the effect of human reductions in ozone-depleting substances on the recovery of the ozone hole.

    “The atmosphere has really chaotic variability within it,” Solomon says. “What we’re trying to detect is the emerging signal of ozone recovery against that kind of variability, which also occurs in the stratosphere.”

    The researchers started with simulations of the Earth’s atmosphere and generated multiple “parallel worlds,” or simulations of the same global atmosphere, under different starting conditions. For instance, they ran simulations under conditions that assumed no increase in greenhouse gases or ozone-depleting substances. Under these conditions, any changes in ozone should be the result of natural weather variability. They also ran simulations with only increasing greenhouse gases, as well as only decreasing ozone-depleting substances.

    They compared these simulations to observe how ozone in the Antarctic stratosphere changed, both with season and across different altitudes, in response to different starting conditions. From these simulations, they mapped out the times and altitudes where ozone recovered from month to month, over several decades, and identified a key “fingerprint,” or pattern, of ozone recovery that was specifically due to conditions of declining ozone-depleting substances.

    The team then looked for this fingerprint in actual satellite observations of the Antarctic ozone hole from 2005 to the present day. They found that, over time, the fingerprint that they identified in simulations became clearer and clearer in observations. In 2018, the fingerprint was at its strongest, and the team could say with 95 percent confidence that ozone recovery was due mainly to reductions in ozone-depleting substances.

    “After 15 years of observational records, we see this signal to noise with 95 percent confidence, suggesting there’s only a very small chance that the observed pattern similarity can be explained by variability noise,” Wang says. “This gives us confidence in the fingerprint. It also gives us confidence that we can solve environmental problems. What we can learn from ozone studies is how different countries can swiftly follow these treaties to decrease emissions.”

    If the trend continues, and the fingerprint of ozone recovery grows stronger, Solomon anticipates that soon there will be a year, here and there, when the ozone layer stays entirely intact. And eventually, the ozone hole should stay shut for good.

    “By something like 2035, we might see a year when there’s no ozone hole depletion at all in the Antarctic. And that will be very exciting for me,” she says. “And some of you will see the ozone hole go away completely in your lifetimes. And people did that.”

    This research was supported, in part, by the National Science Foundation and NASA.
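    The fingerprinting logic described above can be sketched in a few lines: project each year of observations onto the model-derived recovery pattern, fit a trend to that projection, and compare the trend against the spread of trends seen in control simulations that contain only natural variability. The Python sketch below uses random stand-in arrays for the fingerprint, observations, and control runs; it illustrates the detection arithmetic only and is not the study's model output or satellite data.

    ```python
    # Minimal sketch of pattern-based "fingerprint" detection on synthetic data.
    import numpy as np

    rng = np.random.default_rng(1)
    n_cells = 60                      # e.g., month x altitude grid cells, flattened
    years = np.arange(2005, 2024)

    fingerprint = rng.normal(size=n_cells)
    fingerprint /= np.linalg.norm(fingerprint)       # unit-norm recovery pattern

    # Synthetic observations: the fingerprint slowly emerges out of noise.
    signal_strength = 0.1 * (years - years[0])
    obs = signal_strength[:, None] * fingerprint + rng.normal(0, 1.0, (len(years), n_cells))

    # Control runs: natural variability only, no emerging signal.
    controls = rng.normal(0, 1.0, (500, len(years), n_cells))

    def trend_of_projection(fields):
        """Least-squares trend of each year's projection onto the fingerprint."""
        projection = fields @ fingerprint
        return np.polyfit(years, projection, 1)[0]

    obs_trend = trend_of_projection(obs)
    null_trends = np.array([trend_of_projection(run) for run in controls])
    signal_to_noise = obs_trend / null_trends.std()

    print(f"observed trend in the fingerprint projection: {obs_trend:.3f} per year")
    print(f"signal-to-noise ratio vs. natural variability: {signal_to_noise:.1f}")
    print("fraction of control runs with a larger trend:",
          float((null_trends >= obs_trend).mean()))
    ```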