More stories

  • Workshop explores new advanced materials for a growing world

It is clear that humankind needs ever more resources, from computing power to steel and concrete, to meet the growing demands associated with data centers, infrastructure, and other mainstays of society. New, cost-effective approaches for producing the advanced materials key to that growth were the focus of a two-day workshop at MIT on March 11 and 12.

A theme throughout the event was the importance of collaboration between and within universities and industries. The goal is to “develop concepts that everybody can use together, instead of everybody doing something different and then trying to sort it out later at great cost,” said Lionel Kimerling, the Thomas Lord Professor of Materials Science and Engineering at MIT.

The workshop was produced by MIT’s Materials Research Laboratory (MRL), which has an industry collegium, and MIT’s Industrial Liaison Program. The program included an address by Javier Sanfelix, lead of the Advanced Materials Team for the European Union. Sanfelix gave an overview of the EU’s strategy for developing advanced materials, which he said are “key enablers of the green and digital transition for European industry.”

That strategy has already led to several initiatives. These include a materials commons, or shared digital infrastructure for the design and development of advanced materials, and an advanced materials academy for educating new innovators and designers. Sanfelix also described an Advanced Materials Act for 2026 that aims to put in place a legislative framework that supports the entire innovation cycle.

Sanfelix was visiting MIT to learn more about how the Institute is approaching the future of advanced materials. “We see MIT as a leader worldwide in technology, especially on materials, and there is a lot to learn about [your] industry collaborations and technology transfer with industry,” he said.

Innovations in steel and concrete

The workshop began with talks about innovations involving two of the most common human-made materials in the world: steel and cement. We’ll need more of both, but we must reckon with the huge amounts of energy required to produce them and their impact on the environment due to greenhouse-gas emissions during that production.

One way to address our need for more steel is to reuse what we have, said C. Cem Tasan, the POSCO Associate Professor of Metallurgy in the Department of Materials Science and Engineering (DMSE) and director of the Materials Research Laboratory.

But most existing approaches to recycling scrap steel involve melting the metal. “And whenever you are dealing with molten metal, everything goes up, from energy use to carbon-dioxide emissions. Life is more difficult,” Tasan said.

The question he and his team asked is whether they could reuse scrap steel without melting it. Could they consolidate solid scraps, then roll them together using existing equipment to create new sheet metal? From the materials-science perspective, Tasan said, that shouldn’t work, for several reasons.

But it does. “We’ve demonstrated the potential in two papers and two patent applications already,” he said. Tasan noted that the approach focuses on high-quality manufacturing scrap. “This is not junkyard scrap,” he said.

Tasan went on to explain how and why the new process works from a materials-science perspective, then gave examples of how the recycled steel could be used. “My favorite example is the stainless-steel countertops in restaurants. Do you really need the mechanical performance of stainless steel there?” You could use the recycled steel instead.

Hessam Azarijafari addressed another common, indispensable material: concrete. This year marks the 16th anniversary of the MIT Concrete Sustainability Hub (CSHub), which began when a set of industry leaders and politicians reached out to MIT to learn more about the benefits and environmental impacts of concrete.

The hub’s work now centers on three main themes: working toward a carbon-neutral concrete industry; developing sustainable infrastructure, with a focus on pavement; and making our cities more resilient to natural hazards through investment in stronger, cooler construction.

Azarijafari, the deputy director of the CSHub, went on to give several examples of research results that have come out of the hub. These include many models to identify different pathways to decarbonize the cement and concrete sector. Other work involves pavements, which the general public thinks of as inert, Azarijafari said. “But we have [created] a state-of-the-art model that can assess interactions between pavement and vehicles.” It turns out that pavement surface characteristics and structural performance “can influence excess fuel consumption by inducing an additional rolling resistance.”

Azarijafari emphasized the importance of working closely with policymakers and industry. That engagement is key “to sharing the lessons that we have learned so far.”

Toward a resource-efficient microchip industry

Consider the following: In 2020 the number of cell phones, GPS units, and other devices connected to the “cloud,” or large data centers, exceeded 50 billion. And data-center traffic, in turn, is scaling by 1,000 times every 10 years.

But all of that computation takes energy. And “all of it has to happen at a constant cost of energy, because the gross domestic product isn’t changing at that rate,” said Kimerling. The solution is to either produce much more energy, or make information technology much more energy-efficient.
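The scale of the efficiency challenge Kimerling describes follows from a little back-of-the-envelope arithmetic. This sketch only uses the 1,000x-per-decade traffic figure quoted above; the flat-energy-budget assumption is the premise of his argument, not a measured number:

```python
# If data-center traffic grows 1,000x every 10 years while the total energy
# budget stays roughly flat, then energy efficiency (useful work per joule)
# must grow by the same 1,000x factor over the decade.
growth_per_decade = 1000

# Required year-over-year efficiency improvement: the tenth root of 1,000
annual_factor = growth_per_decade ** (1 / 10)

print(f"Efficiency must improve about {annual_factor:.2f}x every year")
```

That works out to roughly a doubling of efficiency every year, which is the kind of gap the photonics work described below is meant to help close.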
Several speakers at the workshop focused on the materials and components behind the latter.

Key to everything they discussed: adding photonics, or using light to carry information, to the well-established electronics behind today’s microchips. “The bottom line is that integrating photonics with electronics in the same package is the transistor for the 21st century. If we can’t figure out how to do that, then we’re not going to be able to scale forward,” said Kimerling, who is director of the MIT Microphotonics Center.

MIT has long been a leader in the integration of photonics with electronics. For example, Kimerling described the Integrated Photonics System Roadmap – International (IPSR-I), a global network of more than 400 industrial and R&D partners working together to define and create photonic integrated circuit technology. IPSR-I is led by the MIT Microphotonics Center and PhotonDelta. Kimerling began the organization in 1997.

Last year IPSR-I released its latest roadmap for photonics-electronics integration, “which outlines a clear way forward and specifies an innovative learning curve for scaling performance and applications for the next 15 years,” Kimerling said.

Another major MIT program focused on the future of the microchip industry is FUTUR-IC, a new global alliance for sustainable microchip manufacturing. Begun last year, FUTUR-IC is funded by the National Science Foundation.

“Our goal is to build a resource-efficient microchip industry value chain,” said Anuradha Murthy Agarwal, a principal research scientist at the MRL and leader of FUTUR-IC. That includes all of the elements that go into manufacturing future microchips, including workforce education and techniques to mitigate potential environmental effects.

FUTUR-IC is also focused on electronic-photonic integration. “My mantra is to use electronics for computation, [and] shift to photonics for communication to bring this energy crisis in control,” Agarwal said.

But integrating electronic chips with photonic chips is not easy. To that end, Agarwal described some of the challenges involved. For example, it is currently difficult to connect the optical fibers carrying communications to a microchip. That’s because the alignment between the two must be almost perfect or the light will disperse. And the dimensions involved are minuscule: an optical fiber has a diameter of only millionths of a meter. As a result, today each connection must be actively tested with a laser to ensure that the light will come through.

That said, Agarwal went on to describe a new coupler between the fiber and chip that could solve the problem and allow robots to passively assemble the chips (no laser needed). The work, conducted by researchers including MIT graduate student Drew Wenninger, Agarwal, and Kimerling, has been patented and is reported in two papers. A second recent breakthrough in this area, involving a printed micro-reflector, was described by Juejun “JJ” Hu, John F. Elliott Professor of Materials Science and Engineering.

FUTUR-IC is also leading educational efforts for training a future workforce, as well as techniques for detecting — and potentially destroying — the perfluoroalkyls (PFAS, or “forever chemicals”) released during microchip manufacturing. FUTUR-IC educational efforts, including virtual reality and game-based learning, were described by Sajan Saini, education director for FUTUR-IC. PFAS detection and remediation were discussed by Aristide Gumyusenge, an assistant professor in DMSE, and Jesus Castro Esteban, a postdoc in the Department of Chemistry.

Other presenters at the workshop included Antoine Allanore, the Heather N. Lechtman Professor of Materials Science and Engineering; Katrin Daehn, a postdoc in the Allanore lab; Xuanhe Zhao, the Uncas (1923) and Helen Whitaker Professor in the Department of Mechanical Engineering; Richard Otte, CEO of Promex; and Carl Thompson, the Stavros V. Salapatas Professor in Materials Science and Engineering.

  • Study: Burning heavy fuel oil with scrubbers is the best available option for bulk maritime shipping

When the International Maritime Organization enacted a mandatory cap on the sulfur content of marine fuels in 2020, with an eye toward reducing harmful environmental and health impacts, it left shipping companies with three main options.

They could burn low-sulfur fossil fuels, like marine gas oil, or install cleaning systems to remove sulfur from the exhaust gas produced by burning heavy fuel oil. Biofuels with lower sulfur content offer a third alternative, though their limited availability makes them a less feasible option.

While installing exhaust gas cleaning systems, known as scrubbers, is the most feasible and cost-effective option, there has been a great deal of uncertainty among firms, policymakers, and scientists as to how “green” these scrubbers are.

Through a novel lifecycle assessment, researchers from MIT, Georgia Tech, and elsewhere have now found that burning heavy fuel oil with scrubbers in the open ocean can match or surpass using low-sulfur fuels, when a wide variety of environmental factors is considered.

The scientists combined data on the production and operation of scrubbers and fuels with emissions measurements taken onboard an oceangoing cargo ship. They found that, when the entire supply chain is considered, burning heavy fuel oil with scrubbers was the least harmful option in terms of nearly all 10 environmental impact factors they studied, such as greenhouse gas emissions, terrestrial acidification, and ozone formation.

“In our collaboration with Oldendorff Carriers to broadly explore reducing the environmental impact of shipping, this study of scrubbers turned out to be an unexpectedly deep and important transitional issue,” says Neil Gershenfeld, an MIT professor, director of the Center for Bits and Atoms (CBA), and senior author of the study.

“Claims about environmental hazards and policies to mitigate them should be backed by science. You need to see the data, be objective, and design studies that take into account the full picture to be able to compare different options from an apples-to-apples perspective,” adds lead author Patricia Stathatou, an assistant professor at Georgia Tech, who began this study as a postdoc in the CBA.

Stathatou is joined on the paper by Michael Triantafyllou, the Henry L. and Grace Doherty Professor in Ocean Science and Engineering, and others at the National Technical University of Athens in Greece and the maritime shipping firm Oldendorff Carriers. The research appears today in Environmental Science and Technology.

Slashing sulfur emissions

Heavy fuel oil, traditionally burned by bulk carriers that make up about 30 percent of the global maritime fleet, usually has a sulfur content around 2 to 3 percent. This is far higher than the International Maritime Organization’s 2020 cap of 0.5 percent in most areas of the ocean and 0.1 percent in areas near population centers or environmentally sensitive regions. Sulfur oxide emissions contribute to air pollution and acid rain, and can damage the human respiratory system.

In 2018, fewer than 1,000 vessels employed scrubbers. After the cap went into place, higher prices of low-sulfur fossil fuels and limited availability of alternative fuels led many firms to install scrubbers so they could keep burning heavy fuel oil. Today, more than 5,800 vessels utilize scrubbers, the majority of which are wet, open-loop scrubbers.

“Scrubbers are a very mature technology. They have traditionally been used for decades in land-based applications like power plants to remove pollutants,” Stathatou says.

A wet, open-loop marine scrubber is a huge, metal, vertical tank installed in a ship’s exhaust stack, above the engines. Inside, seawater drawn from the ocean is sprayed through a series of nozzles downward to wash the hot exhaust gases as they exit the engines. The seawater interacts with sulfur dioxide in the exhaust, converting it to sulfates — water-soluble, environmentally benign compounds that naturally occur in seawater. The washwater is released back into the ocean, while the cleaned exhaust escapes to the atmosphere with little to no sulfur dioxide emissions.

But the acidic washwater can contain other combustion byproducts like heavy metals, so scientists wondered whether scrubbers were comparable, from a holistic environmental point of view, to burning low-sulfur fuels. Several studies have explored the toxicity of washwater and fuel-system pollution, but none painted a full picture. The researchers set out to fill that scientific gap.

A “well-to-wake” analysis

The team conducted a lifecycle assessment using a global environmental database on the production and transport of fossil fuels, such as heavy fuel oil, marine gas oil, and very-low-sulfur fuel oil. Considering the entire lifecycle of each fuel is key, since producing low-sulfur fuel requires extra processing steps in the refinery, causing additional emissions of greenhouse gases and particulate matter.

“If we just look at everything that happens before the fuel is bunkered onboard the vessel, heavy fuel oil is significantly more low-impact, environmentally, than low-sulfur fuels,” Stathatou says.

The researchers also collaborated with a scrubber manufacturer to obtain detailed information on all materials, production processes, and transportation steps involved in marine scrubber fabrication and installation. “If you consider that the scrubber has a lifetime of about 20 years, the environmental impacts of producing the scrubber over its lifetime are negligible compared to producing heavy fuel oil,” she adds.

For the final piece, Stathatou spent a week onboard a bulk carrier vessel in China to measure emissions and gather seawater and washwater samples. The ship burned heavy fuel oil with a scrubber and low-sulfur fuels under similar ocean conditions and engine settings. Collecting these onboard data was the most challenging part of the study. “All the safety gear, combined with the heat and the noise from the engines on a moving ship, was very overwhelming,” she says.

Their results showed that scrubbers reduce sulfur dioxide emissions by 97 percent, putting heavy fuel oil on par with low-sulfur fuels by that measure. The researchers saw similar trends for emissions of other pollutants like carbon monoxide and nitrous oxide.

In addition, they tested washwater samples for more than 60 chemical parameters, including nitrogen, phosphorus, polycyclic aromatic hydrocarbons, and 23 metals. The concentrations of chemicals regulated by the IMO were far below the organization’s requirements. For unregulated chemicals, the researchers compared the concentrations to the strictest limits for industrial effluents from the U.S. Environmental Protection Agency and European Union. Most chemical concentrations were at least an order of magnitude below these requirements.

In addition, since washwater is diluted thousands of times as it is dispersed by a moving vessel, the concentrations of such chemicals would be even lower in the open ocean. These findings suggest that using scrubbers with heavy fuel oil can be considered equal to, or more environmentally friendly than, low-sulfur fuels across many of the impact categories the researchers studied.

“This study demonstrates the scientific complexity of the waste stream of scrubbers. Having finally conducted a multiyear, comprehensive, and peer-reviewed study, commonly held fears and assumptions are now put to rest,” says Scott Bergeron, managing director at Oldendorff Carriers and co-author of the study.

“This first-of-its-kind study on a well-to-wake basis provides very valuable input to ongoing discussion at the IMO,” adds Thomas Klenum, executive vice president of innovation and regulatory affairs at the Liberian Registry, emphasizing the need “for regulatory decisions to be made based on scientific studies providing factual data and conclusions.”

Ultimately, this study shows the importance of incorporating lifecycle assessments into future environmental impact reduction policies, Stathatou says. “There is all this discussion about switching to alternative fuels in the future, but how green are these fuels? We must do our due diligence to compare them equally with existing solutions to see the costs and benefits,” she adds.

This study was supported, in part, by Oldendorff Carriers.
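The “well-to-wake” accounting in this study can be sketched as a toy comparison. All of the impact numbers below are invented for illustration (the real assessment used a global lifecycle database plus onboard measurements), but the structure, summing impacts across fuel production, equipment manufacturing, and operation, is the core idea:

```python
# Toy well-to-wake bookkeeping: total environmental impact of each option is
# the sum of impacts from every lifecycle stage. All numbers are invented.
options = {
    "HFO + scrubber": {
        "fuel production": 1.0,          # heavy fuel oil needs less refining
        "scrubber manufacturing": 0.05,  # small when spread over ~20 years
        "operation": 2.0,
    },
    "low-sulfur fuel": {
        "fuel production": 1.8,          # extra refinery steps add emissions
        "scrubber manufacturing": 0.0,
        "operation": 2.0,
    },
}

def well_to_wake(stages):
    """Sum the per-stage impacts into one well-to-wake total."""
    return sum(stages.values())

for name, stages in options.items():
    print(f"{name}: total impact = {well_to_wake(stages):.2f}")
```

In the real study this summation is done separately for each of the 10 impact categories (greenhouse gases, acidification, ozone formation, and so on), which is why the fuel-production stage can tip the comparison even when operation-stage emissions are similar.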

  • MIT Maritime Consortium sets sail

Around 11 billion tons of goods, or about 1.5 tons per person worldwide, are transported by sea each year, representing about 90 percent of global trade by volume. Internationally, the merchant shipping fleet numbers around 110,000 vessels. These ships, and the ports that service them, are significant contributors to the local and global economy — and they’re significant contributors to greenhouse gas emissions.

A new consortium, formalized in a signing ceremony at MIT last week, aims to address climate-harming emissions in the maritime shipping industry, while supporting efforts for environmentally friendly operation in compliance with the decarbonization goals set by the International Maritime Organization.

“This is a timely collaboration with key stakeholders from the maritime industry with a very bold and interdisciplinary research agenda that will establish new technologies and evidence-based standards,” says Themis Sapsis, the William Koch Professor of Marine Technology at MIT and the director of MIT’s Center for Ocean Engineering. “It aims to bring the best from MIT in key areas for commercial shipping, such as nuclear technology for commercial settings, autonomous operation and AI methods, improved hydrodynamics and ship design, cybersecurity, and manufacturing.”

Co-led by Sapsis and Fotini Christia, the Ford International Professor of the Social Sciences, director of the Institute for Data, Systems, and Society (IDSS), and director of the MIT Sociotechnical Systems Research Center, the newly launched MIT Maritime Consortium (MC) brings together MIT collaborators from across campus, including the Center for Ocean Engineering, which is housed in the Department of Mechanical Engineering; IDSS, which is housed in the MIT Schwarzman College of Computing; the departments of Nuclear Science and Engineering and Civil and Environmental Engineering; MIT Sea Grant; and others, along with a national and international community of industry experts.

The Maritime Consortium’s founding members are the American Bureau of Shipping (ABS), Capital Clean Energy Carriers Corp., and HD Korea Shipbuilding and Offshore Engineering. Innovation members are Foresight-Group, Navios Maritime Partners L.P., Singapore Maritime Institute, and Dorian LPG.

“The challenges the maritime industry faces are challenges that no individual company or organization can address alone,” says Christia. “The solution involves almost every discipline from the School of Engineering, as well as AI and data-driven algorithms, and policy and regulation — it’s a true MIT problem.”

Researchers will explore new designs for nuclear systems consistent with the techno-economic needs and constraints of commercial shipping, the economic and environmental feasibility of alternative fuels, new data-driven algorithms and rigorous evaluation criteria for autonomous platforms in the maritime space, cyber-physical situational awareness and anomaly detection, and 3D printing technologies for onboard manufacturing. Collaborators will also advise on research priorities toward evidence-based standards related to MIT presidential priorities around climate, sustainability, and AI.

MIT has been a leading center of ship research and design for over a century. It is widely recognized for contributions to hydrodynamics, ship structural mechanics and dynamics, propeller design, and overall ship design, and for its unique educational program for U.S. Navy officers, the Naval Construction and Engineering Program. Research today is at the forefront of ocean science and engineering, with significant efforts in fluid mechanics and hydrodynamics, acoustics, offshore mechanics, marine robotics and sensors, and ocean sensing and forecasting. The consortium’s academic home at MIT also opens the door to cross-departmental collaboration across the Institute.

The MC will launch multiple research projects designed to tackle challenges from a variety of angles, all united by cutting-edge data analysis and computation techniques. Collaborators will research new designs and methods that improve efficiency and reduce greenhouse gas emissions, explore the feasibility of alternative fuels, and advance data-driven decision-making, manufacturing and materials, hydrodynamic performance, and cybersecurity.

“This consortium brings a powerful collection of significant companies that, together, has the potential to be a global shipping shaper in itself,” says Christopher J. Wiernicki SM ’85, chair and chief executive officer of ABS. “The strength and uniqueness of this consortium is the members, which are all world-class organizations and real difference makers. The ability to harness the members’ experience and know-how, along with MIT’s technology reach, creates real jet fuel to drive progress,” Wiernicki says. “As well as researching key barriers, bottlenecks, and knowledge gaps in the emissions challenge, the consortium looks to enable development of the novel technology and policy innovation that will be key. Long term, the consortium hopes to provide the gravity we will need to bend the curve.”

  • Study: Climate change will reduce the number of satellites that can safely orbit in space

MIT aerospace engineers have found that greenhouse gas emissions are changing the environment of near-Earth space in ways that, over time, will reduce the number of satellites that can sustainably operate there.

In a study appearing today in Nature Sustainability, the researchers report that carbon dioxide and other greenhouse gases can cause the upper atmosphere to shrink. An atmospheric layer of special interest is the thermosphere, where the International Space Station and most satellites orbit today. When the thermosphere contracts, the decreasing density reduces atmospheric drag — a force that pulls old satellites and other debris down to altitudes where they will encounter air molecules and burn up. Less drag therefore means extended lifetimes for space junk, which will litter sought-after regions for decades and increase the potential for collisions in orbit.

The team carried out simulations of how carbon emissions affect the upper atmosphere and orbital dynamics, in order to estimate the “satellite carrying capacity” of low Earth orbit. These simulations predict that by the year 2100, the carrying capacity of the most popular regions could be reduced by 50 to 66 percent due to the effects of greenhouse gases.

“Our behavior with greenhouse gases here on Earth over the past 100 years is having an effect on how we operate satellites over the next 100 years,” says study author Richard Linares, associate professor in MIT’s Department of Aeronautics and Astronautics (AeroAstro).

“The upper atmosphere is in a fragile state as climate change disrupts the status quo,” adds lead author William Parker, a graduate student in AeroAstro. “At the same time, there’s been a massive increase in the number of satellites launched, especially for delivering broadband internet from space. If we don’t manage this activity carefully and work to reduce our emissions, space could become too crowded, leading to more collisions and debris.”

The study includes co-author Matthew Brown of the University of Birmingham.

Sky fall

The thermosphere naturally contracts and expands every 11 years in response to the sun’s regular activity cycle. When the sun’s activity is low, the Earth receives less radiation, and its outermost atmosphere temporarily cools and contracts before expanding again during solar maximum.

In the 1990s, scientists wondered how the thermosphere might respond to greenhouse gases. Their preliminary modeling showed that, while the gases trap heat in the lower atmosphere, where we experience global warming and weather, the same gases radiate heat away at much higher altitudes, effectively cooling the thermosphere. With this cooling, the researchers predicted that the thermosphere should shrink, reducing atmospheric density at high altitudes.

In the last decade, scientists have been able to measure changes in drag on satellites, which has provided some evidence that the thermosphere is contracting in response to something more than the sun’s natural 11-year cycle. “The sky is quite literally falling — just at a rate that’s on the scale of decades,” Parker says. “And we can see this by how the drag on our satellites is changing.”

The MIT team wondered how that response will affect the number of satellites that can safely operate in Earth’s orbit. Today, there are over 10,000 satellites drifting through low Earth orbit, the region of space up to 1,200 miles (2,000 kilometers) from Earth’s surface. These satellites deliver essential services, including internet, communications, navigation, weather forecasting, and banking. The satellite population has ballooned in recent years, requiring operators to perform regular collision-avoidance maneuvers to keep safe. Any collisions that do occur can generate debris that remains in orbit for decades or centuries, increasing the chance of follow-on collisions with satellites, both old and new.

“More satellites have been launched in the last five years than in the preceding 60 years combined,” Parker says. “One of the key things we’re trying to understand is whether the path we’re on today is sustainable.”

Crowded shells

In their new study, the researchers simulated different greenhouse gas emissions scenarios over the next century to investigate impacts on atmospheric density and drag. For each “shell,” or altitude range of interest, they then modeled the orbital dynamics and the risk of satellite collisions based on the number of objects within the shell. They used this approach to identify each shell’s “carrying capacity” — a term typically used in studies of ecology to describe the number of individuals that an ecosystem can support.

“We’re taking that carrying capacity idea and translating it to this space sustainability problem, to understand how many satellites low Earth orbit can sustain,” Parker explains.

The team compared several scenarios: one in which greenhouse gas concentrations remain at their year-2000 level, and others in which emissions change according to the Intergovernmental Panel on Climate Change (IPCC) Shared Socioeconomic Pathways (SSPs). They found that scenarios with continuing increases in emissions would lead to a significantly reduced carrying capacity throughout low Earth orbit. In particular, the team estimates that by the end of this century, the number of satellites safely accommodated between the altitudes of 200 and 1,000 kilometers could be reduced by 50 to 66 percent compared with a scenario in which emissions remain at year-2000 levels.
If satellite capacity is exceeded, even in a local region, the researchers predict that the region will experience a “runaway instability,” or a cascade of collisions that would create so much debris that satellites could no longer safely operate there.

Their predictions forecast out to the year 2100, but the team says that certain shells in the atmosphere today are already crowding up with satellites, particularly from recent “megaconstellations” such as SpaceX’s Starlink, which comprises fleets of thousands of small internet satellites.

“The megaconstellation is a new trend, and we’re showing that because of climate change, we’re going to have a reduced capacity in orbit,” Linares says. “And in local regions, we’re close to approaching this capacity value today.”

“We rely on the atmosphere to clean up our debris. If the atmosphere is changing, then the debris environment will change too,” Parker adds. “We show the long-term outlook on orbital debris is critically dependent on curbing our greenhouse gas emissions.”

This research is supported, in part, by the U.S. National Science Foundation, the U.S. Air Force, and the U.K. Natural Environment Research Council.

  • Study: The ozone hole is healing, thanks to global reduction of CFCs

    A new MIT-led study confirms that the Antarctic ozone layer is healing, as a direct result of global efforts to reduce ozone-depleting substances.Scientists including the MIT team have observed signs of ozone recovery in the past. But the new study is the first to show, with high statistical confidence, that this recovery is due primarily to the reduction of ozone-depleting substances, versus other influences such as natural weather variability or increased greenhouse gas emissions to the stratosphere.“There’s been a lot of qualitative evidence showing that the Antarctic ozone hole is getting better. This is really the first study that has quantified confidence in the recovery of the ozone hole,” says study author Susan Solomon, the Lee and Geraldine Martin Professor of Environmental Studies and Chemistry. “The conclusion is, with 95 percent confidence, it is recovering. Which is awesome. And it shows we can actually solve environmental problems.”The new study appears today in the journal Nature. Graduate student Peidong Wang from the Solomon group in the Department of Earth, Atmospheric and Planetary Sciences (EAPS) is the lead author. His co-authors include Solomon and EAPS Research Scientist Kane Stone, along with collaborators from multiple other institutions.Roots of ozone recoveryWithin the Earth’s stratosphere, ozone is a naturally occurring gas that acts as a sort of sunscreen, protecting the planet from the sun’s harmful ultraviolet radiation. In 1985, scientists discovered a “hole” in the ozone layer over Antarctica that opened up during the austral spring, between September and December. 
This seasonal ozone depletion was suddenly allowing UV rays to filter down to the surface, leading to skin cancer and other adverse health effects.In 1986, Solomon, who was then working at the National Oceanic and Atmospheric Administration (NOAA), led expeditions to the Antarctic, where she and her colleagues gathered evidence that quickly confirmed the ozone hole’s cause: chlorofluorocarbons, or CFCs — chemicals that were then used in refrigeration, air conditioning, insulation, and aerosol propellants. When CFCs drift up into the stratosphere, they can break down ozone under certain seasonal conditions.The following year, those relevations led to the drafting of the Montreal Protocol — an international treaty that aimed to phase out the production of CFCs and other ozone-depleting substances, in hopes of healing the ozone hole.In 2016, Solomon led a study reporting key signs of ozone recovery. The ozone hole seemed to be shrinking with each year, especially in September, the time of year when it opens up. Still, these observations were qualitative. The study showed large uncertainties regarding how much of this recovery was due to concerted efforts to reduce ozone-depleting substances, or if the shrinking ozone hole was a result of other “forcings,” such as year-to-year weather variability from El Niño, La Niña, and the polar vortex.“While detecting a statistically significant increase in ozone is relatively straightforward, attributing these changes to specific forcings is more challenging,” says Wang.Anthropogenic healingIn their new study, the MIT team took a quantitative approach to identify the cause of Antarctic ozone recovery. The researchers borrowed a method from the climate change community, known as “fingerprinting,” which was pioneered by Klaus Hasselmann, who was awarded the Nobel Prize in Physics in 2021 for the technique. 
    In the context of climate, fingerprinting refers to a method that isolates the influence of specific climate factors, apart from natural, meteorological noise. Hasselmann applied fingerprinting to identify, confirm, and quantify the anthropogenic fingerprint of climate change.

    Solomon and Wang looked to apply the fingerprinting method to identify another anthropogenic signal: the effect of human reductions in ozone-depleting substances on the recovery of the ozone hole.

    “The atmosphere has really chaotic variability within it,” Solomon says. “What we’re trying to detect is the emerging signal of ozone recovery against that kind of variability, which also occurs in the stratosphere.”

    The researchers started with simulations of the Earth’s atmosphere and generated multiple “parallel worlds,” or simulations of the same global atmosphere, under different starting conditions. For instance, they ran simulations under conditions that assumed no increase in greenhouse gases or ozone-depleting substances. Under these conditions, any changes in ozone should be the result of natural weather variability. They also ran simulations with only increasing greenhouse gases, as well as only decreasing ozone-depleting substances.

    They compared these simulations to observe how ozone in the Antarctic stratosphere changed, both with season, and across different altitudes, in response to different starting conditions. From these simulations, they mapped out the times and altitudes where ozone recovered from month to month, over several decades, and identified a key “fingerprint,” or pattern, of ozone recovery that was specifically due to conditions of declining ozone-depleting substances.

    The team then looked for this fingerprint in actual satellite observations of the Antarctic ozone hole from 2005 to the present day. They found that, over time, the fingerprint that they identified in simulations became clearer and clearer in observations.
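The pattern-matching at the heart of fingerprinting can be illustrated with a toy calculation. This is only a sketch, not the study's actual code: the grid size, ensemble size, and all values below are invented. What it shows is the standard logic — build a fingerprint pattern from the difference between forced-response and control ensemble means, project observations onto that pattern, and compare the result against the projection noise of independent control runs.

```python
import numpy as np

# Toy fingerprinting sketch; all numbers are synthetic. The real study's
# fingerprint spans months and stratospheric altitudes, not a flat grid.
rng = np.random.default_rng(42)

n_cells = 24   # month-by-altitude grid cells (invented)
n_runs = 50    # "parallel world" simulations per scenario (invented)

# Ensembles: runs with declining ozone-depleting substances (forced)
# versus runs driven only by natural variability.
forced = rng.normal(1.0, 0.3, (n_runs, n_cells))
natural = rng.normal(0.0, 0.3, (n_runs, n_cells))

# Fingerprint: normalized forced-minus-control mean pattern.
fingerprint = forced.mean(axis=0) - natural.mean(axis=0)
fingerprint /= np.linalg.norm(fingerprint)

def project(field, pattern):
    """Amplitude of `pattern` present in a field (or stack of fields)."""
    return field @ pattern

# Noise distribution: amplitudes seen in a fresh set of control runs.
control = rng.normal(0.0, 0.3, (n_runs, n_cells))
noise = project(control, fingerprint)

# A hypothetical "observed" field resembling the forced response.
obs = rng.normal(0.8, 0.3, n_cells)
signal = project(obs, fingerprint)

# Signal-to-noise ratio; |S/N| above ~2 corresponds to ~95% confidence.
s_to_n = (signal - noise.mean()) / noise.std(ddof=1)
print(f"signal-to-noise: {s_to_n:.1f}")
```

Because the synthetic observation resembles the forced pattern, its projection stands far outside the spread of the control-run projections, which is exactly the condition the study reports reaching in the satellite record.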
    In 2018, the fingerprint was at its strongest, and the team could say with 95 percent confidence that ozone recovery was due mainly to reductions in ozone-depleting substances.

    “After 15 years of observational records, we see this signal to noise with 95 percent confidence, suggesting there’s only a very small chance that the observed pattern similarity can be explained by variability noise,” Wang says. “This gives us confidence in the fingerprint. It also gives us confidence that we can solve environmental problems. What we can learn from ozone studies is how different countries can swiftly follow these treaties to decrease emissions.”

    If the trend continues, and the fingerprint of ozone recovery grows stronger, Solomon anticipates that soon there will be a year, here and there, when the ozone layer stays entirely intact. And eventually, the ozone hole should stay shut for good.

    “By something like 2035, we might see a year when there’s no ozone hole depletion at all in the Antarctic. And that will be very exciting for me,” she says. “And some of you will see the ozone hole go away completely in your lifetimes. And people did that.”

    This research was supported, in part, by the National Science Foundation and NASA.


    Reducing carbon emissions from residential heating: A pathway forward

    In the race to reduce climate-warming carbon emissions, the buildings sector is falling behind. While carbon dioxide (CO2) emissions in the U.S. electric power sector dropped by 34 percent between 2005 and 2021, emissions in the building sector declined by only 18 percent in that same time period. Moreover, in extremely cold locations, burning natural gas to heat houses can make up a substantial share of the emissions portfolio. Therefore, steps to electrify buildings in general, and residential heating in particular, are essential for decarbonizing the U.S. energy system.

    But that change will increase demand for electricity and decrease demand for natural gas. What will be the net impact of those two changes on carbon emissions and on the cost of decarbonizing? And how will the electric power and natural gas sectors handle the new challenges involved in their long-term planning for future operations and infrastructure investments?

    A new study by MIT researchers with support from the MIT Energy Initiative (MITEI) Future Energy Systems Center unravels the impacts of various levels of electrification of residential space heating on the joint power and natural gas systems. A specially devised modeling framework enabled them to estimate not only the added costs and emissions for the power sector to meet the new demand, but also any changes in costs and emissions that result for the natural gas sector.

    The analyses brought some surprising outcomes. For example, they show that — under certain conditions — switching 80 percent of homes to heating by electricity could cut carbon emissions and at the same time significantly reduce costs over the combined natural gas and electric power sectors relative to the case in which there is only modest switching.
    That outcome depends on two changes: Consumers must install high-efficiency heat pumps plus take steps to prevent heat losses from their homes, and planners in the power and the natural gas sectors must work together as they make long-term infrastructure and operations decisions. Based on their findings, the researchers stress the need for strong state, regional, and national policies that encourage and support the steps that homeowners and industry planners can take to help decarbonize today’s building sector.

    A two-part modeling approach

    To analyze the impacts of electrification of residential heating on costs and emissions in the combined power and gas sectors, a team of MIT experts in building technology, power systems modeling, optimization techniques, and more developed a two-part modeling framework. Team members included Rahman Khorramfar, a senior postdoc in MITEI and the Laboratory for Information and Decision Systems (LIDS); Morgan Santoni-Colvin SM ’23, a former MITEI graduate research assistant, now an associate at Energy and Environmental Economics, Inc.; Saurabh Amin, a professor in the Department of Civil and Environmental Engineering and principal investigator in LIDS; Audun Botterud, a principal research scientist in LIDS; Leslie Norford, a professor in the Department of Architecture; and Dharik Mallapragada, a former MITEI principal research scientist, now an assistant professor at New York University, who led the project. They describe their new methods and findings in a paper published in the journal Cell Reports Sustainability on Feb. 6.

    The first model in the framework quantifies how various levels of electrification will change end-use demand for electricity and for natural gas, and the impacts of possible energy-saving measures that homeowners can take to help.
    “To perform that analysis, we built a ‘bottom-up’ model — meaning that it looks at electricity and gas consumption of individual buildings and then aggregates their consumption to get an overall demand for power and for gas,” explains Khorramfar. By assuming a wide range of building “archetypes” — that is, groupings of buildings with similar physical characteristics and properties — coupled with trends in population growth, the team could explore how demand for electricity and for natural gas would change under each of five assumed electrification pathways: “business as usual” with modest electrification, medium electrification (about 60 percent of homes are electrified), high electrification (about 80 percent of homes make the change), and medium and high electrification with “envelope improvements,” such as sealing up heat leaks and adding insulation.

    The second part of the framework consists of a model that takes the demand results from the first model as inputs and “co-optimizes” the overall electricity and natural gas system to minimize annual investment and operating costs while adhering to any constraints, such as limits on emissions or on resource availability. The modeling framework thus enables the researchers to explore the impact of each electrification pathway on the infrastructure and operating costs of the two interacting sectors.

    The New England case study: A challenge for electrification

    As a case study, the researchers chose New England, a region where the weather is sometimes extremely cold and where burning natural gas to heat houses contributes significantly to overall emissions. “Critics will say that electrification is never going to happen [in New England]. It’s just too expensive,” comments Santoni-Colvin. But he notes that most studies focus on the electricity sector in isolation. The new framework considers the joint operation of the two sectors and then quantifies their respective costs and emissions.
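The "bottom-up" aggregation described above can be sketched in a few lines. This is a minimal illustration, not the study's model: the archetype names, home counts, per-home consumption figures, heat-pump efficiency, and envelope factor are all invented. It only shows the aggregation step — per-archetype demand, scaled by building counts, with heating load shifted from gas to electricity under an assumed electrification share.

```python
# Hypothetical archetypes: annual per-home demand before electrification
# (elec in MWh of electricity, gas in MMBtu of natural gas). All invented.
archetypes = {
    "pre-1970 single-family":  {"count": 1_200_000, "elec": 7.0, "gas": 80.0},
    "post-2000 single-family": {"count": 600_000,   "elec": 8.0, "gas": 55.0},
    "multifamily unit":        {"count": 900_000,   "elec": 4.0, "gas": 35.0},
}

MMBTU_TO_MWH = 0.293  # energy-unit conversion

def regional_demand(share_electrified, heat_pump_gain=3.0, envelope_factor=1.0):
    """Aggregate regional demand (elec MWh, gas MMBtu).

    Electrified homes drop gas heating; the heating load moves to
    electricity at 1/heat_pump_gain of the gas energy (the heat pump's
    efficiency advantage), scaled by envelope_factor (<1 = tighter homes).
    """
    elec = gas = 0.0
    for a in archetypes.values():
        n = a["count"]
        heat_mwh = a["gas"] * MMBTU_TO_MWH  # heating energy if delivered as gas
        gas += n * (1 - share_electrified) * a["gas"]
        elec += n * a["elec"]
        elec += n * share_electrified * envelope_factor * heat_mwh / heat_pump_gain
    return elec, gas

base_e, base_g = regional_demand(0.0)
high_e, high_g = regional_demand(0.8, envelope_factor=0.7)
print(f"electricity: {100 * (high_e / base_e - 1):+.0f}%, "
      f"gas: {100 * (high_g / base_g - 1):+.0f}%")
```

Even this toy version reproduces the qualitative trade-off the study quantifies: electricity demand rises substantially while gas demand collapses, and tighter building envelopes damp the electricity increase.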
    “We know that electrification will require large investments in the electricity infrastructure,” says Santoni-Colvin. “But what hasn’t been well quantified in the literature is the savings that we generate on the natural gas side by doing that — so, the system-level savings.”

    Using their framework, the MIT team performed model runs aimed at an 80 percent reduction in building-sector emissions relative to 1990 levels — a target consistent with regional policy goals for 2050. The researchers defined parameters including details about building archetypes, the regional electric power system, existing and potential renewable generating systems, battery storage, availability of natural gas, and other key factors describing New England.

    They then performed analyses assuming various scenarios with different mixes of home improvements. While most studies assume typical weather, they instead developed 20 projections of annual weather data based on historical weather patterns and adjusted for the effects of climate change through 2050. They then analyzed their five levels of electrification.

    Relative to business-as-usual projections, results from the framework showed that high electrification of residential heating could more than double the demand for electricity during peak periods and increase overall electricity demand by close to 60 percent. Assuming that building-envelope improvements are deployed in parallel with electrification reduces the magnitude and weather sensitivity of peak loads and creates overall efficiency gains that reduce the combined demand for electricity plus natural gas for home heating by up to 30 percent relative to the present day. Notably, a combination of high electrification and envelope improvements resulted in the lowest average cost for the overall electric power-natural gas system in 2050.

    Lessons learned

    Replacing existing natural gas-burning furnaces and boilers with heat pumps reduces overall energy consumption.
    Santoni-Colvin calls it “something of an intuitive result” that could be expected because heat pumps are “just that much more efficient than old, fossil fuel-burning systems. But even so, we were surprised by the gains.”

    Other unexpected results include the importance of homeowners making more traditional energy efficiency improvements, such as adding insulation and sealing air leaks — steps supported by recent rebate policies. Those changes are critical to reducing costs that would otherwise be incurred for upgrading the electricity grid to accommodate the increased demand. “You can’t just go wild dropping heat pumps into everybody’s houses if you’re not also considering other ways to reduce peak loads. So it really requires an ‘all of the above’ approach to get to the most cost-effective outcome,” says Santoni-Colvin.

    Testing a range of weather outcomes also provided important insights. Demand for heating fuel is very weather-dependent, yet most studies are based on a limited set of weather data — often a “typical year.” The researchers found that electrification can lead to extended peak electric load events that can last for a few days during cold winters. Accordingly, the researchers conclude that there will be a continuing need for a “firm, dispatchable” source of electricity; that is, a power-generating system that can be relied on to produce power any time it’s needed — unlike solar and wind systems. As examples, they modeled some possible technologies, including power plants fired by a low-carbon fuel or by natural gas equipped with carbon capture equipment. But they point out that there’s no way of knowing what types of firm generators will be available in 2050. It could be a system that’s not yet mature, or perhaps doesn’t even exist today.

    In presenting their findings, the researchers note several caveats. For one thing, their analyses don’t include the estimated cost to homeowners of installing heat pumps.
    While that cost is widely discussed and debated, that issue is outside the scope of their current project.

    In addition, the study doesn’t specify what happens to existing natural gas pipelines. “Some homes are going to electrify and get off the gas system and not have to pay for it, leaving other homes with increasing rates because the gas system cost now has to be divided among fewer customers,” says Khorramfar. “That will inevitably raise equity questions that need to be addressed by policymakers.”

    Finally, the researchers note that policies are needed to drive residential electrification. Current financial support for installation of heat pumps and steps to make homes more thermally efficient are a good start. But such incentives must be coupled with a new approach to planning energy infrastructure investments. Traditionally, electric power planning and natural gas planning are performed separately. However, to decarbonize residential heating, the two sectors should coordinate when planning future operations and infrastructure needs. Results from the MIT analysis indicate that such cooperation could significantly reduce both emissions and costs for residential heating — a change that would yield a much-needed step toward decarbonizing the buildings sector as a whole.
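The study's use of 20 synthetic weather years, rather than a single "typical year," guards against underestimating rare cold-snap peaks. A toy version of that stress test (every parameter below is invented) shows why the worst weather year, not the median one, sets the requirement for firm, dispatchable capacity:

```python
import numpy as np

# Toy weather stress test: 20 synthetic years of hourly temperatures and a
# simple temperature-dependent electric load. All parameters are invented.
rng = np.random.default_rng(7)
n_years, n_hours = 20, 8760

# Seasonal temperature cycle (deg C), coldest in midwinter, plus noise.
seasonal = 10 + 12 * np.sin(2 * np.pi * np.arange(n_hours) / n_hours - np.pi / 2)
temps = seasonal + rng.normal(0, 4, (n_years, n_hours))

def hourly_load(temp, base=10.0, hdd_slope=0.8):
    """Electric load (GW): base load plus heating demand below 15 C."""
    return base + hdd_slope * np.clip(15.0 - temp, 0, None)

loads = hourly_load(temps)        # shape (20 years, 8760 hours)
peaks = loads.max(axis=1)         # one annual peak per weather year

print(f"median annual peak: {np.median(peaks):.1f} GW")
print(f"worst-year peak:    {peaks.max():.1f} GW")
```

Planning to the median year would leave the system short in the worst year; the gap between the two numbers is the extra firm capacity that only a multi-year weather analysis reveals.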


    Pivot Bio is using microbial nitrogen to make agriculture more sustainable

    The Haber-Bosch process, which converts atmospheric nitrogen to make ammonia fertilizer, revolutionized agriculture and helped feed the world’s growing population, but it also created huge environmental problems. It is one of the most energy-intensive chemical processes in the world, responsible for 1-2 percent of global energy consumption. It also releases nitrous oxide, a potent greenhouse gas that harms the ozone layer. Excess nitrogen also routinely runs off farms into waterways, harming marine life and polluting groundwater.

    In place of synthetic fertilizer, Pivot Bio has engineered nitrogen-producing microbes to make farming more sustainable. The company, which was co-founded by Professor Chris Voigt, Karsten Temme, and Alvin Tamsir, has engineered its microbes to grow on plant roots, where they feed on the root’s sugars and precisely deliver nitrogen in return.

    Pivot’s microbial colonies grow with the plant and produce more nitrogen at exactly the time the plant needs it, minimizing nitrogen runoff.

    “The way we have delivered nutrients to support plant growth historically is fertilizer, but that’s an inefficient way to get all the nutrients you need,” says Temme, Pivot’s chief innovation officer. “We have the ability now to help farmers be more efficient and productive with microbes.”

    Farmers can replace up to 40 pounds per acre of traditional nitrogen with Pivot’s product, which amounts to about a quarter of the total nitrogen needed for a crop like corn.

    Pivot’s products are already being used to grow corn, wheat, barley, oats, and other grains across millions of acres of American farmland, eliminating hundreds of thousands of tons of CO2 equivalent in the process. The company’s impact is even more striking given its unlikely origins, which trace back to one of the most challenging times of Voigt’s career.

    A Pivot from despair

    The beginning of every faculty member’s career can be a sink-or-swim moment, and by Voigt’s own account, he was drowning.
    As a freshly minted assistant professor at the University of California at San Francisco, Voigt was struggling to stand up his lab, attract funding, and get experiments started.

    Around 2008, Voigt joined a research group out of the University of California at Berkeley that was writing a grant proposal focused on photovoltaic materials. His initial role was minor, but a senior researcher pulled out of the group a week before the proposal had to be submitted, so Voigt stepped up.

    “I said ‘I’ll finish this section in a week,’” Voigt recalls. “It was my big chance.”

    For the proposal, Voigt detailed an ambitious plan to rearrange the genetics of biologic photosynthetic systems to make them more efficient. He barely submitted it in time.

    A few months went by, then the proposal reviews finally came back. Voigt hurried to the meeting with some of the most senior researchers at UC Berkeley to discuss the responses.

    “My part of the proposal got completely slammed,” Voigt says. “There were something like 15 reviews on it — they were longer than the actual grant — and it’s just one after another tearing into my proposal. All the most famous people are in this meeting, future energy secretaries, future leaders of the university, and it was totally embarrassing. After that meeting, I was considering leaving academia.”

    A few discouraging months later, Voigt got a call from Paul Ludden, the dean of the School of Science at UC Berkeley. He wanted to talk.

    “As I walk into Paul’s office, he’s reading my proposal,” Voigt recalls. “He sits me down and says, ‘Everybody’s telling me how terrible this is.’ I’m thinking, ‘Oh my God.’ But then he says, ‘I think there’s something here. Your idea is good, you just picked the wrong system.’”

    Ludden went on to explain to Voigt that he should apply his gene-swapping idea to nitrogen fixation. He even offered to send Voigt a postdoc from his lab, Dehua Zhao, to help.
    Voigt paired Zhao with Temme, and sure enough, the resulting 2011 paper on their work was well-received by the nitrogen fixation community.

    “Nitrogen fixation has been a holy grail for scientists, agronomists, and farmers for almost a century, ever since somebody discovered the first microbe that can fix nitrogen for legumes like soybeans,” Temme says. “Everybody always said that someday we’ll be able to do this for the cereal crops. The excitement with Pivot was this is the first time that technology became accessible.”

    Voigt had moved to MIT in 2010. When the paper came out, he founded Pivot Bio with Temme and another Berkeley researcher, Alvin Tamsir. Since then, Voigt, who is the Daniel I.C. Wang Professor at MIT and the head of the Department of Biological Engineering, has continued collaborating with Pivot on things like increasing nitrogen production, making strains more stable, and making them inducible to different signals from the plant. Pivot has licensed technology from MIT, and the research has also received support from MIT’s Abdul Latif Jameel Water and Food Systems Lab (J-WAFS).

    Pivot’s first goals were to gain regulatory approval and prove themselves in the marketplace. To gain approval in the U.S., Pivot’s team focused on using DNA from within the same organism rather than bringing in totally new DNA, which simplified the approval process. It also partnered with independent corn seed dealers to get its product to farms. Early deployments occurred in 2019.

    Farmers apply Pivot’s product at planting, either as a liquid that gets sprayed on the soil or as a dry powder that is rehydrated and applied to the seeds as a coating. The microbes live on the surface of the growing root system, eating plant sugars and releasing nitrogen throughout the plant’s life cycle.

    “Today, our microbes colonize just a fraction of the total sugars provided by the plant,” Temme explains.
    “They’re also sharing ammonia with the plant, and all of those things are just a portion of what’s possible technically. Our team is always trying to figure out how to make those microbes more efficient at getting the energy they need to grow or at fixing nitrogen and sharing it with the crop.”

    In 2023, Pivot started the N-Ovator program to connect companies with growers who practice sustainable farming using Pivot’s microbial nitrogen. Through the program, companies buy nitrogen credits and farmers can get paid by verifying their practices. The program was named one of the Inventions of the Year by Time Magazine last year and has paid out millions of dollars to farmers to date.

    Microbial nitrogen and beyond

    Pivot is currently selling to farmers across the U.S. and working with smallholder farmers in Kenya. It’s also hoping to gain approval for its microbial solution in Brazil and Canada, which it hopes will be its next markets.

    “How do we get the economics to make sense for everybody — the farmers, our partners, and the company?” Temme says of Pivot’s mission. “Because this truly can be a deflationary technology that upends the very expensive traditional way of making fertilizer.”

    Pivot’s team is also extending the product to cotton, and Temme says microbes can be a nitrogen source for any type of plant on the planet. Further down the line, the company believes it can help farmers with other nutrients essential to help their crops grow.

    “Now that we’ve established our technology, how can Pivot help farmers overcome all the other limitations they face with crop nutrients to maximize yields?” Temme asks. “That really starts to change the way a farmer thinks about managing the entire acre from a price, productivity, and sustainability perspective.”
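The replacement figures quoted earlier imply a bit of arithmetic worth making explicit: if 40 pounds of nitrogen per acre is "about a quarter" of a corn crop's total need, the implied total is on the order of 160 pounds per acre, with the balance still coming from conventional fertilizer. A quick check, treating "about a quarter" as exactly 0.25 for illustration:

```python
# Back-of-envelope check of the figures quoted in the article.
microbial_n = 40       # lb N per acre supplied by the microbes (from article)
share_of_total = 0.25  # "about a quarter" (from article), taken as exact here

implied_total = microbial_n / share_of_total
synthetic_n_remaining = implied_total - microbial_n

print(f"implied total N need:       {implied_total:.0f} lb/acre")
print(f"synthetic N still required: {synthetic_n_remaining:.0f} lb/acre")
```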


    Puzzling out climate change

    Shreyaa Raghavan’s journey into solving some of the world’s toughest challenges started with a simple love for puzzles. By high school, her knack for problem-solving naturally drew her to computer science. Through her participation in an entrepreneurship and leadership program, she built apps and twice made it to the semifinals of the program’s global competition.

    Her early successes made a computer science career seem like an obvious choice, but Raghavan says a significant competing interest left her torn.

    “Computer science sparks that puzzle-, problem-solving part of my brain,” says Raghavan ’24, an Accenture Fellow and a PhD candidate in MIT’s Institute for Data, Systems, and Society. “But while I always felt like building mobile apps was a fun little hobby, it didn’t feel like I was directly solving societal challenges.”

    Her perspective shifted when, as an MIT undergraduate, Raghavan participated in an Undergraduate Research Opportunity in the Photovoltaic Research Laboratory, now known as the Accelerated Materials Laboratory for Sustainability. There, she discovered how computational techniques like machine learning could optimize materials for solar panels — a direct application of her skills toward mitigating climate change.

    “This lab had a very diverse group of people, some from a computer science background, some from a chemistry background, some who were hardcore engineers. All of them were communicating effectively and working toward one unified goal — building better renewable energy systems,” Raghavan says. “It opened my eyes to the fact that I could use very technical tools that I enjoy building and find fulfillment in that by helping solve major climate challenges.”

    With her sights set on applying machine learning and optimization to energy and climate, Raghavan joined Cathy Wu’s lab when she started her PhD in 2023.
    The lab focuses on building more sustainable transportation systems, a field that resonated with Raghavan due to its universal impact and its outsized role in climate change — transportation accounts for roughly 30 percent of greenhouse gas emissions.

    “If we were to throw all of the intelligent systems we are exploring into the transportation networks, by how much could we reduce emissions?” she asks, summarizing a core question of her research.

    Wu, an associate professor in the Department of Civil and Environmental Engineering, stresses the value of Raghavan’s work.

    “Transportation is a critical element of both the economy and climate change, so potential changes to transportation must be carefully studied,” Wu says. “Shreyaa’s research into smart congestion management is important because it takes a data-driven approach to add rigor to the broader research supporting sustainability.”

    Raghavan’s contributions have been recognized with the Accenture Fellowship, a cornerstone of the MIT-Accenture Convergence Initiative for Industry and Technology. As an Accenture Fellow, she is exploring the potential impact of technologies for avoiding stop-and-go traffic and its emissions, using systems such as networked autonomous vehicles and digital speed limits that vary according to traffic conditions — solutions that could advance decarbonization in the transportation sector at relatively low cost and in the near term.

    Raghavan says she appreciates the Accenture Fellowship not only for the support it provides, but also because it demonstrates industry involvement in sustainable transportation solutions.

    “It’s important for the field of transportation, and also energy and climate as a whole, to synergize with all of the different stakeholders,” she says. “I think it’s important for industry to be involved in this issue of incorporating smarter transportation systems to decarbonize transportation.”

    Raghavan has also received a fellowship supporting her research from the U.S.
    Department of Transportation.

    “I think it’s really exciting that there’s interest from the policy side with the Department of Transportation and from the industry side with Accenture,” she says.

    Raghavan believes that addressing climate change requires collaboration across disciplines. “I think with climate change, no one industry or field is going to solve it on its own. It’s really got to be each field stepping up and trying to make a difference,” she says. “I don’t think there’s any silver-bullet solution to this problem. It’s going to take many different solutions from different people, different angles, different disciplines.”

    With that in mind, Raghavan has been very active in the MIT Energy and Climate Club since joining about three years ago, which, she says, “was a really cool way to meet lots of people who were working toward the same goal, the same climate goals, the same passions, but from completely different angles.”

    This year, Raghavan is on the community and education team, which works to build the community at MIT that is working on climate and energy issues. As part of that work, Raghavan is launching a mentorship program for undergraduates, pairing them with graduate students who help the undergrads develop ideas about how they can work on climate using their unique expertise.

    “I didn’t foresee myself using my computer science skills in energy and climate,” Raghavan says, “so I really want to give other students a clear pathway, or a clear sense of how they can get involved.”

    Raghavan has embraced her area of study even in terms of where she likes to think.

    “I love working on trains, on buses, on airplanes,” she says. “It’s really fun to be in transit and working on transportation problems.”

    Anticipating a trip to New York to visit a cousin, she holds no dread for the long train trip.

    “I know I’m going to do some of my best work during those hours,” she says. “Four hours there. Four hours back.”