More stories

  • Study: Climate change may make it harder to reduce smog in some regions

    Global warming will likely hinder our future ability to control ground-level ozone, a harmful air pollutant that is a primary component of smog, according to a new MIT study.

    The results could help scientists and policymakers develop more effective strategies for improving both air quality and human health. Ground-level ozone causes a host of detrimental health impacts, from asthma to heart disease, and contributes to thousands of premature deaths each year.

    The researchers’ modeling approach reveals that, as the Earth warms due to climate change, ground-level ozone will become less sensitive to reductions in nitrogen oxide emissions in eastern North America and Western Europe. In other words, it will take greater nitrogen oxide emission reductions to get the same air quality benefits. However, the study also shows that the opposite would be true in northeast Asia, where cutting emissions would have a greater impact on reducing ground-level ozone in the future.

    The researchers combined a climate model that simulates meteorological factors, such as temperature and wind speeds, with a chemical transport model that estimates the movement and composition of chemicals in the atmosphere. By generating a range of possible future outcomes, the researchers’ ensemble approach better captures inherent climate variability, allowing them to paint a fuller picture than many previous studies.

    “Future air quality planning should consider how climate change affects the chemistry of air pollution. We may need steeper cuts in nitrogen oxide emissions to achieve the same air quality goals,” says Emmie Le Roy, a graduate student in the MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS) and lead author of a paper on this study.

    Her co-authors include Anthony Y.H. Wong, a postdoc in the MIT Center for Sustainability Science and Strategy; Sebastian D. Eastham, principal research scientist in the MIT Center for Sustainability Science and Strategy; Arlene Fiore, the Peter H. Stone and Paola Malanotte Stone Professor of EAPS; and senior author Noelle Selin, a professor in the Institute for Data, Systems, and Society (IDSS) and EAPS. The research appears today in Environmental Science and Technology.

    Controlling ozone

    Ground-level ozone differs from the stratospheric ozone layer that protects the Earth from harmful UV radiation. It is a respiratory irritant that is harmful to the health of humans, animals, and plants.

    Controlling ground-level ozone is particularly challenging because it is a secondary pollutant, formed in the atmosphere by complex reactions involving nitrogen oxides and volatile organic compounds in the presence of sunlight.

    “That is why you tend to have higher ozone days when it is warm and sunny,” Le Roy explains.

    Regulators typically try to reduce ground-level ozone by cutting nitrogen oxide emissions from industrial processes.

    But it is difficult to predict the effects of those policies because ground-level ozone interacts with nitrogen oxides and volatile organic compounds in nonlinear ways. Depending on the chemical environment, reducing nitrogen oxide emissions could cause ground-level ozone to increase instead.

    “Past research has focused on the role of emissions in forming ozone, but the influence of meteorology is a really important part of Emmie’s work,” Selin says.

    To conduct their study, the researchers combined a global atmospheric chemistry model with a climate model that simulates future meteorology. They used the climate model to generate meteorological inputs for each future year in their study, simulating factors such as likely temperature and wind speeds, in a way that captures the inherent variability of a region’s climate. Then they fed those inputs to the atmospheric chemistry model, which calculates how the chemical composition of the atmosphere would change because of meteorology and emissions.

    The researchers focused on eastern North America, Western Europe, and northeast China, since those regions have historically high levels of the precursor chemicals that form ozone and well-established monitoring networks to provide data. They chose to model two future scenarios, one with high warming and one with low warming, over a 16-year period between 2080 and 2095, and compared them to a historical scenario capturing 2000 to 2015 to see the effects of a 10 percent reduction in nitrogen oxide emissions.

    Capturing climate variability

    “The biggest challenge is that the climate naturally varies from year to year. So, if you want to isolate the effects of climate change, you need to simulate enough years to see past that natural variability,” Le Roy says.

    They could overcome that challenge thanks to recent advances in atmospheric chemistry modeling and by taking advantage of parallel computing to simulate multiple years at the same time. They simulated five 16-year realizations, resulting in 80 model years for each scenario.
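
    The ensemble logic can be illustrated with a short sketch (not the study’s code, and with placeholder numbers): pool the paired baseline and reduced-NOx model years, then report the mean ozone response alongside the spread due to natural year-to-year variability.

    ```python
    # Minimal sketch of averaging an ozone response across an ensemble of model
    # years; the per-year values are hypothetical stand-ins for paired
    # baseline vs. reduced-NOx chemistry-model runs.
    import numpy as np

    rng = np.random.default_rng(0)

    # 5 realizations x 16 years of surface-ozone changes (ppb) after a 10% NOx cut.
    deltas = rng.normal(loc=-1.2, scale=0.8, size=5 * 16)

    print(f"mean ozone change: {deltas.mean():.2f} ppb")
    print(f"2.5-97.5 percentile range: [{np.percentile(deltas, 2.5):.2f}, "
          f"{np.percentile(deltas, 97.5):.2f}] ppb")
    ```
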
    The researchers found that eastern North America and Western Europe are especially sensitive to increases in nitrogen oxide emissions from the soil, which are natural emissions driven by increases in temperature. Due to that sensitivity, as the Earth warms and more nitrogen oxide from soil enters the atmosphere, reducing nitrogen oxide emissions from human activities will have less of an impact on ground-level ozone.

    “This shows how important it is to improve our representation of the biosphere in these models to better understand how climate change may impact air quality,” Le Roy says.

    On the other hand, since industrial processes in northeast Asia cause more ozone per unit of nitrogen oxide emitted, cutting emissions there would cause greater reductions in ground-level ozone in future warming scenarios.

    “But I wouldn’t say that is a good thing because it means that, overall, there are higher levels of ozone,” Le Roy adds.

    Running detailed meteorology simulations, rather than relying on annual average weather data, gave the researchers a more complete picture of the potential effects on human health.

    “Average climate isn’t the only thing that matters. One high ozone day, which might be a statistical anomaly, could mean we don’t meet our air quality target and have negative human health impacts that we should care about,” Le Roy says.

    In the future, the researchers want to continue exploring the intersection of meteorology and air quality. They also want to expand their modeling approach to consider other climate change factors with high variability, like wildfires or biomass burning.

    “We’ve shown that it is important for air quality scientists to consider the full range of climate variability, even if it is hard to do in your models, because it really does affect the answer that you get,” says Selin.

    This work is funded, in part, by the MIT Praecis Presidential Fellowship, the J.H. and E.V. Wade Fellowship, and the MIT Martin Family Society of Fellows for Sustainability.

  • A day in the life of MIT MBA student David Brown

    “MIT Sloan was my first and only choice,” says MIT graduate student David Brown. After receiving his BS in chemical engineering at the U.S. Military Academy at West Point, Brown spent eight years as a helicopter pilot in the U.S. Army, serving as a platoon leader and troop commander.

    Now in the final year of his MBA, Brown has co-founded a climate tech company — Helix Carbon — with Ariel Furst, an MIT assistant professor in the Department of Chemical Engineering, and Evan Haas MBA ’24, SM ’24. Their goal: erase the carbon footprint of tough-to-decarbonize industries like ironmaking, polyurethanes, and olefins by generating competitively priced, carbon-neutral fuels directly from waste carbon dioxide (CO2). It’s an ambitious project; they’re looking to scale the company large enough to have a gigaton-per-year impact on CO2 emissions. They have lab space off campus, and after graduation, Brown will take a full-time job as chief operating officer.

    “What I loved about the Army was that I felt every day that the work I was doing was important or impactful in some way. I wanted that to continue, and felt the best way to have the greatest possible positive impact was to use the operational skills I learned in the military to help close the gap between the lab and impact in the market.”

    The following photo essay provides a snapshot of a typical day for Brown as an MIT student.

    8:30 a.m. — “The first thing on my schedule today is meeting with the Helix Carbon team. Today, we’re talking about the results from the latest lab runs, and what they mean for planned experiments the rest of the week. We are also discussing our fundraising plans ahead of the investor meetings we have scheduled for later this week.”

    10:00 a.m. — “I spend a lot of time at the Martin Trust Center for MIT Entrepreneurship. It’s the hub of entrepreneurship at MIT. My pre-MBA internship, and my first work experience after leaving the Army, was as the program manager for delta v, the premier startup accelerator at MIT. That was also my introduction to the entrepreneurship ecosystem at MIT, and how I met Ariel. With zero hyperbole I can say that was a life-changing experience, and really defined the direction of my life out of the military.”

    10:30 a.m. — “In addition to working to fund and scale Helix Carbon, I have a lot of work to do to finish up the semester. Something I think is unique about MIT is that classes give a real-world perspective from people who are active participants on the cutting edge of what’s happening in that realm. For example, I’m taking Climate and Energy in the Global Economy, and the professor, Catherine Wolfram, has incredible experience both on the ground and in policy with both climate and energy.”

    11:00 a.m. — “When I arrived at MIT Sloan, I was grouped into my cohort team. We navigated the first semester core classes together and built a strong bond. We still meet up for coffee and have team dinners even a year-and-a-half later. I always find myself inspired by how much they’ve accomplished, and I consider myself incredibly lucky for their support and to call them my friends.”

    12 p.m. — “Next, I have a meeting with Bill Aulet, the managing director of the Trust Center, to prepare for an entrepreneurship accelerator called Third Derivative that Helix Carbon got picked up for. Sustainability startups from all over the U.S. and around the world come together to meet with each other and other mentors in order to share progress, best practices, and develop plans for moving forward.”

    12:30 p.m. — “Throughout the day, I run into friends, colleagues, and mentors. Even though MIT Sloan is pitched as a community experience, I didn’t expect how much of a community experience it really is. My classmates have been the absolute highlight of my time here, and I have learned so much from their experiences and from the way they carry themselves.”

    1 p.m. — “My only class today is Applied Behavioral Economics. I’m taking it almost entirely for pleasure — it’s such a fascinating topic. And the professor — Drazen Prelec — is one of the world’s foremost experts. It’s a class that challenges assumptions and gets me thinking. I really enjoy it.”

    2:30 p.m. — “I have a little bit of time before my next event. When I need a place that isn’t too crowded to think, I like to hang out on the couch on the sky bridge between the Tang Center and the Morris and Sophie Chang Building. When the weather is nice, I’ll head out to one of the open green spaces in Kendall Square, or to Urban Park across the street.”

    3:30 p.m. — “When I was the program manager for delta v, this was where I sat, and it’s still where I like to spend time when I’m at the Trust Center. Because it looks like a welcome desk, a lot of people come up to ask questions or talk about their startups. Since I used to work there I’m able to help them out pretty well!”

    5:00 p.m. — “For my last event of the day, I’m attending a seminar at the Priscilla King Gray Public Service Center (PKG Center) as part of their IDEAS Social Innovation Challenge, MIT’s 20-plus-year-old social impact incubator. The program works with MIT student-led teams addressing social and environmental challenges in our communities. The program has helped teach us critical frameworks and tools around setting goals for and measuring our social impact. We actually placed first in the Harvard Social Enterprise Conference Pitch competition thanks to the lessons we learned here!”

    7:00 p.m. — “Time to head home. A few days a week after work and class, my wife and I play in a combat archery league. It’s like dodgeball, but instead of dodgeballs everyone has a bow and you shoot arrows that have pillow tips. It’s incredible. Tons of fun. I have tried to recruit many of my classmates — marginal success rate!”


  • How can India decarbonize its coal-dependent electric power system?

    As the world struggles to reduce climate-warming carbon emissions, India has pledged to do its part, and its success is critical: In 2023, India was the third-largest carbon emitter worldwide. The Indian government has committed to reaching net-zero carbon emissions by 2070.

    To fulfill that promise, India will need to decarbonize its electric power system, and that will be a challenge: Fully 60 percent of India’s electricity comes from coal-burning power plants that are extremely inefficient. To make matters worse, India’s demand for electricity is projected to more than double in the coming decade due to population growth and increased use of air conditioning, electric cars, and so on.

    Despite having set an ambitious target, the Indian government has not proposed a plan for getting there. Indeed, as in other countries, the government continues to permit new coal-fired power plants to be built and aging plants to be renovated and their retirement postponed.

    To help India define an effective — and realistic — plan for decarbonizing its power system, key questions must be addressed. For example, India is already rapidly developing carbon-free solar and wind power generators. What opportunities remain for further deployment of renewable generation? Are there ways to retrofit or repurpose India’s existing coal plants that can substantially and affordably reduce their greenhouse gas emissions? And do the answers to those questions differ by region?

    With funding from IHI Corp. through the MIT Energy Initiative (MITEI), Yifu Ding, a postdoc at MITEI, and her colleagues set out to answer those questions by first using machine learning to determine the efficiency of each of India’s 806 operating coal plants, and then investigating the impacts that different decarbonization approaches would have on the mix of power plants and the price of electricity in 2035 under increasingly stringent caps on emissions.

    First step: Develop the needed dataset

    An important challenge in developing a decarbonization plan for India has been the lack of a complete dataset describing the country’s current power plants. While other studies have generated plans, they haven’t taken into account the wide variation among the coal-fired power plants in different regions of the country. “So, we first needed to create a dataset covering and characterizing all of the operating coal plants in India. Such a dataset was not available in the existing literature,” says Ding.

    Making a cost-effective plan for expanding the capacity of a power system requires knowing the efficiencies of all the power plants operating in the system. For this study, the researchers used as their metric the “station heat rate,” a standard measurement of the overall fuel efficiency of a given power plant. The station heat rate of each plant is needed in order to calculate the fuel consumption and power output of that plant as plans for capacity expansion are being developed.

    Efficiencies had been recorded for only some of the Indian coal plants, and those records predated 2022, so Ding and her team used machine-learning models to predict the current efficiencies of all the Indian coal plants now operating. In 2024, they created and posted online the first comprehensive, open-sourced dataset for all 806 power plants in 30 regions of India. The work won the 2024 MIT Open Data Prize.
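
    As a rough illustration of that gap-filling step, a regression model can be trained on the plants whose heat rates are recorded and then used to estimate the rest. The file name, feature set, and model choice below are assumptions made for the sketch, not details from the study.

    ```python
    # Hedged sketch: predict missing station heat rates from plant characteristics.
    import pandas as pd
    from sklearn.ensemble import GradientBoostingRegressor

    plants = pd.read_csv("india_coal_plants.csv")  # hypothetical input file
    features = ["capacity_mw", "age_years", "load_factor", "is_supercritical"]

    # Train on plants with a recorded station heat rate.
    known = plants.dropna(subset=["station_heat_rate"])
    model = GradientBoostingRegressor(random_state=0)
    model.fit(known[features], known["station_heat_rate"])

    # Fill in plants with no recorded efficiency using the trained model.
    missing = plants["station_heat_rate"].isna()
    plants.loc[missing, "station_heat_rate"] = model.predict(plants.loc[missing, features])
    ```
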
    This dataset includes each plant’s power capacity, efficiency, age, load factor (a measure indicating how much of the time it operates), water stress, and more.

    In addition, they categorized each plant according to its boiler design. A “supercritical” plant operates at a relatively high temperature and pressure, which makes it thermodynamically efficient, so it produces a lot of electricity for each unit of heat in the fuel. A “subcritical” plant runs at a lower temperature and pressure, so it’s less thermodynamically efficient. Most of the Indian coal plants are still subcritical plants running at low efficiency.

    Next step: Investigate decarbonization options

    Equipped with their detailed dataset covering all the coal power plants in India, the researchers were ready to investigate options for responding to tightening limits on carbon emissions. For that analysis, they turned to GenX, a modeling platform that was developed at MITEI to help guide decision-makers as they make investments and other plans for the future of their power systems.

    Ding built a GenX model based on India’s power system in 2020, including details about each power plant and the transmission network across 30 regions of the country. She also entered the coal price, potential resources for wind and solar power installations, and other attributes of each region. Based on the parameters given, the GenX model calculates the lowest-cost combination of equipment and operating conditions that can fulfill a defined future level of demand while also meeting specified policy constraints, including limits on carbon emissions. The model and all data sources were also released as open-source tools for all viewers to use.

    Ding and her colleagues — Dharik Mallapragada, a former principal research scientist at MITEI who is now an assistant professor of chemical and biomolecular engineering at the NYU Tandon School of Engineering and a MITEI visiting scientist; and Robert J. Stoner, the founding director of the MIT Tata Center for Technology and Design and former deputy director of MITEI for science and technology — then used the model to explore options for meeting demands in 2035 under progressively tighter carbon emissions caps, taking into account region-to-region variations in the efficiencies of the coal plants, the price of coal, and other factors. They describe their methods and their findings in a paper published in the journal Energy for Sustainable Development.

    In separate runs, they explored plans involving various combinations of current coal plants, possible new renewable plants, and more, to see their outcomes in 2035. Specifically, they assumed the following four “grid-evolution scenarios”:

    Baseline: The baseline scenario assumes limited onshore wind and solar photovoltaics development and excludes retrofitting options, representing a business-as-usual pathway.

    High renewable capacity: This scenario calls for the development of onshore wind and solar power without any supply chain constraints.

    Biomass co-firing: This scenario assumes the baseline limits on renewables, but here all coal plants — both subcritical and supercritical — can be retrofitted for “co-firing” with biomass, an approach in which clean-burning biomass replaces some of the coal fuel. Certain coal power plants in India already co-fire coal and biomass, so the technology is known.

    Carbon capture and sequestration plus biomass co-firing: This scenario is based on the same assumptions as the biomass co-firing scenario, with one addition: All of the high-efficiency supercritical plants are also retrofitted for carbon capture and sequestration (CCS), a technology that captures and removes carbon from a power plant’s exhaust stream and prepares it for permanent disposal. Thus far, CCS has not been used in India. This study specifies that 90 percent of all carbon in the power plant exhaust is captured.

    Ding and her team investigated power system planning under each of those grid-evolution scenarios and four assumptions about carbon caps: no cap, which is the current situation; 1,000 million tons (Mt) of carbon dioxide (CO2) emissions, which reflects India’s announced target for 2035; and two more-ambitious targets, namely 800 Mt and 500 Mt. For context, CO2 emissions from India’s power sector totaled about 1,100 Mt in 2021. (Note that transmission network expansion is allowed in all scenarios.)
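
    For intuition, the kind of least-cost problem GenX solves can be reduced to a toy linear program: meet demand at minimum cost subject to a carbon cap and per-source capacity limits. This sketch is far simpler than GenX itself, and every number in it is made up.

    ```python
    # Toy least-cost generation mix under a carbon cap (illustrative numbers only).
    from scipy.optimize import linprog

    cost = [3.0, 6.0, 4.5]        # $/unit: coal, renewables+storage, natural gas
    emissions = [1.0, 0.0, 0.45]  # tCO2/unit for each source
    demand, cap = 100.0, 40.0     # units of demand; carbon cap in tCO2

    res = linprog(
        c=cost,
        A_ub=[emissions], b_ub=[cap],           # total emissions within the cap
        A_eq=[[1.0, 1.0, 1.0]], b_eq=[demand],  # generation meets demand
        bounds=[(0, 80), (0, 60), (0, 50)],     # assumed per-source capacity limits
    )
    print(dict(zip(["coal", "renewables", "gas"], res.x.round(1))), f"cost={res.fun:.0f}")
    ```
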
    Key findings

    Assuming the adoption of carbon caps under the four scenarios generated a vast array of detailed numerical results. But taken together, the results show interesting trends in the cost-optimal mix of generating capacity and in the cost of electricity under the different scenarios.

    Even without any limits on carbon emissions, most new capacity additions will be wind and solar generators — the lowest-cost option for expanding India’s electricity-generation capacity. Indeed, this is observed to be the case now in India. However, the increasing demand for electricity will still require some new coal plants to be built. Model results show a 10 to 20 percent increase in coal plant capacity by 2035 relative to 2020.

    Under the baseline scenario, renewables are expanded up to the maximum allowed under the assumptions, implying that more deployment would be economical. More coal capacity is built, and as the cap on emissions tightens, there is also investment in natural gas power plants, as well as in batteries to help compensate for the now-large amount of intermittent solar and wind generation. When a 500 Mt cap on carbon is imposed, the cost of electricity generation is twice as high as it was with no cap.

    The high renewable capacity scenario reduces the development of new coal capacity and produces the lowest electricity cost of the four scenarios. Under the most stringent cap — 500 Mt — onshore wind farms play an important role in bringing the cost down. “Otherwise, it’ll be very expensive to reach such stringent carbon constraints,” notes Ding. “Certain coal plants that remain run only a few hours per year, so they are inefficient as well as financially unviable. But they still need to be there to support wind and solar.” She explains that other backup sources of electricity, such as batteries, are even more costly.

    The biomass co-firing scenario assumes the same capacity limit on renewables as in the baseline scenario, and the results are much the same, in part because the biomass replaces such a low fraction — just 20 percent — of the coal in the fuel feedstock. “This scenario would be most similar to the current situation in India,” says Ding. “It won’t bring down the cost of electricity, so we’re basically saying that adding this technology doesn’t contribute effectively to decarbonization.”

    But CCS plus biomass co-firing is a different story. It also assumes the limits on renewables development, yet it is the second-best option in terms of reducing costs. Under the 500 Mt cap on CO2 emissions, retrofitting for both CCS and biomass co-firing produces a 22 percent reduction in the cost of electricity compared to the baseline scenario. In addition, as the carbon cap tightens, this option reduces the extent of deployment of natural gas plants and significantly improves overall coal plant utilization. That increased utilization “means that coal plants have switched from just meeting the peak demand to supplying part of the baseline load, which will lower the cost of coal generation,” explains Ding.

    Some concerns

    While those trends are enlightening, the analyses also uncovered some concerns for India to consider, in particular with the two approaches that yielded the lowest electricity costs.

    The high renewables scenario is, Ding notes, “very ideal.” It assumes that there will be little limiting the development of wind and solar capacity, so there won’t be any issues with supply chains, which is unrealistic. More importantly, the analyses showed that implementing the high renewables approach would create uneven investment in renewables across the 30 regions. Resources for onshore and offshore wind farms are mainly concentrated in a few regions in western and southern India. “So all the wind farms would be put in those regions, near where the rich cities are,” says Ding. “The poorer cities on the eastern side, where the coal power plants are, will have little renewable investment.”

    So the approach that’s best in terms of cost is not best in terms of social welfare, because it tends to benefit the rich regions more than the poor ones. “It’s like [the government will] need to consider the trade-off between energy justice and cost,” says Ding. Enacting state-level renewable generation targets could encourage a more even distribution of renewable capacity installation. Also, as transmission expansion is planned, coordination among power system operators and renewable energy investors in different regions could help in achieving the best outcome.

    CCS plus biomass co-firing — the second-best option for reducing prices — solves the equity problem posed by high renewables, and it assumes a more realistic level of renewable power adoption. However, CCS hasn’t been used in India, so there is no precedent in terms of costs. The researchers therefore based their cost estimates on the cost of CCS in China and then increased the required investment by 10 percent, the “first-of-a-kind” index developed by the U.S. Energy Information Administration. Based on those costs and other assumptions, the researchers conclude that coal plants with CCS could come into use by 2035 when the carbon cap for power generation is less than 1,000 Mt.

    But will CCS actually be implemented in India? While there’s been discussion about using CCS in heavy industry, the Indian government has not announced any plans for implementing the technology in coal-fired power plants. Indeed, India is currently “very conservative about CCS,” says Ding.

    “Some researchers say CCS won’t happen because it’s so expensive, and as long as there’s no direct use for the captured carbon, the only thing you can do is put it in the ground.” She adds, “It’s really controversial to talk about whether CCS will be implemented in India in the next 10 years.”

    Ding and her colleagues hope that other researchers and policymakers — especially those working in developing countries — may benefit from gaining access to their datasets and learning about their methods. Based on their findings for India, she stresses the importance of understanding the detailed geographical situation in a country in order to design plans and policies that are both realistic and equitable.

  • SLB joins the MIT.nano Consortium

    SLB, a global company creating technology to address the world’s energy challenges, has joined the MIT.nano Consortium.

    The MIT.nano Consortium is a platform for academia-industry collaboration, fostering research and innovation in nanoscale science and engineering.

    “The addition of SLB to the MIT.nano Consortium represents a powerful synergy between academic innovation and leading industry,” says Vladimir Bulović, the founding faculty director of MIT.nano and the Fariborz Maseeh (1990) Professor of Emerging Technology at MIT. “SLB’s expertise in developing energy technologies and its commitment to decarbonization aligns with MIT’s mission to address the many challenges of climate change. Their addition to the consortium, and the collaborations that will follow, will empower the MIT.nano community to advance critical research in this domain.”

    For 100 years, SLB has developed strategies and systems to unlock access to energy beneath the Earth’s surface. The company’s founder, Conrad Schlumberger, conceived the idea of using electrical measurements to map subsurface rock bodies back in 1912. Since then, SLB has continued to open new fronts in energy exploration — innovating in oil and gas, scaling new technologies, and designing digital solutions. Applying decades of innovation in science and engineering, SLB has committed to accelerating the decarbonization of the energy sector and supporting the global transition to low-carbon energy systems.

    With more than 900 facilities in over 120 countries, SLB adds to the global industry perspective of the MIT.nano Consortium and the broader MIT research community.

    “Taking a nanoscale approach to the scientific and technological challenges we face in the decarbonization domains is an endeavor that SLB is excited to embark on with MIT.nano,” says Smaine Zeroug, SLB research director and ambassador to MIT. “We are confident our engagement with MIT.nano and the extensive research network they offer access to will ultimately lead to field-viable solutions.”

    SLB has a longstanding relationship with MIT. The company, formerly named Schlumberger, donated specialized software to the MIT Seismic Visualization Laboratory in 1999 to enable MIT researchers and students to use three-dimensional seismic data in their studies of the Earth’s upper crust. SLB is also a current member of the MIT CSAIL Alliances.

    As a member of the MIT.nano Consortium, SLB will gain unparalleled access to MIT.nano’s dynamic user community, providing opportunities to share expertise and guide advances in nanoscale technology.

    MIT.nano continues to welcome new companies as sustaining members. For details, and to see a list of current members, visit the MIT.nano Consortium page.

  • The MIT-Portugal Program enters Phase 4

    Since its founding 19 years ago as a pioneering collaboration with Portuguese universities, research institutions, and corporations, the MIT-Portugal Program (MPP) has achieved a slew of successes — from enabling 47 entrepreneurial spinoffs and funding over 220 joint projects between MIT and Portuguese researchers to training a generation of exceptional researchers on both sides of the Atlantic.

    In March, with nearly two decades of collaboration under their belts, MIT and the Portuguese Science and Technology Foundation (FCT) signed an agreement that officially launches the program’s next chapter. Running through 2030, MPP’s Phase 4 will support continued exploration of innovative ideas and solutions in fields ranging from artificial intelligence and nanotechnology to climate change — both on the MIT campus and with partners throughout Portugal.

    “One of the advantages of having a program that has gone on so long is that we are pretty well familiar with each other at this point. Over the years, we’ve learned each other’s systems, strengths, and weaknesses, and we’ve been able to create a synergy that would not have existed if we had worked together for only a short period of time,” says Douglas Hart, MIT mechanical engineering professor and MPP co-director.

    Hart and John Hansman, the T. Wilson Professor of Aeronautics and Astronautics at MIT and MPP co-director, are eager to take the program’s existing research projects further while adding new areas of focus identified by MIT and FCT. Known as the Fundação para a Ciência e Tecnologia in Portugal, FCT is the national public agency supporting research in science, technology, and innovation under Portugal’s Ministry of Education, Science and Innovation.

    “Over the past two decades, the partnership with MIT has built a foundation of trust that has fostered collaboration among researchers and the development of projects with significant scientific impact and contributions to the Portuguese economy,” says Fernando Alexandre, Portugal’s minister for education, science, and innovation. “In this new phase of the partnership, running from 2025 to 2030, we expect even greater ambition and impact — raising Portuguese science and its capacity to transform the economy and improve our society to even higher levels, while helping to address the challenges we face in areas such as climate change and the oceans, digitalization, and space.”

    “International collaborations like the MIT-Portugal Program are absolutely vital to MIT’s mission of research, education and service. I’m thrilled to see the program move into its next phase,” says MIT President Sally Kornbluth. “MPP offers our faculty and students opportunities to work in unique research environments where they not only make new findings and learn new methods but also contribute to solving urgent local and global problems. MPP’s work in the realm of ocean science and climate is a prime example of how international partnerships like this can help solve important human problems.”

    Sharing MIT’s commitment to academic independence and excellence, Kornbluth adds, “the institutions and researchers we partner with through MPP enhance MIT’s ability to achieve its mission, enabling us to pursue the exacting standards of intellectual and creative distinction that make MIT a cradle of innovation and world leader in scientific discovery.”

    The epitome of an effective international collaboration, MPP has stayed true to its mission and continued to deliver results in the U.S. and in Portugal for nearly two decades — prevailing amid myriad shifts in the political, social, and economic landscape. The multifaceted program encompasses an annual research conference and educational summits, such as an Innovation Workshop at MIT each June and a Marine Robotics Summer School in the Azores in July, as well as student and faculty exchanges that facilitate collaborative research. During the third phase of the program alone, 59 MIT students and 53 faculty and researchers visited Portugal, and MIT hosted 131 students and 49 faculty and researchers from Portuguese universities and other institutions.

    In each roughly five-year phase, MPP researchers focus on a handful of core research areas. In Phase 3, MPP advanced cutting-edge research in four strategic areas: climate science and climate change; Earth systems, from the oceans to near space; digital transformation in manufacturing; and sustainable cities. Within these broad areas, MIT and FCT researchers worked together on numerous small-scale projects and several large “flagship” ones, including development of Portugal’s CubeSat satellite, a collaboration between MPP and several Portuguese universities and companies that marked the country’s second satellite launch and its first in 30 years.

    While work in the Phase 3 fields will continue during Phase 4, researchers will also turn their attention to four more areas: chips/nanotechnology, energy (a previous focus in Phase 2), artificial intelligence, and space.

    “We are opening up the aperture for additional collaboration areas,” Hansman says.

    In addition to focusing on distinct subject areas, each phase has emphasized the various parts of MPP’s mission to differing degrees. While Phase 3 accentuated collaborative research more than educational exchanges and entrepreneurship, those two aspects will be given more weight under the Phase 4 agreement, Hart says.

    “We have approval in Phase 4 to bring a number of Portuguese students over, and our principal investigators will benefit from close collaborations with Portuguese researchers,” he says.

    The longevity of MPP and the recent launch of Phase 4 are evidence of the program’s value. The program has also played a role in the educational, technological, and economic progress Portugal has achieved over the past two decades.

    “The Portugal of today is remarkably stronger than the Portugal of 20 years ago, and many of the places where they are stronger have been impacted by the program,” says Hansman, pointing to sustainable cities and “green” energy in particular. “We can’t take direct credit, but we’ve been part of Portugal’s journey forward.”

    Since MPP began, Hart adds, “Portugal has become much more entrepreneurial. Many, many, many more start-up companies are coming out of Portuguese universities than there used to be.”

    A recent analysis of MPP and FCT’s other U.S. collaborations highlighted a number of positive outcomes. The report noted that collaborations with MIT and other U.S. universities have enhanced Portuguese research capacities and promoted organizational upgrades in the national R&D ecosystem, while providing Portuguese universities and companies with opportunities to engage in complex projects that would have been difficult to undertake on their own.

    Regarding MIT in particular, the report found that MPP’s long-term collaboration has spawned the establishment of sustained doctoral programs and pointed to a marked shift within Portugal’s educational ecosystem toward globally aligned standards.

    MPP, it reported, has facilitated the education of 198 Portuguese PhDs.

    Portugal’s universities, students, and companies are not alone in benefiting from the research, networks, and economic activity MPP has spawned. MPP also delivers unique value to MIT, as well as to the broader U.S. science and research community. Among the program’s consistent themes over the years, for example, is a “joint interest in the Atlantic,” Hansman says.

    This summer, Faial Island in the Azores will host MPP’s fifth annual Marine Robotics Summer School, a two-week course open to 12 Portuguese master’s and first-year PhD students and 12 MIT upper-level undergraduates and graduate students. The course, which includes lectures by MIT and Portuguese faculty and other researchers, workshops, labs, and hands-on experiences, “is always my favorite,” says Hart.

    “I get to work with some of the best researchers in the world there, and some of the top students coming out of Woods Hole Oceanographic Institution, MIT, and Portugal,” he says, adding that some of his previous Marine Robotics Summer School students have come to study at MIT and then gone on to become professors in ocean science.

    “So, it’s been exciting to see the growth of students coming out of that program, certainly a positive impact,” Hart says.

    MPP provides one-of-a-kind opportunities for ocean research thanks to the unique marine facilities available in Portugal, including not only the open ocean off the Azores but also Lisbon’s deep-water port and a Portuguese naval facility just south of Lisbon that is available for collaborative research by international scientists. Like MIT, Portuguese universities are also strongly invested in climate change research — a field of study keenly related to ocean systems.

    “The international collaboration has allowed us to test and further develop our research prototypes in different aquaculture environments both in the U.S. and in Portugal, while building on the unique expertise of our Portuguese faculty collaborator Dr. Ricardo Calado from the University of Aveiro and our industry collaborators,” says Stefanie Mueller, the TIBCO Career Development Associate Professor in MIT’s departments of Electrical Engineering and Computer Science and Mechanical Engineering and leader of the Human-Computer Interaction Group at the MIT Computer Science and Artificial Intelligence Laboratory.

    Mueller points to the work of MIT mechanical engineering PhD student Charlene Xia, a Marine Robotics Summer School participant, whose research aims to develop an economical system to monitor the microbiome of seaweed farms and halt the spread of harmful bacteria associated with ocean warming. In addition to participating in the summer school as a student, Xia returned to the Azores for two subsequent years as a teaching assistant.

    “The MIT-Portugal Program has been a key enabler of our research on monitoring the aquatic microbiome for potential disease outbreaks,” Mueller says.

    As MPP enters its next phase, Hart and Hansman are optimistic about the program’s continuing success on both sides of the Atlantic and envision broadening its impact going forward.

    “I think, at this point, the research is going really well, and we’ve got a lot of connections. I think one of our goals is to expand not the science of the program necessarily, but the groups involved,” Hart says, noting that MPP could have a bigger presence in technical fields such as AI and micro-nano manufacturing, as well as in the social sciences and humanities.

    “We’d like to involve many more people and new people here at MIT, as well as in Portugal,” he says, “so that we can reach a larger slice of the population.”

  • Hundred-year storm tides will occur every few decades in Bangladesh, scientists report

    Tropical cyclones are hurricanes that brew over the tropical ocean and can travel over land, inundating coastal regions. The most extreme cyclones can generate devastating storm tides — seawater that is heightened by the tides and swells onto land, causing catastrophic flood events in coastal regions. A new study by MIT scientists finds that, as the planet warms, the recurrence of destructive storm tides will increase tenfold for one of the hardest-hit regions of the world.

    In a study appearing today in One Earth, the scientists report that, for the highly populated coastal country of Bangladesh, what was once a 100-year event could strike every 10 years — or more often — by the end of the century. In a future where fossil fuels continue to burn as they do today, what was once considered a catastrophic, once-in-a-century storm tide will hit Bangladesh, on average, once per decade. And the kind of storm tides that have occurred every decade or so will likely batter the country’s coast more frequently, every few years.

    Bangladesh is one of the most densely populated countries in the world, with more than 171 million people living in a region roughly the size of New York state. The country has been historically vulnerable to tropical cyclones, as it is a low-lying delta that is easily flooded by storms and experiences a seasonal monsoon. Some of the most destructive floods in the world have occurred in Bangladesh, where it’s been increasingly difficult for agricultural economies to recover.

    The study also finds that Bangladesh will likely experience tropical cyclones that overlap with the months-long monsoon season. Until now, cyclones and the monsoon have occurred at separate times during the year. But as the planet warms, the scientists’ modeling shows that cyclones will push into the monsoon season, causing back-to-back flooding events across the country.

    “Bangladesh is very active in preparing for climate hazards and risks, but the problem is, everything they’re doing is more or less based on what they’re seeing in the present climate,” says study co-author Sai Ravela, principal research scientist in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “We are now seeing an almost tenfold rise in the recurrence of destructive storm tides almost anywhere you look in Bangladesh. This cannot be ignored. So, we think this is timely, to say they have to pause and revisit how they protect against these storms.”

    Ravela’s co-authors are Jiangchao Qiu, a postdoc in EAPS, and Kerry Emanuel, professor emeritus of atmospheric science at MIT.

    Height of tides

    In recent years, Bangladesh has invested significantly in storm preparedness, for instance in improving its early-warning system, fortifying village embankments, and increasing access to community shelters.

    But such preparations have generally been based on the current frequency of storms.

    In this new study, the MIT team aimed to provide detailed projections of extreme storm tide hazards — flooding events in which tidal effects amplify cyclone-induced storm surge — in Bangladesh under various climate-warming scenarios and sea-level-rise projections.

    “A lot of these events happen at night, so tides play a really strong role in how much additional water you might get, depending on what the tide is,” Ravela explains.

    To evaluate the risk of storm tides, the team first applied a method of physics-based downscaling, which Emanuel’s group first developed over 20 years ago and has used since to study hurricane activity in different parts of the world. The technique involves a low-resolution model of the global ocean and atmosphere that is embedded with a finer-resolution model that simulates weather patterns as detailed as a single hurricane. The researchers then scatter hurricane “seeds” in a region of interest and run the model forward to observe which seeds grow and make landfall over time.

    To the downscaled model, the researchers added a hydrodynamical model, which simulates the height of a storm surge given the pattern and strength of winds at the time of a given storm. For any given simulated storm, the team also tracked the tides, as well as the effects of sea-level rise, and incorporated this information into a numerical model that calculated the storm tide, or the height of the water with tidal effects, as the storm makes landfall.

    Extreme overlap

    With this framework, the scientists simulated tens of thousands of potential tropical cyclones near Bangladesh under several future climate scenarios, ranging from one that resembles the current day to one in which the world experiences further warming as a result of continued fossil-fuel burning. For each simulation, they recorded the maximum storm tides along the coast of Bangladesh and noted the frequency of storm tides of various heights in a given climate scenario.

    “We can look at the entire bucket of simulations and see, for this storm tide of, say, 3 meters, we saw this many storms, and from that you can figure out the relative frequency of that kind of storm,” Qiu says. “You can then invert that number to a return period.”

    A return period is the average time it takes for a storm of a particular type to make landfall again. A storm that is considered a “100-year event” is typically more powerful and destructive, and in this case creates more extreme storm tides, and therefore more catastrophic flooding, than a 10-year event.
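
    The return-period arithmetic Qiu describes is simple enough to sketch (with invented numbers standing in for the simulated storm set): count how many simulated years exceed a given storm tide height, then divide the total number of simulated years by that count.

    ```python
    # Minimal illustration of exceedance counts -> return periods (assumed data).
    import numpy as np

    rng = np.random.default_rng(1)
    n_years = 10_000                            # total simulated model years
    max_tide_m = rng.gumbel(1.0, 0.6, n_years)  # hypothetical annual-max storm tides (m)

    def return_period(threshold_m: float) -> float:
        """Average years between storm tides exceeding `threshold_m`."""
        exceedances = np.count_nonzero(max_tide_m >= threshold_m)
        return n_years / exceedances if exceedances else float("inf")

    print(f"3 m storm tide: once every {return_period(3.0):.0f} years")
    ```
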
    From their modeling, Ravela and his colleagues found that, under a scenario of increased global warming, the storms previously considered 100-year events — those producing the highest storm tide values — could recur every decade or less by late-century. They also observed that, toward the end of this century, tropical cyclones in Bangladesh will occur across a broader seasonal window, potentially overlapping in certain years with the monsoon season.

    “If the monsoon rain has come in and saturated the soil, a cyclone then comes in and it makes the problem much worse,” Ravela says. “People won’t have any reprieve between the extreme storm and the monsoon. There are so many compound and cascading effects between the two. And this only emerges because warming happens.”

    Ravela and his colleagues are using their modeling to help experts in Bangladesh better evaluate and prepare for a future of increasing storm risk. And he says that the climate future for Bangladesh is in some ways not unique to this part of the world.

    “This climate change story that is playing out in Bangladesh in a certain way will be playing out in a different way elsewhere,” Ravela notes. “Maybe where you are, the story is about heat stress, or amplifying droughts, or wildfires. The peril is different. But the underlying catastrophe story is not that different.”

    This research is supported, in part, by the MIT Climate Resilience Early Warning Systems Climate Grand Challenges project; the Jameel Observatory JO-CREWSNet project; the MIT Weather and Climate Extremes Climate Grand Challenges project; and Schmidt Sciences, LLC.

  • Using liquid air for grid-scale energy storage

    As the world moves to reduce carbon emissions, solar and wind power will play an increasing role on electricity grids. But those renewable sources generate electricity only when it’s sunny or windy. So to ensure a reliable power grid — one that can deliver electricity 24/7 — it’s crucial to have a means of storing electricity when supplies are abundant and delivering it later, when they’re not. And sometimes large amounts of electricity will need to be stored not just for hours, but for days, or even longer.

    Some methods of achieving “long-duration energy storage” are promising. For example, with pumped hydro energy storage, water is pumped from one lake to another, higher lake when there’s extra electricity and released back down through power-generating turbines when more electricity is needed. But that approach is limited by geography, and most potential sites in the United States have already been used. Lithium-ion batteries could provide grid-scale storage, but only for about four hours. Longer than that, and battery systems get prohibitively expensive.

    A team of researchers from MIT and the Norwegian University of Science and Technology (NTNU) has been investigating a less-familiar option based on an unlikely-sounding concept: liquid air, or air that is drawn in from the surroundings, cleaned and dried, and then cooled to the point that it liquefies. “Liquid air energy storage” (LAES) systems have been built, so the technology is technically feasible. Moreover, LAES systems are totally clean and can be sited nearly anywhere, storing vast amounts of electricity for days or longer and delivering it when it’s needed. But there haven’t been conclusive studies of the technology’s economic viability. Would the income over time warrant the initial investment and ongoing costs? With funding from the MIT Energy Initiative’s Future Energy Systems Center, the researchers developed a model that takes detailed information on LAES systems and calculates when and where those systems would be economically viable, assuming future scenarios in line with selected decarbonization targets as well as other conditions that may prevail on future energy grids.

    They found that under some of the scenarios they modeled, LAES could be economically viable in certain locations. Sensitivity analyses showed that policies providing a subsidy on capital expenses could make LAES systems economically viable in many locations. Further calculations showed that the cost of storing a given amount of electricity with LAES would be lower than with more familiar systems such as pumped hydro and lithium-ion batteries. They conclude that LAES holds promise as a means of providing critically needed long-duration storage when future power grids are decarbonized and dominated by intermittent renewable sources of electricity.

    The researchers — Shaylin A. Cetegen, a PhD candidate in the MIT Department of Chemical Engineering (ChemE); Professor Emeritus Truls Gundersen of the NTNU Department of Energy and Process Engineering; and MIT Professor Emeritus Paul I. Barton of ChemE — describe their model and their findings in a new paper published in the journal Energy.

    The LAES technology and its benefits

    LAES systems consist of three steps: charging, storing, and discharging. When supply on the grid exceeds demand and prices are low, the LAES system is charged: air is drawn in and liquefied. A large amount of electricity is consumed to cool and liquefy the air in the LAES process.

    The liquid air is then sent to highly insulated storage tanks, where it’s held at a very low temperature and atmospheric pressure. When the power grid needs added electricity to meet demand, the liquid air is first pumped to a higher pressure and then heated, turning it back into a gas. This high-pressure, high-temperature, vapor-phase air expands in a turbine that generates electricity to be sent back to the grid.

    According to Cetegen, a primary advantage of LAES is that it’s clean. “There are no contaminants involved,” she says. “It takes in and releases only ambient air and electricity, so it’s as clean as the electricity that’s used to run it.” In addition, a LAES system can be built largely from commercially available components and does not rely on expensive or rare materials. And the system can be sited almost anywhere, including near other industrial processes that produce waste heat or cold that the LAES system can use to increase its energy efficiency.

    Economic viability

    In considering the potential role of LAES on future power grids, the first question is: Will LAES systems be attractive to investors? Answering that question requires calculating the technology’s net present value (NPV), which represents the sum of all discounted cash flows — including revenues, capital expenditures, operating costs, and other financial factors — over the project’s lifetime. (The study assumed a cash flow discount rate of 7 percent.)
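
    The NPV calculation itself is compact. A sketch under the paper’s stated 7 percent discount rate, with entirely placeholder cash flows, looks like this:

    ```python
    # Minimal NPV sketch: discount each year's net cash flow and sum.
    def npv(cash_flows, rate=0.07):
        """Net present value; cash_flows[0] is the year-0 capital outlay."""
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

    # Assumed: $150M capex, then $12M/yr net market revenue for 30 years.
    flows = [-150e6] + [12e6] * 30
    print(f"NPV: ${npv(flows) / 1e6:.1f}M")  # negative NPV => unattractive to investors
    ```
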
    To calculate the NPV, the researchers needed to determine how LAES systems will perform in future energy markets. In those markets, various sources of electricity are brought online to meet the current demand, typically following a process called “economic dispatch”: the lowest-cost source that’s available is always deployed next. Determining the NPV of liquid air storage therefore requires predicting how the technology will fare in future markets competing with other sources of electricity when demand exceeds supply — and also accounting for prices when supply exceeds demand, so excess electricity is available to recharge the LAES systems.

    For their study, the MIT and NTNU researchers designed a model that starts with a description of an LAES system, including details such as the sizes of the units where the air is liquefied and the power is recovered, and also capital expenses based on estimates reported in the literature. The model then draws on state-of-the-art pricing data that’s released every year by the National Renewable Energy Laboratory (NREL) and is widely used by energy modelers worldwide. The NREL dataset forecasts prices, construction and retirement of specific types of electricity generation and storage facilities, and more, assuming eight decarbonization scenarios for 18 regions of the United States out to 2050.

    The new model then tracks buying and selling in energy markets for every hour of every day in a year, repeating the same schedule for five-year intervals. Based on the NREL dataset and details of the LAES system — plus constraints such as the system’s physical storage capacity and how often it can switch between charging and discharging — the model calculates how much money LAES operators would make selling power to the grid when it’s needed and how much they would spend buying electricity when it’s available to recharge their LAES system. In line with the NREL dataset, the model generates results for 18 U.S. regions and eight decarbonization scenarios, including 100 percent decarbonization by 2035 and 95 percent decarbonization by 2050, and other assumptions about future energy grids, including high demand growth plus high and low costs for renewable energy and for natural gas.

    Cetegen describes some of their results: “Assuming a 100-megawatt (MW) system — a standard sort of size — we saw economic viability pop up under the decarbonization scenario calling for 100 percent decarbonization by 2035.” So positive NPVs (indicating economic viability) occurred only under the most aggressive — and therefore least realistic — scenario, and they occurred in only a few southern states, including Texas and Florida, likely because of how those energy markets are structured and operate.

    The researchers also tested the sensitivity of NPVs to different storage capacities, that is, how long the system could continuously deliver power to the grid. They calculated the NPVs of a 100 MW system that could provide electricity supply for one day, one week, and one month. “That analysis showed that under aggressive decarbonization, weekly storage is more economically viable than monthly storage, because [in the latter case] we’re paying for more storage capacity than we need,” explains Cetegen.

    Improving the NPV of the LAES system

    The researchers next analyzed two possible ways to improve the NPV of liquid air storage: by increasing the system’s energy efficiency and by providing financial incentives. Their analyses showed that increasing the energy efficiency, even up to the theoretical limit of the process, would not change the economic viability of LAES under the most realistic decarbonization scenarios. On the other hand, a major improvement resulted when they assumed policies providing subsidies on capital expenditures for new installations. Indeed, assuming subsidies of between 40 percent and 60 percent made the NPVs for a 100 MW system positive under all the realistic scenarios.

    Thus, their analysis showed that financial incentives could be far more effective than technical improvements in making LAES economically viable. While engineers may find that outcome disappointing, Cetegen notes that from a broader perspective, it’s good news. “You could spend your whole life trying to optimize the efficiency of this process, and it wouldn’t translate to securing the investment needed to scale the technology,” she says. “Policies can take a long time to implement as well. But theoretically you could do it overnight. So if storage is needed [on a future decarbonized grid], then this is one way to encourage adoption of LAES right away.”

    Cost comparison with other energy storage technologies

    Calculating the economic viability of a storage technology is highly dependent on the assumptions used. As a result, a different measure — the “levelized cost of storage” (LCOS) — is typically used to compare the costs of different storage technologies. In simple terms, the LCOS is the cost of storing each unit of energy over the lifetime of a project, not accounting for any income that results.
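
    A simplified version of that calculation, with every input an assumption rather than a figure from the paper, divides lifetime discounted costs by lifetime discounted energy delivered:

    ```python
    # Simplified LCOS sketch: discounted lifetime cost per discounted MWh delivered.
    def lcos(capex, annual_opex, annual_mwh, years=30, rate=0.07):
        disc = [(1 + rate) ** -t for t in range(1, years + 1)]
        total_cost = capex + sum(annual_opex * d for d in disc)
        total_mwh = sum(annual_mwh * d for d in disc)
        return total_cost / total_mwh  # $/MWh

    # Illustrative inputs for a hypothetical 100 MW, week-scale LAES plant.
    print(f"LCOS: ${lcos(capex=150e6, annual_opex=4e6, annual_mwh=250_000):.0f}/MWh")
    ```
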
The standard practice of reporting a single LCOS for a given energy storage technology therefore may not provide the full picture.

Cetegen has since adapted the model and is now calculating the NPV and LCOS of energy storage using lithium-ion batteries. But she’s already encouraged by the LCOS of liquid air storage. “While LAES systems may not be economically viable from an investment perspective today, that doesn’t mean they won’t be implemented in the future,” she concludes. “With limited options for grid-scale storage expansion and the growing need for storage technologies to ensure energy security, if we can’t find economically viable alternatives, we’ll likely have to turn to least-cost solutions to meet storage needs. This is why the story of liquid air storage is far from over. We believe our findings justify the continued exploration of LAES as a key energy storage solution for the future.”

  • in

    Study: Burning heavy fuel oil with scrubbers is the best available option for bulk maritime shipping

When the International Maritime Organization enacted a mandatory cap on the sulfur content of marine fuels in 2020, with an eye toward reducing harmful environmental and health impacts, it left shipping companies with three main options.

They could burn low-sulfur fossil fuels, like marine gas oil, or install cleaning systems to remove sulfur from the exhaust gas produced by burning heavy fuel oil. Biofuels with lower sulfur content offer another alternative, though their limited availability makes them a less feasible option.

While installing exhaust gas cleaning systems, known as scrubbers, is the most feasible and cost-effective option, there has been a great deal of uncertainty among firms, policymakers, and scientists as to how “green” these scrubbers are.

Through a novel lifecycle assessment, researchers from MIT, Georgia Tech, and elsewhere have now found that burning heavy fuel oil with scrubbers in the open ocean can match or surpass using low-sulfur fuels, when a wide variety of environmental factors is considered. The scientists combined data on the production and operation of scrubbers and fuels with emissions measurements taken onboard an oceangoing cargo ship. They found that, when the entire supply chain is considered, burning heavy fuel oil with scrubbers was the least harmful option in terms of nearly all 10 environmental impact factors they studied, such as greenhouse gas emissions, terrestrial acidification, and ozone formation.

“In our collaboration with Oldendorff Carriers to broadly explore reducing the environmental impact of shipping, this study of scrubbers turned out to be an unexpectedly deep and important transitional issue,” says Neil Gershenfeld, an MIT professor, director of the Center for Bits and Atoms (CBA), and senior author of the study.

“Claims about environmental hazards and policies to mitigate them should be backed by science. You need to see the data, be objective, and design studies that take into account the full picture to be able to compare different options from an apples-to-apples perspective,” adds lead author Patricia Stathatou, an assistant professor at Georgia Tech, who began this study as a postdoc in the CBA.

Stathatou is joined on the paper by Michael Triantafyllou, the Henry L. and Grace Doherty Professor in Ocean Science and Engineering at MIT, as well as others at the National Technical University of Athens in Greece and the maritime shipping firm Oldendorff Carriers. The research appears today in Environmental Science and Technology.

Slashing sulfur emissions

Heavy fuel oil, traditionally burned by the bulk carriers that make up about 30 percent of the global maritime fleet, usually has a sulfur content of around 2 to 3 percent. This is far higher than the International Maritime Organization’s 2020 cap of 0.5 percent in most areas of the ocean and 0.1 percent in areas near population centers or environmentally sensitive regions. Sulfur oxide emissions contribute to air pollution and acid rain, and can damage the human respiratory system.

In 2018, fewer than 1,000 vessels employed scrubbers. After the cap went into place, higher prices of low-sulfur fossil fuels and limited availability of alternative fuels led many firms to install scrubbers so they could keep burning heavy fuel oil. Today, more than 5,800 vessels utilize scrubbers, the majority of which are wet, open-loop scrubbers.
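A quick back-of-the-envelope check (mine, not the paper’s) shows why scrubbing lets a ship keep burning heavy fuel oil under the cap. Applying the roughly 97 percent sulfur dioxide removal the researchers report later in the article to a typical 2.5 percent sulfur fuel gives an effective sulfur level well under the 0.5 percent global limit:

```python
# Illustrative arithmetic only; the percentages come from the article,
# but treating SO2 removal as equivalent to lowering the fuel's sulfur
# content is my simplification.
fuel_sulfur = 0.025        # typical 2.5% sulfur heavy fuel oil
removal = 0.97             # SO2 removal efficiency measured onboard
effective = fuel_sulfur * (1 - removal)
print(f"effective sulfur: {effective:.4%}")   # 0.0750%
assert effective < 0.005   # under the IMO's 0.5% global cap
```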
“Scrubbers are a very mature technology. They have traditionally been used for decades in land-based applications like power plants to remove pollutants,” Stathatou says.

A wet, open-loop marine scrubber is a huge, metal, vertical tank installed in a ship’s exhaust stack, above the engines. Inside, seawater drawn from the ocean is sprayed downward through a series of nozzles to wash the hot exhaust gases as they exit the engines. The seawater interacts with sulfur dioxide in the exhaust, converting it to sulfates — water-soluble, environmentally benign compounds that naturally occur in seawater. The washwater is released back into the ocean, while the cleaned exhaust escapes to the atmosphere with little to no sulfur dioxide emissions.
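The underlying process is standard wet-scrubbing chemistry; the reactions below are a textbook summary of seawater scrubbing (my gloss, not equations from the paper): absorbed sulfur dioxide forms bisulfite, which oxidizes to sulfate, while seawater’s natural bicarbonate alkalinity partially neutralizes the acid produced.

```latex
\begin{align*}
\mathrm{SO_2(g) + H_2O} &\;\rightleftharpoons\; \mathrm{H^+ + HSO_3^-}
  && \text{absorption into the washwater} \\
\mathrm{HSO_3^- + \tfrac{1}{2}O_2} &\;\longrightarrow\; \mathrm{SO_4^{2-} + H^+}
  && \text{oxidation to sulfate} \\
\mathrm{H^+ + HCO_3^-} &\;\longrightarrow\; \mathrm{CO_2 + H_2O}
  && \text{buffering by seawater alkalinity}
\end{align*}
```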
But the acidic washwater can contain other combustion byproducts, like heavy metals, so scientists wondered whether scrubbers were comparable, from a holistic environmental point of view, to burning low-sulfur fuels. Several studies had explored washwater toxicity and fuel-related pollution, but none painted a full picture. The researchers set out to fill that scientific gap.

A “well-to-wake” analysis

The team conducted a lifecycle assessment using a global environmental database on the production and transport of fossil fuels such as heavy fuel oil, marine gas oil, and very-low-sulfur fuel oil. Considering the entire lifecycle of each fuel is key, since producing low-sulfur fuel requires extra processing steps in the refinery, causing additional emissions of greenhouse gases and particulate matter. “If we just look at everything that happens before the fuel is bunkered onboard the vessel, heavy fuel oil is significantly more low-impact, environmentally, than low-sulfur fuels,” Stathatou says.

The researchers also collaborated with a scrubber manufacturer to obtain detailed information on all the materials, production processes, and transportation steps involved in marine scrubber fabrication and installation. “If you consider that the scrubber has a lifetime of about 20 years, the environmental impacts of producing the scrubber over its lifetime are negligible compared to producing heavy fuel oil,” she adds.

For the final piece, Stathatou spent a week onboard a bulk carrier vessel in China to measure emissions and gather seawater and washwater samples. The ship burned heavy fuel oil with a scrubber and low-sulfur fuels under similar ocean conditions and engine settings. Collecting these onboard data was the most challenging part of the study. “All the safety gear, combined with the heat and the noise from the engines on a moving ship, was very overwhelming,” she says.

Their results showed that scrubbers reduce sulfur dioxide emissions by 97 percent, putting heavy fuel oil on par with low-sulfur fuels by that measure. The researchers saw similar trends for emissions of other pollutants, like carbon monoxide and nitrous oxide.

In addition, they tested washwater samples for more than 60 chemical parameters, including nitrogen, phosphorus, polycyclic aromatic hydrocarbons, and 23 metals. The concentrations of chemicals regulated by the IMO were far below the organization’s requirements. For unregulated chemicals, the researchers compared the concentrations to the strictest limits for industrial effluents from the U.S. Environmental Protection Agency and the European Union. Most chemical concentrations were at least an order of magnitude below these requirements. Moreover, since washwater is diluted thousands of times as it is dispersed by a moving vessel, the concentrations of such chemicals would be even lower in the open ocean.

These findings suggest that using scrubbers with heavy fuel oil can be considered equal to or more environmentally friendly than using low-sulfur fuels across many of the impact categories the researchers studied.

“This study demonstrates the scientific complexity of the waste stream of scrubbers. Having finally conducted a multiyear, comprehensive, and peer-reviewed study, commonly held fears and assumptions are now put to rest,” says Scott Bergeron, managing director at Oldendorff Carriers and a co-author of the study.

“This first-of-its-kind study on a well-to-wake basis provides very valuable input to ongoing discussion at the IMO,” adds Thomas Klenum, executive vice president of innovation and regulatory affairs at the Liberian Registry, emphasizing the need “for regulatory decisions to be made based on scientific studies providing factual data and conclusions.”

Ultimately, the study shows the importance of incorporating lifecycle assessments into future environmental impact reduction policies, Stathatou says. “There is all this discussion about switching to alternative fuels in the future, but how green are these fuels? We must do our due diligence to compare them equally with existing solutions to see the costs and benefits,” she adds.

This study was supported, in part, by Oldendorff Carriers.