More stories

  • The MIT-Portugal Program enters Phase 4

    Since its founding 19 years ago as a pioneering collaboration with Portuguese universities, research institutions, and corporations, the MIT-Portugal Program (MPP) has achieved a slew of successes — from enabling 47 entrepreneurial spinoffs and funding over 220 joint projects between MIT and Portuguese researchers to training a generation of exceptional researchers on both sides of the Atlantic.

    In March, with nearly two decades of collaboration under their belts, MIT and the Portuguese Science and Technology Foundation (FCT) signed an agreement that officially launches the program’s next chapter. Running through 2030, MPP’s Phase 4 will support continued exploration of innovative ideas and solutions in fields ranging from artificial intelligence and nanotechnology to climate change — both on the MIT campus and with partners throughout Portugal.

    “One of the advantages of having a program that has gone on so long is that we are pretty well familiar with each other at this point. Over the years, we’ve learned each other’s systems, strengths and weaknesses and we’ve been able to create a synergy that would not have existed if we worked together for a short period of time,” says Douglas Hart, MIT mechanical engineering professor and MPP co-director.

    Hart and John Hansman, the T. Wilson Professor of Aeronautics and Astronautics at MIT and MPP co-director, are eager to take the program’s existing research projects further while adding new areas of focus identified by MIT and FCT.
    Known as the Fundação para a Ciência e Tecnologia in Portugal, FCT is the national public agency supporting research in science, technology, and innovation under Portugal’s Ministry of Education, Science and Innovation.

    “Over the past two decades, the partnership with MIT has built a foundation of trust that has fostered collaboration among researchers and the development of projects with significant scientific impact and contributions to the Portuguese economy,” says Fernando Alexandre, Portugal’s minister for education, science, and innovation. “In this new phase of the partnership, running from 2025 to 2030, we expect even greater ambition and impact — raising Portuguese science and its capacity to transform the economy and improve our society to even higher levels, while helping to address the challenges we face in areas such as climate change and the oceans, digitalization, and space.”

    “International collaborations like the MIT-Portugal Program are absolutely vital to MIT’s mission of research, education and service. I’m thrilled to see the program move into its next phase,” says MIT President Sally Kornbluth. “MPP offers our faculty and students opportunities to work in unique research environments where they not only make new findings and learn new methods but also contribute to solving urgent local and global problems.
    MPP’s work in the realm of ocean science and climate is a prime example of how international partnerships like this can help solve important human problems.”

    Sharing MIT’s commitment to academic independence and excellence, Kornbluth adds, “the institutions and researchers we partner with through MPP enhance MIT’s ability to achieve its mission, enabling us to pursue the exacting standards of intellectual and creative distinction that make MIT a cradle of innovation and world leader in scientific discovery.”

    The epitome of an effective international collaboration, MPP has stayed true to its mission and continued to deliver results here in the U.S. and in Portugal for nearly two decades, prevailing amid myriad shifts in the political, social, and economic landscape. The multifaceted program encompasses an annual research conference and educational summits, such as an Innovation Workshop at MIT each June and a Marine Robotics Summer School in the Azores in July, as well as student and faculty exchanges that facilitate collaborative research. During the third phase of the program alone, 59 MIT students and 53 faculty and researchers visited Portugal, and MIT hosted 131 students and 49 faculty and researchers from Portuguese universities and other institutions.

    In each roughly five-year phase, MPP researchers focus on a handful of core research areas. For Phase 3, MPP advanced cutting-edge research in four strategic areas: climate science and climate change; Earth systems: oceans to near space; digital transformation in manufacturing; and sustainable cities.
    Within these broad areas, MIT and FCT researchers worked together on numerous small-scale projects and several large “flagship” ones, including development of Portugal’s CubeSat satellite, a collaboration between MPP and several Portuguese universities and companies that marked the country’s second satellite launch and its first in 30 years.

    While work in the Phase 3 fields will continue during Phase 4, researchers will also turn their attention to four more areas: chips/nanotechnology, energy (a previous focus in Phase 2), artificial intelligence, and space.

    “We are opening up the aperture for additional collaboration areas,” Hansman says.

    In addition to focusing on distinct subject areas, each phase has emphasized the various parts of MPP’s mission to differing degrees. While Phase 3 accentuated collaborative research more than educational exchanges and entrepreneurship, those two aspects will be given more weight under the Phase 4 agreement, Hart says.

    “We have approval in Phase 4 to bring a number of Portuguese students over, and our principal investigators will benefit from close collaborations with Portuguese researchers,” he says.

    The longevity of MPP and the recent launch of Phase 4 are evidence of the program’s value. The program has also played a role in the educational, technological, and economic progress Portugal has achieved over the past two decades.

    “The Portugal of today is remarkably stronger than the Portugal of 20 years ago, and many of the places where they are stronger have been impacted by the program,” says Hansman, pointing to sustainable cities and “green” energy in particular. “We can’t take direct credit, but we’ve been part of Portugal’s journey forward.”

    Since MPP began, Hart adds, “Portugal has become much more entrepreneurial. Many, many, many more start-up companies are coming out of Portuguese universities than there used to be.”

    A recent analysis of MPP and FCT’s other U.S. collaborations highlighted a number of positive outcomes.
    The report noted that collaborations with MIT and other U.S. universities have enhanced Portuguese research capacities and promoted organizational upgrades in the national R&D ecosystem, while providing Portuguese universities and companies with opportunities to engage in complex projects that would have been difficult to undertake on their own.

    Regarding MIT in particular, the report found that MPP’s long-term collaboration has spawned the establishment of sustained doctoral programs and pointed to a marked shift within Portugal’s educational ecosystem toward globally aligned standards. MPP, it reported, has facilitated the education of 198 Portuguese PhDs.

    Portugal’s universities, students, and companies are not alone in benefiting from the research, networks, and economic activity MPP has spawned. The program also delivers unique value to MIT and to the broader U.S. science and research community. Among the program’s consistent themes over the years, for example, is a “joint interest in the Atlantic,” Hansman says.

    This summer, Faial Island in the Azores will host MPP’s fifth annual Marine Robotics Summer School, a two-week course open to 12 Portuguese master’s and first-year PhD students and 12 MIT upper-level undergraduates and graduate students.
    The course, which includes lectures by MIT and Portuguese faculty and other researchers, workshops, labs, and hands-on experiences, “is always my favorite,” Hart says.

    “I get to work with some of the best researchers in the world there, and some of the top students coming out of Woods Hole Oceanographic Institution, MIT, and Portugal,” he says, adding that some of his previous Marine Robotics Summer School students have come to study at MIT and then gone on to become professors in ocean science.

    “So, it’s been exciting to see the growth of students coming out of that program, certainly a positive impact,” Hart says.

    MPP provides one-of-a-kind opportunities for ocean research thanks to the unique marine facilities available in Portugal, including not only the open ocean off the Azores but also Lisbon’s deep-water port and a Portuguese naval facility just south of Lisbon that is available for collaborative research by international scientists. Like MIT, Portuguese universities are strongly invested in climate change research, a field of study keenly related to ocean systems.

    “The international collaboration has allowed us to test and further develop our research prototypes in different aquaculture environments both in the U.S. and in Portugal, while building on the unique expertise of our Portuguese faculty collaborator Dr. Ricardo Calado from the University of Aveiro and our industry collaborators,” says Stefanie Mueller, the TIBCO Career Development Associate Professor in MIT’s departments of Electrical Engineering and Computer Science and Mechanical Engineering and leader of the Human-Computer Interaction Group at the MIT Computer Science and Artificial Intelligence Lab.

    Mueller points to the work of MIT mechanical engineering PhD student Charlene Xia, a Marine Robotics Summer School participant, whose research aims to develop an economical system to monitor the microbiome of seaweed farms and halt the spread of harmful bacteria associated with ocean warming.
    In addition to participating in the summer school as a student, Xia returned to the Azores for two subsequent years as a teaching assistant.

    “The MIT-Portugal Program has been a key enabler of our research on monitoring the aquatic microbiome for potential disease outbreaks,” Mueller says.

    As MPP enters its next phase, Hart and Hansman are optimistic about the program’s continuing success on both sides of the Atlantic and envision broadening its impact going forward.

    “I think, at this point, the research is going really well, and we’ve got a lot of connections. I think one of our goals is to expand not the science of the program necessarily, but the groups involved,” Hart says, noting that MPP could have a bigger presence in technical fields such as AI and micro-nano manufacturing, as well as in social sciences and humanities.

    “We’d like to involve many more people and new people here at MIT, as well as in Portugal,” he says, “so that we can reach a larger slice of the population.”

  • Hundred-year storm tides will occur every few decades in Bangladesh, scientists report

    Tropical cyclones are hurricanes that brew over the tropical ocean and can travel over land, inundating coastal regions. The most extreme cyclones can generate devastating storm tides — seawater that is heightened by the tides and swells onto land, causing catastrophic flood events in coastal regions. A new study by MIT scientists finds that, as the planet warms, the recurrence of destructive storm tides will increase tenfold for one of the hardest-hit regions of the world.

    In a study appearing today in One Earth, the scientists report that, for the highly populated coastal country of Bangladesh, what was once a 100-year event could strike every 10 years — or more often — by the end of the century. In a future where fossil fuels continue to burn as they do today, what was once considered a catastrophic, once-in-a-century storm tide will hit Bangladesh, on average, once per decade. And the kind of storm tides that have occurred every decade or so will likely batter the country’s coast more frequently, every few years.

    Bangladesh is one of the most densely populated countries in the world, with more than 171 million people living in a region roughly the size of New York state. The country has been historically vulnerable to tropical cyclones, as it is a low-lying delta that is easily flooded by storms and experiences a seasonal monsoon. Some of the most destructive floods in the world have occurred in Bangladesh, where it has been increasingly difficult for agricultural economies to recover.

    The study also finds that Bangladesh will likely experience tropical cyclones that overlap with the months-long monsoon season. Until now, cyclones and the monsoon have occurred at separate times during the year.
    But as the planet warms, the scientists’ modeling shows that cyclones will push into the monsoon season, causing back-to-back flooding events across the country.

    “Bangladesh is very active in preparing for climate hazards and risks, but the problem is, everything they’re doing is more or less based on what they’re seeing in the present climate,” says study co-author Sai Ravela, principal research scientist in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “We are now seeing an almost tenfold rise in the recurrence of destructive storm tides almost anywhere you look in Bangladesh. This cannot be ignored. So, we think this is timely, to say they have to pause and revisit how they protect against these storms.”

    Ravela’s co-authors are Jiangchao Qiu, a postdoc in EAPS, and Kerry Emanuel, professor emeritus of atmospheric science at MIT.

    Height of tides

    In recent years, Bangladesh has invested significantly in storm preparedness, for instance in improving its early-warning system, fortifying village embankments, and increasing access to community shelters. But such preparations have generally been based on the current frequency of storms.

    In this new study, the MIT team aimed to provide detailed projections of extreme storm tide hazards — flooding events in which tidal effects amplify cyclone-induced storm surge — in Bangladesh under various climate-warming scenarios and sea-level rise projections.

    “A lot of these events happen at night, so tides play a really strong role in how much additional water you might get, depending on what the tide is,” Ravela explains.

    To evaluate the risk of storm tides, the team first applied a method of physics-based downscaling, which Emanuel’s group first developed over 20 years ago and has been using since to study hurricane activity in different parts of the world.
    The technique involves a low-resolution model of the global ocean and atmosphere that is embedded with a finer-resolution model that simulates weather patterns as detailed as a single hurricane. The researchers then scatter hurricane “seeds” in a region of interest and run the model forward to observe which seeds grow and make landfall over time.

    To the downscaled model, the researchers incorporated a hydrodynamical model, which simulates the height of a storm surge, given the pattern and strength of winds at the time of a given storm. For any given simulated storm, the team also tracked the tides, as well as effects of sea-level rise, and incorporated this information into a numerical model that calculated the storm tide, or the height of the water, with tidal effects as a storm makes landfall.

    Extreme overlap

    With this framework, the scientists simulated tens of thousands of potential tropical cyclones near Bangladesh under several future climate scenarios, ranging from one that resembles the current day to one in which the world experiences further warming as a result of continued fossil fuel burning. For each simulation, they recorded the maximum storm tides along the coast of Bangladesh and noted the frequency of storm tides of various heights in a given climate scenario.

    “We can look at the entire bucket of simulations and see, for this storm tide of, say, 3 meters, we saw this many storms, and from that you can figure out the relative frequency of that kind of storm,” Qiu says. “You can then invert that number to a return period.”

    A return period is the time it takes for a storm of a particular type to make landfall again.
    A storm that is considered a “100-year event” is typically more powerful and destructive — and, in this case, creates more extreme storm tides and therefore more catastrophic flooding — compared with a 10-year event.

    From their modeling, Ravela and his colleagues found that under a scenario of increased global warming, the storms previously considered 100-year events, producing the highest storm tide values, could recur every decade or less by late century. They also observed that, toward the end of this century, tropical cyclones in Bangladesh will occur across a broader seasonal window, potentially overlapping in certain years with the monsoon season.

    “If the monsoon rain has come in and saturated the soil, a cyclone then comes in and it makes the problem much worse,” Ravela says. “People won’t have any reprieve between the extreme storm and the monsoon. There are so many compound and cascading effects between the two. And this only emerges because warming happens.”

    Ravela and his colleagues are using their modeling to help experts in Bangladesh better evaluate and prepare for a future of increasing storm risk. And he says that the climate future for Bangladesh is in some ways not unique to this part of the world.

    “This climate change story that is playing out in Bangladesh in a certain way will be playing out in a different way elsewhere,” Ravela notes. “Maybe where you are, the story is about heat stress, or amplifying droughts, or wildfires. The peril is different. But the underlying catastrophe story is not that different.”

    This research is supported in part by the MIT Climate Resilience Early Warning Systems Climate Grand Challenges project; the Jameel Observatory JO-CREWSNet project; the MIT Weather and Climate Extremes Climate Grand Challenges project; and Schmidt Sciences, LLC.
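The frequency-to-return-period inversion Qiu describes can be sketched in a few lines of Python. This is a minimal illustration, not the study's code; the Gumbel-distributed synthetic record below is a hypothetical stand-in for the downscaled model's annual storm tide maxima.

```python
import numpy as np

def return_period(annual_maxima, threshold):
    """Return period (years) of storm tides at or above `threshold`,
    given one simulated maximum storm tide per synthetic year."""
    maxima = np.asarray(annual_maxima, dtype=float)
    exceedances = np.count_nonzero(maxima >= threshold)
    if exceedances == 0:
        return float("inf")  # never exceeded in the simulated record
    # Average number of simulated years between exceedances.
    return maxima.size / exceedances

# Illustrative only: 10,000 synthetic years of annual-maximum storm
# tide (meters), drawn from a hypothetical Gumbel distribution.
rng = np.random.default_rng(42)
tides = rng.gumbel(loc=1.5, scale=0.4, size=10_000)

for height in (2.0, 2.5, 3.0):
    print(f"{height:.1f} m storm tide: ~{return_period(tides, height):.0f}-year event")
```

Repeating the same count under a warmer-climate set of simulations is what shifts a "100-year" threshold to a 10-year return period.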

  • Using liquid air for grid-scale energy storage

    As the world moves to reduce carbon emissions, solar and wind power will play an increasing role on electricity grids. But those renewable sources generate electricity only when it’s sunny or windy. So to ensure a reliable power grid — one that can deliver electricity 24/7 — it’s crucial to have a means of storing electricity when supplies are abundant and delivering it later, when they’re not. And sometimes large amounts of electricity will need to be stored not just for hours, but for days, or even longer.

    Some methods of achieving “long-duration energy storage” are promising. For example, with pumped hydro energy storage, water is pumped from one lake to another, higher lake when there’s extra electricity and released back down through power-generating turbines when more electricity is needed. But that approach is limited by geography, and most potential sites in the United States have already been used. Lithium-ion batteries could provide grid-scale storage, but only for about four hours; longer than that, and battery systems get prohibitively expensive.

    A team of researchers from MIT and the Norwegian University of Science and Technology (NTNU) has been investigating a less familiar option based on an unlikely-sounding concept: liquid air, or air that is drawn in from the surroundings, cleaned and dried, and then cooled to the point that it liquefies. “Liquid air energy storage” (LAES) systems have been built, so the technology is technically feasible. Moreover, LAES systems are totally clean and can be sited nearly anywhere, storing vast amounts of electricity for days or longer and delivering it when it’s needed. But there haven’t been conclusive studies of the technology’s economic viability: Would the income over time warrant the initial investment and ongoing costs?
    With funding from the MIT Energy Initiative’s Future Energy Systems Center, the researchers developed a model that takes detailed information on LAES systems and calculates when and where those systems would be economically viable, assuming future scenarios in line with selected decarbonization targets as well as other conditions that may prevail on future energy grids.

    They found that under some of the scenarios they modeled, LAES could be economically viable in certain locations. Sensitivity analyses showed that policies providing a subsidy on capital expenses could make LAES systems economically viable in many locations. Further calculations showed that the cost of storing a given amount of electricity with LAES would be lower than with more familiar systems such as pumped hydro and lithium-ion batteries. The researchers conclude that LAES holds promise as a means of providing critically needed long-duration storage when future power grids are decarbonized and dominated by intermittent renewable sources of electricity.

    The researchers — Shaylin A. Cetegen, a PhD candidate in the MIT Department of Chemical Engineering (ChemE); Professor Emeritus Truls Gundersen of the NTNU Department of Energy and Process Engineering; and MIT Professor Emeritus Paul I. Barton of ChemE — describe their model and their findings in a new paper published in the journal Energy.

    The LAES technology and its benefits

    LAES systems consist of three steps: charging, storing, and discharging. When supply on the grid exceeds demand and prices are low, the LAES system is charged: air is drawn in and liquefied, a process that consumes a large amount of electricity. The liquid air is then sent to highly insulated storage tanks, where it’s held at a very low temperature and atmospheric pressure. When the power grid needs added electricity to meet demand, the liquid air is first pumped to a higher pressure and then heated, and it turns back into a gas.
    This high-pressure, high-temperature, vapor-phase air expands in a turbine that generates electricity to be sent back to the grid.

    According to Cetegen, a primary advantage of LAES is that it’s clean. “There are no contaminants involved,” she says. “It takes in and releases only ambient air and electricity, so it’s as clean as the electricity that’s used to run it.” In addition, an LAES system can be built largely from commercially available components and does not rely on expensive or rare materials. And the system can be sited almost anywhere, including near other industrial processes that produce waste heat or cold that the LAES system can use to increase its energy efficiency.

    Economic viability

    In considering the potential role of LAES on future power grids, the first question is: Will LAES systems be attractive to investors? Answering that question requires calculating the technology’s net present value (NPV), which represents the sum of all discounted cash flows — including revenues, capital expenditures, operating costs, and other financial factors — over the project’s lifetime. (The study assumed a cash-flow discount rate of 7 percent.)

    To calculate the NPV, the researchers needed to determine how LAES systems would perform in future energy markets. In those markets, various sources of electricity are brought online to meet the current demand, typically following a process called “economic dispatch”: the lowest-cost source that’s available is always deployed next.
    Determining the NPV of liquid air storage therefore requires predicting how the technology would fare in future markets, competing with other sources of electricity when demand exceeds supply — and also accounting for prices when supply exceeds demand, so excess electricity is available to recharge the LAES systems.

    For their study, the MIT and NTNU researchers designed a model that starts with a description of an LAES system, including details such as the sizes of the units where the air is liquefied and the power is recovered, plus capital expenses based on estimates reported in the literature. The model then draws on state-of-the-art pricing data that’s released every year by the National Renewable Energy Laboratory (NREL) and is widely used by energy modelers worldwide. The NREL dataset forecasts prices, construction and retirement of specific types of electricity generation and storage facilities, and more, under eight decarbonization scenarios for 18 regions of the United States out to 2050.

    The new model then tracks buying and selling in energy markets for every hour of every day in a year, repeating the same schedule for five-year intervals. Based on the NREL dataset and details of the LAES system — plus constraints such as the system’s physical storage capacity and how often it can switch between charging and discharging — the model calculates how much money LAES operators would make selling power to the grid when it’s needed, and how much they would spend buying electricity when it’s available to recharge their LAES system. In line with the NREL dataset, the model generates results for 18 U.S. regions and eight decarbonization scenarios, including 100 percent decarbonization by 2035 and 95 percent decarbonization by 2050, along with other assumptions about future energy grids, including high demand growth plus high and low costs for renewable energy and natural gas.

    Cetegen describes some of the results: “Assuming a 100-megawatt (MW) system — a standard sort of size — we saw economic viability pop up under the decarbonization scenario calling for 100 percent decarbonization by 2035.” So positive NPVs (indicating economic viability) occurred only under the most aggressive — therefore the least realistic — scenario, and they occurred in only a few southern states, including Texas and Florida, likely because of how those energy markets are structured and operate.

    The researchers also tested the sensitivity of NPVs to different storage capacities, that is, how long the system could continuously deliver power to the grid. They calculated the NPVs of a 100 MW system that could provide electricity supply for one day, one week, and one month. “That analysis showed that under aggressive decarbonization, weekly storage is more economically viable than monthly storage, because [in the latter case] we’re paying for more storage capacity than we need,” Cetegen explains.

    Improving the NPV of the LAES system

    The researchers next analyzed two possible ways to improve the NPV of liquid air storage: increasing the system’s energy efficiency and providing financial incentives. Their analyses showed that increasing the energy efficiency, even up to the theoretical limit of the process, would not change the economic viability of LAES under the most realistic decarbonization scenarios. On the other hand, a major improvement resulted when they assumed policies providing subsidies on capital expenditures for new installations.
    Indeed, assuming subsidies of between 40 and 60 percent made the NPVs for a 100 MW system positive under all the realistic scenarios.

    Thus, their analysis showed that financial incentives could be far more effective than technical improvements in making LAES economically viable. While engineers may find that outcome disappointing, Cetegen notes that from a broader perspective, it’s good news. “You could spend your whole life trying to optimize the efficiency of this process, and it wouldn’t translate to securing the investment needed to scale the technology,” she says. “Policies can take a long time to implement as well. But theoretically you could do it overnight. So if storage is needed [on a future decarbonized grid], then this is one way to encourage adoption of LAES right away.”

    Cost comparison with other energy storage technologies

    Calculating the economic viability of a storage technology is highly dependent on the assumptions used. As a result, a different measure — the “levelized cost of storage” (LCOS) — is typically used to compare the costs of different storage technologies. In simple terms, the LCOS is the cost of storing each unit of energy over the lifetime of a project, not accounting for any income that results.

    On that measure, the LAES technology excels. The researchers’ model yielded an LCOS for liquid air storage of about $60 per megawatt-hour, regardless of the decarbonization scenario. That LCOS is about one-third that of lithium-ion battery storage and half that of pumped hydro. Cetegen cites another interesting finding: the LCOS of their assumed LAES system varied depending on where it’s being used, suggesting that the standard practice of reporting a single LCOS for a given energy storage technology may not provide the full picture.

    Cetegen has adapted the model and is now calculating the NPV and LCOS for energy storage using lithium-ion batteries. But she’s already encouraged by the LCOS of liquid air storage.
    “While LAES systems may not be economically viable from an investment perspective today, that doesn’t mean they won’t be implemented in the future,” she concludes. “With limited options for grid-scale storage expansion and the growing need for storage technologies to ensure energy security, if we can’t find economically viable alternatives, we’ll likely have to turn to least-cost solutions to meet storage needs. This is why the story of liquid air storage is far from over. We believe our findings justify the continued exploration of LAES as a key energy storage solution for the future.”
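The two financial yardsticks the study relies on, NPV and LCOS, reduce to short formulas. Below is a minimal Python sketch assuming the article's stated 7 percent discount rate; the example inputs are hypothetical placeholders, not the paper's figures.

```python
def npv(cash_flows, discount_rate=0.07):
    """Net present value of a series of yearly cash flows.
    cash_flows[0] is today's flow (e.g., a negative capital expense);
    the 7 percent default matches the study's assumed discount rate."""
    return sum(cf / (1.0 + discount_rate) ** t
               for t, cf in enumerate(cash_flows))

def lcos(capex, annual_cost, annual_mwh, lifetime_years, discount_rate=0.07):
    """Levelized cost of storage ($/MWh): discounted lifetime costs
    divided by discounted lifetime energy delivered, ignoring revenue,
    which is what makes LCOS useful for cross-technology comparison."""
    costs = capex + sum(annual_cost / (1.0 + discount_rate) ** t
                        for t in range(1, lifetime_years + 1))
    energy = sum(annual_mwh / (1.0 + discount_rate) ** t
                 for t in range(1, lifetime_years + 1))
    return costs / energy

# Hypothetical 100 MW system over a 30-year life, for illustration only.
print(npv([-250e6] + [18e6] * 30))  # negative NPV means unattractive to investors
print(lcos(capex=250e6, annual_cost=5e6, annual_mwh=50_000, lifetime_years=30))
```

A positive NPV signals an attractive investment under the assumed prices, while LCOS depends only on costs and delivered energy, which is why the two measures can point in different directions for the same system.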

  • Study: Burning heavy fuel oil with scrubbers is the best available option for bulk maritime shipping

    When the International Maritime Organization enacted a mandatory cap on the sulfur content of marine fuels in 2020, with an eye toward reducing harmful environmental and health impacts, it left shipping companies with three main options: they could burn low-sulfur fossil fuels, like marine gas oil; install cleaning systems to remove sulfur from the exhaust gas produced by burning heavy fuel oil; or turn to biofuels with lower sulfur content, though the limited availability of biofuels makes them a less feasible option.

    While installing exhaust gas cleaning systems, known as scrubbers, is the most feasible and cost-effective option, there has been a great deal of uncertainty among firms, policymakers, and scientists as to how “green” these scrubbers are.

    Through a novel lifecycle assessment, researchers from MIT, Georgia Tech, and elsewhere have now found that burning heavy fuel oil with scrubbers in the open ocean can match or surpass using low-sulfur fuels, when a wide variety of environmental factors is considered.

    The scientists combined data on the production and operation of scrubbers and fuels with emissions measurements taken onboard an oceangoing cargo ship. They found that, when the entire supply chain is considered, burning heavy fuel oil with scrubbers was the least harmful option in terms of nearly all 10 environmental impact factors they studied, such as greenhouse gas emissions, terrestrial acidification, and ozone formation.

    “In our collaboration with Oldendorff Carriers to broadly explore reducing the environmental impact of shipping, this study of scrubbers turned out to be an unexpectedly deep and important transitional issue,” says Neil Gershenfeld, an MIT professor, director of the Center for Bits and Atoms (CBA), and senior author of the study.

    “Claims about environmental hazards and policies to mitigate them should be backed by science.
    You need to see the data, be objective, and design studies that take into account the full picture to be able to compare different options from an apples-to-apples perspective,” adds lead author Patricia Stathatou, an assistant professor at Georgia Tech, who began this study as a postdoc in the CBA.

    Stathatou is joined on the paper by Michael Triantafyllou, MIT’s Henry L. and Grace Doherty Professor in Ocean Science and Engineering, and others at the National Technical University of Athens in Greece and the maritime shipping firm Oldendorff Carriers. The research appears today in Environmental Science and Technology.

    Slashing sulfur emissions

    Heavy fuel oil, traditionally burned by the bulk carriers that make up about 30 percent of the global maritime fleet, usually has a sulfur content of around 2 to 3 percent. This is far higher than the International Maritime Organization’s 2020 cap of 0.5 percent in most areas of the ocean and 0.1 percent in areas near population centers or environmentally sensitive regions. Sulfur oxide emissions contribute to air pollution and acid rain, and can damage the human respiratory system.

    In 2018, fewer than 1,000 vessels employed scrubbers. After the cap went into place, higher prices of low-sulfur fossil fuels and limited availability of alternative fuels led many firms to install scrubbers so they could keep burning heavy fuel oil. Today, more than 5,800 vessels utilize scrubbers, the majority of them wet, open-loop scrubbers.

    “Scrubbers are a very mature technology. They have traditionally been used for decades in land-based applications like power plants to remove pollutants,” Stathatou says.

    A wet, open-loop marine scrubber is a huge, metal, vertical tank installed in a ship’s exhaust stack, above the engines.
Inside, seawater drawn from the ocean is sprayed through a series of nozzles downward to wash the hot exhaust gases as they exit the engines. The seawater interacts with sulfur dioxide in the exhaust, converting it to sulfates — water-soluble, environmentally benign compounds that naturally occur in seawater. The washwater is released back into the ocean, while the cleaned exhaust escapes to the atmosphere with little to no sulfur dioxide emissions.

But the acidic washwater can contain other combustion byproducts like heavy metals, so scientists wondered if scrubbers were comparable, from a holistic environmental point of view, to burning low-sulfur fuels. Several studies explored the toxicity of washwater and fuel system pollution, but none painted a full picture. The researchers set out to fill that scientific gap.

A “well-to-wake” analysis

The team conducted a lifecycle assessment using a global environmental database on the production and transport of fossil fuels, such as heavy fuel oil, marine gas oil, and very-low-sulfur fuel oil. 
Considering the entire lifecycle of each fuel is key, since producing low-sulfur fuel requires extra processing steps in the refinery, causing additional emissions of greenhouse gases and particulate matter.

“If we just look at everything that happens before the fuel is bunkered onboard the vessel, heavy fuel oil is significantly more low-impact, environmentally, than low-sulfur fuels,” she says.

The researchers also collaborated with a scrubber manufacturer to obtain detailed information on all materials, production processes, and transportation steps involved in marine scrubber fabrication and installation.

“If you consider that the scrubber has a lifetime of about 20 years, the environmental impacts of producing the scrubber over its lifetime are negligible compared to producing heavy fuel oil,” she adds.

For the final piece, Stathatou spent a week onboard a bulk carrier vessel in China to measure emissions and gather seawater and washwater samples. The ship burned heavy fuel oil with a scrubber and low-sulfur fuels under similar ocean conditions and engine settings. Collecting these onboard data was the most challenging part of the study.

“All the safety gear, combined with the heat and the noise from the engines on a moving ship, was very overwhelming,” she says.

Their results showed that scrubbers reduce sulfur dioxide emissions by 97 percent, putting heavy fuel oil on par with low-sulfur fuels according to that measure. The researchers saw similar trends for emissions of other pollutants like carbon monoxide and nitrous oxide.

In addition, they tested washwater samples for more than 60 chemical parameters, including nitrogen, phosphorus, polycyclic aromatic hydrocarbons, and 23 metals. The concentrations of chemicals regulated by the IMO were far below the organization’s requirements. For unregulated chemicals, the researchers compared the concentrations to the strictest limits for industrial effluents from the U.S. 
Environmental Protection Agency and European Union. Most chemical concentrations were at least an order of magnitude below these requirements.

In addition, since washwater is diluted thousands of times as it is dispersed by a moving vessel, the concentrations of such chemicals would be even lower in the open ocean. These findings suggest that the use of scrubbers with heavy fuel oil can be considered equal to or more environmentally friendly than low-sulfur fuels across many of the impact categories the researchers studied.

“This study demonstrates the scientific complexity of the waste stream of scrubbers. Having finally conducted a multiyear, comprehensive, and peer-reviewed study, commonly held fears and assumptions are now put to rest,” says Scott Bergeron, managing director at Oldendorff Carriers and co-author of the study.

“This first-of-its-kind study on a well-to-wake basis provides very valuable input to ongoing discussion at the IMO,” adds Thomas Klenum, executive vice president of innovation and regulatory affairs at the Liberian Registry, emphasizing the need “for regulatory decisions to be made based on scientific studies providing factual data and conclusions.”

Ultimately, this study shows the importance of incorporating lifecycle assessments into future environmental impact reduction policies, Stathatou says.

“There is all this discussion about switching to alternative fuels in the future, but how green are these fuels? We must do our due diligence to compare them equally with existing solutions to see the costs and benefits,” she adds.

This study was supported, in part, by Oldendorff Carriers.
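A rough back-of-envelope check shows why the figures above hang together: with a 97 percent reduction in sulfur dioxide emissions, heavy fuel oil in the article's 2 to 3 percent sulfur range lands well under both IMO caps. This is illustrative arithmetic only; the 2.5 percent midpoint is our assumption, not a value from the study.

```python
# Illustrative arithmetic only; 2.5% is an assumed midpoint of the 2-3%
# sulfur content range cited in the article, not a measured value.
HFO_SULFUR_PCT = 2.5      # assumed typical heavy fuel oil sulfur content (%)
SCRUBBER_REMOVAL = 0.97   # 97% SO2 reduction reported in the study
IMO_GLOBAL_CAP = 0.5      # 2020 global sulfur cap (%)
IMO_STRICT_CAP = 0.1      # cap near population centers / sensitive areas (%)

effective = HFO_SULFUR_PCT * (1 - SCRUBBER_REMOVAL)
print(f"Effective sulfur emissions: {effective:.3f}%")         # 0.075%
print(effective < IMO_GLOBAL_CAP, effective < IMO_STRICT_CAP)  # True True
```

On these assumptions, scrubbed heavy fuel oil emits sulfur at an effective rate below even the stricter 0.1 percent cap, consistent with the article's "on par with low-sulfur fuels" finding for that measure.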


    Taking the “training wheels” off clean energy

Renewable power sources have seen unprecedented levels of investment in recent years. But with political uncertainty clouding the future of subsidies for green energy, these technologies must begin to compete with fossil fuels on equal footing, said participants at the 2025 MIT Energy Conference.

“What these technologies need less is training wheels, and more of a level playing field,” said Brian Deese, an MIT Institute Innovation Fellow, during a conference-opening keynote panel.

The theme of the two-day conference, which is organized each year by MIT students, was “Breakthrough to deployment: Driving climate innovation to market.” Speakers largely expressed optimism about advancements in green technology, balanced by occasional notes of alarm about a rapidly changing regulatory and political environment.

Deese defined what he called “the good, the bad, and the ugly” of the current energy landscape. The good: Clean energy investment in the United States hit an all-time high of $272 billion in 2024. The bad: Announcements of future investments have tailed off. And the ugly: Macro conditions are making it more difficult for utilities and private enterprise to build out the clean energy infrastructure needed to meet growing energy demands.

“We need to build massive amounts of energy capacity in the United States,” Deese said. “And the three things that are the most allergic to building are high uncertainty, high interest rates, and high tariff rates. So that’s kind of ugly. But the question … is how, and in what ways, that underlying commercial momentum can drive through this period of uncertainty.”

A shifting clean energy landscape

During a panel on artificial intelligence and growth in electricity demand, speakers said that the technology may serve as a catalyst for green energy breakthroughs, in addition to putting strain on existing infrastructure. 
“Google is committed to building digital infrastructure responsibly, and part of that means catalyzing the development of clean energy infrastructure that is not only meeting the AI need, but also benefiting the grid as a whole,” said Lucia Tian, head of clean energy and decarbonization technologies at Google.

Across the two days, speakers emphasized that the cost-per-unit and scalability of clean energy technologies will ultimately determine their fate. But they also acknowledged the impact of public policy, as well as the need for government investment to tackle large-scale issues like grid modernization.

Vanessa Chan, a former U.S. Department of Energy (DoE) official and current vice dean of innovation and entrepreneurship at the University of Pennsylvania School of Engineering and Applied Sciences, warned of the “knock-on” effects of the move to slash National Institutes of Health (NIH) funding for indirect research costs, for example. “In reality, what you’re doing is undercutting every single academic institution that does research across the nation,” she said.

During a panel titled “No clean energy transition without transmission,” Maria Robinson, former director of the DoE’s Grid Deployment Office, said that ratepayers alone will likely not be able to fund the grid upgrades needed to meet growing power demand. “The amount of investment we’re going to need over the next couple of years is going to be significant,” she said. “That’s where the federal government is going to have to play a role.”

David Cohen-Tanugi, a clean energy venture builder at MIT, noted that extreme weather events have changed the climate change conversation in recent years. “There was a narrative 10 years ago that said … if we start talking about resilience and adaptation to climate change, we’re kind of throwing in the towel or giving up,” he said. “I’ve noticed a very big shift in the investor narrative, the startup narrative, and more generally, the public consciousness. 
There’s a realization that the effects of climate change are already upon us.”

“Everything on the table”

The conference featured panels and keynote addresses on a range of emerging clean energy technologies, including hydrogen power, geothermal energy, and nuclear fusion, as well as a session on carbon capture.

Alex Creely, a chief engineer at Commonwealth Fusion Systems, explained that fusion (the combining of small atoms into larger atoms, which is the same process that fuels stars) is safer and potentially more economical than traditional nuclear power. Fusion facilities, he said, can be powered down instantaneously, and companies like his are developing new, less-expensive magnet technology to contain the extreme heat produced by fusion reactors.

By the early 2030s, Creely said, his company hopes to be operating 400-megawatt power plants that use only 50 kilograms of fuel per year. “If you can get fusion working, it turns energy into a manufacturing product, not a natural resource,” he said.

Quinn Woodard Jr., senior director of power generation and surface facilities at geothermal energy supplier Fervo Energy, said his company is making geothermal energy more economical through standardization, innovation, and economies of scale. Traditionally, he said, drilling is the largest cost in producing geothermal power. 
Fervo has “completely flipped the cost structure” with advances in drilling, Woodard said, and now the company is focused on bringing down its power plant costs. “We have to continuously be focused on cost, and achieving that is paramount for the success of the geothermal industry,” he said.

One common theme across the conference: a number of approaches are making rapid advancements, but experts aren’t sure when — or, in some cases, if — each specific technology will reach a tipping point where it is capable of transforming energy markets.

“I don’t want to get caught in a place where we often descend in this climate solution situation, where it’s either-or,” said Peter Ellis, global director of nature climate solutions at The Nature Conservancy. “We’re talking about the greatest challenge civilization has ever faced. We need everything on the table.”

The road ahead

Several speakers stressed the need for academia, industry, and government to collaborate in pursuit of climate and energy goals. Amy Luers, senior global director of sustainability for Microsoft, compared the challenge to the Apollo spaceflight program, and she said that academic institutions need to focus more on how to scale and spur investments in green energy. “The challenge is that academic institutions are not currently set up to be able to learn the how, in driving both bottom-up and top-down shifts over time,” Luers said. “If the world is going to succeed in our road to net zero, the mindset of academia needs to shift. And fortunately, it’s starting to.”

During a panel called “From lab to grid: Scaling first-of-a-kind energy technologies,” Hannan Happi, CEO of renewable energy company Exowatt, stressed that electricity is ultimately a commodity. “Electrons are all the same,” he said. 
“The only thing [customers] care about with regards to electrons is that they are available when they need them, and that they’re very cheap.”

Melissa Zhang, principal at Azimuth Capital Management, noted that energy infrastructure development cycles typically take at least five to 10 years — longer than a U.S. political cycle. However, she warned that green energy technologies are unlikely to receive significant support at the federal level in the near future. “If you’re in something that’s a little too dependent on subsidies … there is reason to be concerned over this administration,” she said.

World Energy CEO Gene Gebolys, the moderator of the lab-to-grid panel, listed off a number of companies founded at MIT. “They all have one thing in common,” he said. “They all went from somebody’s idea, to a lab, to proof-of-concept, to scale. It’s not like any of this stuff ever ends. It’s an ongoing process.”
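The fusion figures quoted above (a 400-megawatt plant running on 50 kilograms of fuel per year) imply a striking fuel energy density. A back-of-envelope sketch makes the point; it assumes continuous year-round operation, and the ~40 MJ/kg comparison value for fuel oil is a typical figure we supply, not one from the conference.

```python
# Back-of-envelope only: assumes the 400 MW plant runs continuously all year.
SECONDS_PER_YEAR = 365.25 * 24 * 3600
plant_power_w = 400e6          # 400 MW, as quoted by Creely
fuel_kg_per_year = 50          # annual fuel consumption, as quoted

energy_j = plant_power_w * SECONDS_PER_YEAR   # ~1.26e16 J per year
j_per_kg = energy_j / fuel_kg_per_year        # ~2.5e14 J per kg of fuel

HFO_J_PER_KG = 40e6  # assumed typical heating value of fuel oil (~40 MJ/kg)
print(f"Fusion fuel delivers roughly {j_per_kg / HFO_J_PER_KG:.1e}x "
      f"the energy per kilogram of fuel oil")
```

Under these assumptions the fuel delivers on the order of a million times the energy per kilogram of a chemical fuel, which is what makes Creely's "manufacturing product, not a natural resource" framing plausible.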


    Collaboration between MIT and GE Vernova aims to develop and scale sustainable energy systems

MIT and GE Vernova today announced the creation of the MIT-GE Vernova Energy and Climate Alliance to help develop and scale sustainable energy systems across the globe.

The alliance launches a five-year collaboration between MIT and GE Vernova, a global energy company that spun off from General Electric’s energy business in 2024. The endeavor will encompass research, education, and career opportunities for students, faculty, and staff across MIT’s five schools and the MIT Schwarzman College of Computing. It will focus on three main themes: decarbonization, electrification, and renewables acceleration.

“This alliance will provide MIT students and researchers with a tremendous opportunity to work on energy solutions that could have real-world impact,” says Anantha Chandrakasan, MIT’s chief innovation and strategy officer and dean of the School of Engineering. “GE Vernova brings domain knowledge and expertise deploying these at scale. When our researchers develop new innovative technologies, GE Vernova is strongly positioned to bring them to global markets.”

Through the alliance, GE Vernova is sponsoring research projects at MIT and providing philanthropic support for MIT research fellowships. The company will also engage with MIT’s community through participation in corporate membership programs and professional education.

“It’s a privilege to combine forces with MIT’s world-class faculty and students as we work together to realize an optimistic, innovation-driven approach to solving the world’s most pressing challenges,” says Scott Strazik, GE Vernova CEO. 
“Through this alliance, we are proud to be able to help drive new technologies while at the same time inspire future leaders to play a meaningful role in deploying technology to improve the planet at companies like GE Vernova.”

“This alliance embodies the spirit of the MIT Climate Project — combining cutting-edge research, a shared drive to tackle today’s toughest energy challenges, and a deep sense of optimism about what we can achieve together,” says Sally Kornbluth, president of MIT. “With the combined strengths of MIT and GE Vernova, we have a unique opportunity to make transformative progress in the flagship areas of electrification, decarbonization, and renewables acceleration.”

The alliance, comprising a $50 million commitment, will operate within MIT’s Office of Innovation and Strategy. It will fund approximately 12 annual research projects relating to the three themes, as well as three master’s student projects in MIT’s Technology and Policy Program. The research projects will address challenges like developing and storing clean energy, as well as the creation of robust system architectures that help sustainable energy sources like solar, wind, advanced nuclear reactors, green hydrogen, and more compete with carbon-emitting sources. The projects will be selected by a joint steering committee composed of representatives from MIT and GE Vernova, following an annual Institute-wide call for proposals.

The collaboration will also create approximately eight endowed GE Vernova research fellowships for MIT students, to be selected by faculty and beginning in the fall. There will also be 10 student internships spanning GE Vernova’s global operations, and GE Vernova will sponsor programming through MIT’s New Engineering Education Transformation (NEET), which equips students with career-oriented experiential opportunities. 
Additionally, the alliance will create professional education programming for GE Vernova employees.

“The internships and fellowships will be designed to bring students into our ecosystem,” says GE Vernova Chief Corporate Affairs Officer Roger Martella. “Students will walk our factory floor, come to our labs, be a part of our management teams, and see how we operate as business leaders. They’ll get a sense for how what they’re learning in the classroom is being applied in the real world.”

Philanthropic support from GE Vernova will also support projects in MIT’s Human Insight Collaborative (MITHIC), which launched last fall to elevate human-centered research and teaching. The projects will allow faculty to explore how areas like energy and cybersecurity influence human behavior and experiences.

In connection with the alliance, GE Vernova is expected to join several MIT consortia and membership programs, helping foster collaborations and dialogue between industry experts and researchers and educators across campus.

With operations across more than 100 countries, GE Vernova designs, manufactures, and services technologies to generate, transfer, and store electricity, with a mission to decarbonize the world. The company is headquartered in Kendall Square, right down the road from MIT, which its leaders say is not a coincidence.

“We’re really good at taking proven technologies and commercializing them and scaling them up through our labs,” Martella says. “MIT excels at coming up with those ideas and being a sort of time machine that thinks outside the box to create the future. That’s why this is such a great fit: We both have a commitment to research, innovation, and technology.”

The alliance is the latest in MIT’s rapidly growing portfolio of research and innovation initiatives around sustainable energy systems, which also includes the Climate Project at MIT. 
Separate from, but complementary to, the MIT-GE Vernova Alliance, the Climate Project is a campus-wide effort to develop technological, behavioral, and policy solutions to some of the toughest problems impeding an effective global climate response.


    For plants, urban heat islands don’t mimic global warming

It’s tricky to predict precisely what the impacts of climate change will be, given the many variables involved. To predict the impacts of a warmer world on plant life, some researchers look at urban “heat islands,” where, because of the effects of urban structures, temperatures consistently run a few degrees higher than those of the surrounding rural areas. This enables side-by-side comparisons of plant responses.

But a new study by researchers at MIT and Harvard University has found that, at least for forests, urban heat islands are a poor proxy for global warming, and this may have led researchers to underestimate the impacts of warming in some cases. The discrepancy, they found, has a lot to do with the limited genetic diversity of urban tree species.

The findings appear in the journal PNAS, in a paper by MIT postdoc Meghan Blumstein, professor of civil and environmental engineering David Des Marais, and four others.

“The appeal of these urban temperature gradients is, well, it’s already there,” says Des Marais. “We can’t look into the future, so why don’t we look across space, comparing rural and urban areas?” Because such data is easily obtainable, methods comparing the growth of plants in cities with similar plants outside them have been widely used, he says, and have been quite useful. Researchers did recognize some shortcomings to this approach, including significant differences in availability of some nutrients such as nitrogen. Still, “a lot of ecologists recognized that they weren’t perfect, but it was what we had,” he says.

Most of the research by Des Marais’ group is lab-based, under conditions tightly controlled for temperature, humidity, and carbon dioxide concentration. While there are a handful of experimental sites where conditions are modified out in the field, for example using heaters around one or a few trees, “those are super small-scale,” he says. 
“When you’re looking at these longer-term trends that are occurring over space that’s quite a bit larger than you could reasonably manipulate, an important question is, how do you control the variables?”

Temperature gradients have offered one approach to this problem, but Des Marais and his students have also been focusing on the genetics of the tree species involved, comparing those sampled in cities to the same species sampled in a natural forest nearby. And it turned out there were differences, even between trees that appeared similar.

“So, lo and behold, you think you’re only letting one variable change in your model, which is the temperature difference from an urban to a rural setting,” he says, “but in fact, it looks like there was also a genotypic diversity that was not being accounted for.”

The genetic differences meant that the plants being studied were not representative of those in the natural environment, and the researchers found that the difference was actually masking the impact of warming. The urban trees, they found, were less affected than their natural counterparts in terms of when the plants’ leaves grew and unfurled, or “leafed out,” in the spring.

The project began during the pandemic lockdown, when Blumstein was a graduate student. She had a grant to study red oak genotypes across New England, but was unable to travel because of lockdowns. So, she concentrated on trees that were within reach in Cambridge, Massachusetts. She then collaborated with people doing research at the Harvard Forest, a research forest in rural central Massachusetts. They collected three years of data from both locations, including the temperature profiles, the leafing-out timing, and the genetic profiles of the trees. 
Though the study was looking at red oaks specifically, the researchers say the findings are likely to apply to trees broadly.

At the time, researchers had just sequenced the oak tree genome, and that allowed Blumstein and her colleagues to look for subtle differences among the red oaks in the two locations. The differences they found showed that the urban trees were more resistant to the effects of warmer temperatures than were those in the natural environment.

“Initially, we saw these results and we were sort of like, oh, this is a bad thing,” Des Marais says. “Ecologists are getting this heat island effect wrong, which is true.” Fortunately, this can be easily corrected by factoring in genomic data. “It’s not that much more work, because sequencing genomes is so cheap and so straightforward. Now, if someone wants to look at an urban-rural gradient and make these kinds of predictions, well, that’s fine. You just have to add some information about the genomes.”

It’s not surprising that this genetic variation exists, he says, since growers have learned by trial and error over the decades which varieties of trees tend to thrive in the difficult urban environment, with typically poor soil, poor drainage, and pollution. “As a result, there’s just not much genetic diversity in our trees within cities.”

The implications could be significant, Des Marais says. When the Intergovernmental Panel on Climate Change (IPCC) releases its regular reports on the status of the climate, “one of the tools the IPCC has to predict future responses to climate change with respect to temperature are these urban-to-rural gradients.” He hopes that these new findings will be incorporated into their next report, which is just being drafted. 
“If these results are generally true beyond red oaks, this suggests that the urban heat island approach to studying plant response to temperature is underpredicting how strong that response is.”

The research team included Sophie Webster, Robin Hopkins, and David Basler from Harvard University and Jie Yun from MIT. The work was supported by the National Science Foundation, the Bullard Fellowship at the Harvard Forest, and MIT.


    MIT Maritime Consortium sets sail

Around 11 billion tons of goods, or about 1.5 tons per person worldwide, are transported by sea each year, representing about 90 percent of global trade by volume. Internationally, the merchant shipping fleet numbers around 110,000 vessels. These ships, and the ports that service them, are significant contributors to the local and global economy — and they’re significant contributors to greenhouse gas emissions.

A new consortium, formalized in a signing ceremony at MIT last week, aims to address climate-harming emissions in the maritime shipping industry, while supporting efforts for environmentally friendly operation in compliance with the decarbonization goals set by the International Maritime Organization.

“This is a timely collaboration with key stakeholders from the maritime industry with a very bold and interdisciplinary research agenda that will establish new technologies and evidence-based standards,” says Themis Sapsis, the William Koch Professor of Marine Technology at MIT and the director of MIT’s Center for Ocean Engineering. 
“It aims to bring the best from MIT in key areas for commercial shipping, such as nuclear technology for commercial settings, autonomous operation and AI methods, improved hydrodynamics and ship design, cybersecurity, and manufacturing.”

Co-led by Sapsis and Fotini Christia, the Ford International Professor of the Social Sciences, director of the Institute for Data, Systems, and Society (IDSS), and director of the MIT Sociotechnical Systems Research Center, the newly launched MIT Maritime Consortium (MC) brings together MIT collaborators from across campus, including the Center for Ocean Engineering, which is housed in the Department of Mechanical Engineering; IDSS, which is housed in the MIT Schwarzman College of Computing; the departments of Nuclear Science and Engineering and Civil and Environmental Engineering; MIT Sea Grant; and others, with a national and international community of industry experts.

The Maritime Consortium’s founding members are the American Bureau of Shipping (ABS), Capital Clean Energy Carriers Corp., and HD Korea Shipbuilding and Offshore Engineering. Innovation members are Foresight-Group, Navios Maritime Partners L.P., Singapore Maritime Institute, and Dorian LPG.

“The challenges the maritime industry faces are challenges that no individual company or organization can address alone,” says Christia. “The solution involves almost every discipline from the School of Engineering, as well as AI and data-driven algorithms, and policy and regulation — it’s a true MIT problem.”

Researchers will explore new designs for nuclear systems consistent with the techno-economic needs and constraints of commercial shipping, the economic and environmental feasibility of alternative fuels, new data-driven algorithms and rigorous evaluation criteria for autonomous platforms in the maritime space, cyber-physical situational awareness and anomaly detection, as well as 3D printing technologies for onboard manufacturing. 
Collaborators will also advise on research priorities toward evidence-based standards related to MIT presidential priorities around climate, sustainability, and AI.

MIT has been a leading center of ship research and design for over a century, and is widely recognized for contributions to hydrodynamics, ship structural mechanics and dynamics, propeller design, and overall ship design, as well as for its unique educational program for U.S. Navy officers, the Naval Construction and Engineering Program. Research today is at the forefront of ocean science and engineering, with significant efforts in fluid mechanics and hydrodynamics, acoustics, offshore mechanics, marine robotics and sensors, and ocean sensing and forecasting. The consortium’s academic home at MIT also opens the door to cross-departmental collaboration across the Institute.

The MC will launch multiple research projects designed to tackle challenges from a variety of angles, all united by cutting-edge data analysis and computation techniques. Collaborators will research new designs and methods that improve efficiency and reduce greenhouse gas emissions, explore the feasibility of alternative fuels, and advance data-driven decision-making, manufacturing and materials, hydrodynamic performance, and cybersecurity.

“This consortium brings a powerful collection of significant companies that, together, has the potential to be a global shipping shaper in itself,” says Christopher J. Wiernicki SM ’85, chair and chief executive officer of ABS. “The strength and uniqueness of this consortium is the members, which are all world-class organizations and real difference makers. The ability to harness the members’ experience and know-how, along with MIT’s technology reach, creates real jet fuel to drive progress,” Wiernicki says. “As well as researching key barriers, bottlenecks, and knowledge gaps in the emissions challenge, the consortium looks to enable development of the novel technology and policy innovation that will be key. 
Long term, the consortium hopes to provide the gravity we will need to bend the curve.”
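The headline shipping numbers in this article are internally consistent, as a quick sketch shows. The figures come directly from the text; the per-vessel average is a simple mean across the fleet, not a claim about any particular vessel class.

```python
# Consistency check on the article's figures: ~11 billion tons of seaborne
# cargo per year, ~1.5 tons per person, across ~110,000 merchant vessels.
cargo_tons = 11e9
tons_per_person = 1.5
fleet_size = 110_000

implied_population = cargo_tons / tons_per_person   # ~7.3 billion people
avg_tons_per_vessel = cargo_tons / fleet_size       # ~100,000 tons per vessel

print(f"Implied world population: {implied_population:.2e}")
print(f"Average cargo per vessel per year: {avg_tons_per_vessel:,.0f} tons")
```

The implied population of roughly 7.3 billion matches the worldwide framing, and the fleet-wide average of about 100,000 tons of cargo per vessel per year gives a sense of the scale each ship operates at.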