More stories

  • MIT Maritime Consortium releases “Nuclear Ship Safety Handbook”

    Commercial shipping accounts for 3 percent of all greenhouse gas emissions globally. As the sector sets climate goals and chases a carbon-free future, nuclear power — long used as a power source for military vessels — presents an enticing solution. To date, however, there has been no clear, unified public document available to guide design safety for certain components of civilian nuclear ships. A new “Nuclear Ship Safety Handbook” by the MIT Maritime Consortium aims to change that and set the standard for safe maritime nuclear propulsion.

    “This handbook is a critical tool in efforts to support the adoption of nuclear in the maritime industry,” explains Themis Sapsis, the William I. Koch Professor of Mechanical Engineering at MIT, director of the MIT Center for Ocean Engineering, and co-director of the MIT Maritime Consortium. “The goal is to provide a strong basis for initial safety on key areas that require nuclear and maritime regulatory research and development in the coming years to prepare for nuclear propulsion in the maritime industry.”

    Using research data and standards, combined with operational experience from civilian maritime nuclear operations, the handbook provides unique insights into potential issues and resolutions in the design efficacy of maritime nuclear operations, a topic of growing importance on the national and international stage.

    “Right now, the nuclear-maritime policies that exist are outdated and often tied only to specific technologies, like pressurized water reactors,” says Jose Izurieta, a graduate student in the Department of Mechanical Engineering (MechE) Naval Construction and Engineering (2N) Program and one of the handbook authors. “With the recent U.K.-U.S. Technology Prosperity Deal now including civil maritime nuclear applications, I hope the handbook can serve as a foundation for creating a clear, modern regulatory framework for nuclear-powered commercial ships.”

    The recent memorandum of understanding signed by the U.S. and U.K. calls for the exploration of “novel applications of advanced nuclear energy, including civil maritime applications,” and for the parties to play “a leading role informing the establishment of international standards, potential establishment of a maritime shipping corridor between the Participants’ territories, and strengthening energy resilience for the Participants’ defense facilities.”

    “The U.S.-U.K. nuclear shipping corridor offers a great opportunity to collaborate with legislators on establishing the critical framework that will enable the United States to invest in nuclear-powered merchant vessels — an achievement that will reestablish America in the shipbuilding space,” says Fotini Christia, the Ford International Professor of the Social Sciences, director of the Institute for Data, Systems, and Society (IDSS), director of the MIT Sociotechnical Systems Research Center, and co-director of the MIT Maritime Consortium.

    “With over 30 nations now building or planning their first reactors, nuclear energy’s global acceptance is unprecedented — and that momentum is key to aligning safety rules across borders for nuclear-powered ships and the respective ports,” says Koroush Shirvan, the Atlantic Richfield Career Development Professor in Energy Studies at MIT and director of the Reactor Technology Course for Utility Executives.

    The handbook, which is divided into chapters covering the overlapping nuclear and maritime safety design decisions that engineers will encounter, is careful to balance technical and practical guidance with policy considerations.

    Commander Christopher MacLean, MIT associate professor of the practice in mechanical engineering, naval construction, and engineering, says the handbook will significantly benefit the entire maritime community, specifically naval architects and marine engineers, by providing standardized guidelines for the design and operation of nuclear-powered commercial vessels.

    “This will assist in enhancing safety protocols, improve risk assessments, and ensure consistent compliance with international regulations,” MacLean says. “This will also help foster collaboration amongst engineers and regulators. Overall, this will further strengthen the reliability, sustainability, and public trust in nuclear-powered maritime systems.”

    Anthony Valiaveedu, the handbook’s lead author, and co-author Nat Edmonds are both students in the MIT Master’s Program in Technology and Policy (TPP) within the IDSS. The pair are also co-authors of a paper published in Science Policy Review earlier this year that offered structured advice on the development of nuclear regulatory policies.

    “It is important for safety and technology to go hand-in-hand,” Valiaveedu explains. “What we have done is provide a risk-informed process to begin these discussions for engineers and policymakers.”

    “Ultimately, I hope this framework can be used to build strong bilateral agreements between nations that will allow nuclear propulsion to thrive,” says fellow co-author Izurieta.

    Impact on industry

    “Maritime designers needed a source of information to improve their ability to understand and design the reactor primary components, and development of the ‘Nuclear Ship Safety Handbook’ was a good step to bridge this knowledge gap,” says Christopher J. Wiernicki, American Bureau of Shipping (ABS) chair and CEO. “For this reason, it is an important document for the industry.”

    ABS, the American classification society for the maritime industry, develops criteria and provides safety certification for all ocean-going vessels. ABS is among the founding members of the MIT Maritime Consortium. Capital Clean Energy Carriers Corp., HD Korea Shipbuilding and Offshore Engineering, and Delos Navigation Ltd. are also consortium founding members. Innovation members are Foresight-Group, Navios Maritime Partners L.P., Singapore Maritime Institute, and Dorian LPG.

    “As we consider a net-zero framework for the shipping industry, nuclear propulsion represents a potential solution. Careful investigation remains the priority, with safety and regulatory standards at the forefront,” says Jerry Kalogiratos, CEO of Capital Clean Energy Carriers Corp. “As first movers, we are exploring all options. This handbook lays the technical foundation for the development of nuclear-powered commercial vessels.”

    Sangmin Park, senior vice president at HD Korea Shipbuilding and Offshore Engineering, says, “The ‘Nuclear Ship Safety Handbook’ marks a groundbreaking milestone that bridges shipbuilding excellence and nuclear safety. It drives global collaboration between industry and academia, and paves the way for the safe advancement of the nuclear maritime era.”

    Maritime at MIT

    MIT has been a leading center of ship research and design for over a century, with work at the Institute today representing significant advancements in fluid mechanics and hydrodynamics, acoustics, offshore mechanics, marine robotics and sensors, and ocean sensing and forecasting. Maritime Consortium projects, including the handbook, reflect national priorities aimed at revitalizing the U.S. shipbuilding and commercial maritime industries.

    The MIT Maritime Consortium, which launched in 2024, brings together MIT and maritime industry leaders to explore data-powered strategies to reduce harmful emissions, optimize vessel operations, and support economic priorities.

    “One of our most important efforts is the development of technologies, policies, and regulations to make nuclear propulsion for commercial ships a reality,” says Sapsis. “Over the last year, we have put together an interdisciplinary team with faculty and students from across the Institute. One of the outcomes of this effort is this very detailed document providing detailed guidance on how such effort should be implemented safely.”

    Handbook contributors come from multiple disciplines and MIT departments, labs, and research centers, including the Center for Ocean Engineering, IDSS, MechE’s Course 2N Program, the MIT Technology and Policy Program, and the Department of Nuclear Science and Engineering.

    MIT faculty members and research advisors on the project include Sapsis; Christia; Shirvan; MacLean; Jacopo Buongiorno, the Battelle Energy Alliance Professor in Nuclear Science and Engineering, director of the Center for Advanced Nuclear Energy Systems, and director of science and technology for the Nuclear Reactor Laboratory; and Captain Andrew Gillespy, professor of the practice and director of the Naval Construction and Engineering (2N) Program.

    “Proving the viability of nuclear propulsion for civilian ships will entail getting the technologies, the economics, and the regulations right,” says Buongiorno. “This handbook is a meaningful initial contribution to the development of a sound regulatory framework.”

    “We were lucky to have a team of students and knowledgeable professors from so many fields,” says Edmonds. “Before even beginning the outline of the handbook, we did significant archival and historical research to understand the existing regulations and overarching story of nuclear ships. Some of the most relevant documents we found were written before 1975, and many of them were stored in the bowels of the NS Savannah.”

    The NS Savannah, which was built in the late 1950s as a demonstration project for the potential peacetime uses of nuclear energy, was the first nuclear-powered merchant ship. The Savannah was launched on July 21, 1959, two years after the first nuclear-powered civilian vessel, the Soviet icebreaker Lenin, and was retired in 1971.

    Historical context for this project is important, because the reactor technologies envisioned for maritime propulsion today are quite different from the traditional pressurized water reactors used by the U.S. Navy. These new reactors are being developed not just in the maritime context, but also to power ports and data centers on land; they all use low-enriched uranium and are passively cooled. For the maritime industry, Sapsis says, “the technology is there, it’s safe, and it’s ready.”

    “The Nuclear Ship Safety Handbook” is publicly available on the MIT Maritime Consortium website and from the MIT Libraries.

  • Simpler models can outperform deep learning at climate prediction

    Environmental scientists are increasingly using enormous artificial intelligence models to make predictions about changes in weather and climate, but a new study by MIT researchers shows that bigger models are not always better.

    The team demonstrates that, in certain climate scenarios, much simpler, physics-based models can generate more accurate predictions than state-of-the-art deep-learning models.

    Their analysis also reveals that a benchmarking technique commonly used to evaluate machine-learning techniques for climate predictions can be distorted by natural variations in the data, like fluctuations in weather patterns. This could lead someone to believe a deep-learning model makes more accurate predictions when that is not the case.

    The researchers developed a more robust way of evaluating these techniques, which shows that, while simple models are more accurate when estimating regional surface temperatures, deep-learning approaches can be the best choice for estimating local rainfall.

    They used these results to enhance a simulation tool known as a climate emulator, which can rapidly simulate the effect of human activities on a future climate.

    The researchers see their work as a “cautionary tale” about the risk of deploying large AI models for climate science. While deep-learning models have shown incredible success in domains such as natural language, climate science contains a proven set of physical laws and approximations, and the challenge becomes how to incorporate those into AI models.

    “We are trying to develop models that are going to be useful and relevant for the kinds of things that decision-makers need going forward when making climate policy choices. While it might be attractive to use the latest, big-picture machine-learning model on a climate problem, what this study shows is that stepping back and really thinking about the problem fundamentals is important and useful,” says study senior author Noelle Selin, a professor in the MIT Institute for Data, Systems, and Society (IDSS) and the Department of Earth, Atmospheric and Planetary Sciences (EAPS).

    Selin’s co-authors are lead author Björn Lütjens, a former EAPS postdoc who is now a research scientist at IBM Research; senior author Raffaele Ferrari, the Cecil and Ida Green Professor of Oceanography in EAPS and co-director of the Lorenz Center; and Duncan Watson-Parris, assistant professor at the University of California at San Diego. Selin and Ferrari are also co-principal investigators of the Bringing Computation to the Climate Challenge project, out of which this research emerged. The paper appears today in the Journal of Advances in Modeling Earth Systems.

    Comparing emulators

    Because the Earth’s climate is so complex, running a state-of-the-art climate model to predict how pollution levels will impact environmental factors like temperature can take weeks on the world’s most powerful supercomputers.

    Scientists often create climate emulators, simpler approximations of a state-of-the-art climate model, which are faster and more accessible. A policymaker could use a climate emulator to see how alternative assumptions on greenhouse gas emissions would affect future temperatures, helping them develop regulations.

    But an emulator isn’t very useful if it makes inaccurate predictions about the local impacts of climate change. While deep learning has become increasingly popular for emulation, few studies have explored whether these models perform better than tried-and-true approaches.

    The MIT researchers performed such a study. They compared a traditional technique called linear pattern scaling (LPS) with a deep-learning model, using a common benchmark dataset for evaluating climate emulators.

    Their results showed that LPS outperformed deep-learning models on predicting nearly all parameters they tested, including temperature and precipitation.

    “Large AI methods are very appealing to scientists, but they rarely solve a completely new problem, so implementing an existing solution first is necessary to find out whether the complex machine-learning approach actually improves upon it,” says Lütjens.

    Some initial results seemed to fly in the face of the researchers’ domain knowledge. The powerful deep-learning model should have been more accurate when making predictions about precipitation, since those data don’t follow a linear pattern.

    They found that the high amount of natural variability in climate model runs can cause the deep-learning model to perform poorly on unpredictable long-term oscillations, like El Niño/La Niña. This skews the benchmarking scores in favor of LPS, which averages out those oscillations.

    Constructing a new evaluation

    From there, the researchers constructed a new evaluation with more data that address natural climate variability. With this new evaluation, the deep-learning model performed slightly better than LPS for local precipitation, but LPS was still more accurate for temperature predictions.

    “It is important to use the modeling tool that is right for the problem, but in order to do that you also have to set up the problem the right way in the first place,” Selin says.

    Based on these results, the researchers incorporated LPS into a climate emulation platform to predict local temperature changes under different emission scenarios.

    “We are not advocating that LPS should always be the goal. It still has limitations. For instance, LPS doesn’t predict variability or extreme weather events,” Ferrari adds.

    Rather, they hope their results emphasize the need to develop better benchmarking techniques, which could provide a fuller picture of which climate emulation technique is best suited for a particular situation.

    “With an improved climate emulation benchmark, we could use more complex machine-learning methods to explore problems that are currently very hard to address, like the impacts of aerosols or estimations of extreme precipitation,” Lütjens says.

    Ultimately, more accurate benchmarking techniques will help ensure policymakers are making decisions based on the best available information.

    The researchers hope others build on their analysis, perhaps by studying additional improvements to climate emulation methods and benchmarks. Such research could explore impact-oriented metrics like drought indicators and wildfire risks, or new variables like regional wind speeds.

    This research is funded, in part, by Schmidt Sciences, LLC, and is part of the MIT Climate Grand Challenges team for “Bringing Computation to the Climate Challenge.”
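
    For intuition about what linear pattern scaling does, here is a minimal sketch on synthetic data: LPS fits each local grid cell’s variable as a linear function of global mean temperature, one slope per cell, and then scales that pattern to any warming level. The variable names and numbers are hypothetical, and this is not the study’s code.

        # Minimal linear-pattern-scaling sketch on synthetic data (hypothetical
        # numbers; not the study's code). LPS fits each grid cell's local
        # variable as a linear function of global mean temperature.
        import numpy as np

        rng = np.random.default_rng(42)
        n_years, n_cells = 100, 10

        # Synthetic training data: a warming trend plus per-cell noise.
        global_mean_T = np.linspace(0.0, 2.0, n_years)   # deg C anomaly
        true_pattern = rng.uniform(0.5, 2.0, n_cells)    # per-cell scaling
        local_T = (np.outer(global_mean_T, true_pattern)
                   + rng.normal(0.0, 0.3, (n_years, n_cells)))

        # Fit one linear regression (slope + intercept) per grid cell.
        X = np.column_stack([global_mean_T, np.ones(n_years)])
        coef, *_ = np.linalg.lstsq(X, local_T, rcond=None)  # shape (2, n_cells)

        # Emulate: local temperatures for a hypothetical 3 C global anomaly.
        prediction = coef[0] * 3.0 + coef[1]
        print(np.round(prediction, 2))

    The appeal is exactly what the study describes: fitting a slope per cell averages over internal variability, so LPS is cheap and robust for variables, like surface temperature, that scale roughly linearly with global warming.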

  • Jessika Trancik named director of the Sociotechnical Systems Research Center

    Jessika Trancik, a professor in MIT’s Institute for Data, Systems, and Society (IDSS), has been named the new director of the Sociotechnical Systems Research Center (SSRC), effective July 1. The SSRC convenes and supports researchers focused on problems and solutions at the intersection of technology and its societal impacts.

    Trancik conducts research on technology innovation and energy systems. At the Trancik Lab, she and her team develop methods drawing on engineering knowledge, data science, and policy analysis. Their work examines the pace and drivers of technological change, helping identify where innovation is occurring most rapidly, how emerging technologies stack up against existing systems, and which performance thresholds matter most for real-world impact. Her models have been used to inform government innovation policy and have been applied across a wide range of industries.

    “Professor Trancik’s deep expertise in the societal implications of technology, and her commitment to developing impactful solutions across industries, make her an excellent fit to lead SSRC,” says Maria C. Yang, interim dean of engineering and William E. Leonhard (1940) Professor of Mechanical Engineering.

    Much of Trancik’s research focuses on energy systems and on establishing methods for evaluating energy technologies, including their costs, performance, and environmental impacts. She covers a wide range of energy services — including electricity, transportation, heating, and industrial processes. Her research has applications in solar and wind energy, energy storage, low-carbon fuels, electric vehicles, and nuclear fission. Trancik is also known for her research on extreme events in renewable energy availability.

    A prolific researcher, Trancik has helped measure progress and inform the development of solar photovoltaics, batteries, electric vehicle charging infrastructure, and other low-carbon technologies — and anticipate future trends. One of her widely cited contributions is quantifying learning rates and identifying where targeted investments can most effectively accelerate innovation. These tools have been used by U.S. federal agencies, international organizations, and the private sector to shape energy R&D portfolios, climate policy, and infrastructure planning.

    Trancik is committed to engaging and informing the public on energy consumption. She and her team developed the app carboncounter.com, which helps users choose cars with low costs and low environmental impacts.

    As an educator, Trancik teaches courses for students across MIT’s five schools and the MIT Schwarzman College of Computing.

    “The question guiding my teaching and research is how do we solve big societal challenges with technology, and how can we be more deliberate in developing and supporting technologies to get us there?” Trancik said in an article about course IDS.521/IDS.065 (Energy Systems for Climate Change Mitigation).

    Trancik received her undergraduate degree in materials science and engineering from Cornell University. As a Rhodes Scholar, she completed her PhD in materials science at the University of Oxford. She subsequently worked for the United Nations in Geneva, Switzerland, and the Earth Institute at Columbia University. After serving as an Omidyar Research Fellow at the Santa Fe Institute, she joined MIT in 2010 as a faculty member.

    Trancik succeeds Fotini Christia, the Ford International Professor of Social Sciences in the Department of Political Science and director of IDSS, who previously served as director of SSRC.

  • Surprisingly diverse innovations led to dramatically cheaper solar panels

    The cost of solar panels has dropped by more than 99 percent since the 1970s, enabling widespread adoption of photovoltaic systems that convert sunlight into electricity.

    A new MIT study drills down on the specific innovations that enabled such dramatic cost reductions, revealing that technical advances across a web of diverse research efforts and industries played a pivotal role.

    The findings could help renewable energy companies make more effective R&D investment decisions and aid policymakers in identifying areas to prioritize to spur growth in manufacturing and deployment.

    The researchers’ modeling approach shows that key innovations often originated outside the solar sector, including advances in semiconductor fabrication, metallurgy, glass manufacturing, oil and gas drilling, construction processes, and even legal domains.

    “Our results show just how intricate the process of cost improvement is, and how much scientific and engineering advances, often at a very basic level, are at the heart of these cost reductions. A lot of knowledge was drawn from different domains and industries, and this network of knowledge is what makes these technologies improve,” says study senior author Jessika Trancik, a professor in MIT’s Institute for Data, Systems, and Society.

    Trancik is joined on the paper by co-lead authors Goksin Kavlak, a former IDSS graduate student and postdoc who is now a senior energy associate at the Brattle Group, and Magdalena Klemun, a former IDSS graduate student and postdoc who is now an assistant professor at Johns Hopkins University; former MIT postdoc Ajinkya Kamat; as well as Brittany Smith and Robert Margolis of the National Renewable Energy Laboratory. The research appears today in PLOS ONE.

    Identifying innovations

    This work builds on mathematical models that the researchers previously developed to tease out the effects of engineering technologies on the cost of photovoltaic (PV) modules and systems.

    In this study, the researchers aimed to dig even deeper into the scientific advances that drove those cost declines. They combined their quantitative cost model with a detailed, qualitative analysis of innovations that affected the costs of PV system materials, manufacturing steps, and deployment processes.

    “Our quantitative cost model guided the qualitative analysis, allowing us to look closely at innovations in areas that are hard to measure due to a lack of quantitative data,” Kavlak says.

    Building on earlier work identifying key cost drivers — such as the number of solar cells per module, wiring efficiency, and silicon wafer area — the researchers conducted a structured scan of the literature for innovations likely to affect these drivers. Next, they grouped these innovations to identify patterns, revealing clusters that reduced costs by improving materials or prefabricating components to streamline manufacturing and installation. Finally, the team tracked the industry origins and timing of each innovation, and consulted domain experts to zero in on the most significant innovations.

    All told, they identified 81 unique innovations that affected PV system costs since 1970, from improvements in antireflective coated glass to the implementation of fully online permitting interfaces.

    “With innovations, you can always go to a deeper level, down to things like raw materials processing techniques, so it was challenging to know when to stop. Having that quantitative model to ground our qualitative analysis really helped,” Trancik says.

    They chose to separate PV module costs from so-called balance-of-system (BOS) costs, which cover things like mounting systems, inverters, and wiring. PV modules, which are wired together to form solar panels, are mass-produced and can be exported, while many BOS components are designed, built, and sold at the local level.

    “By examining innovations both at the BOS level and within the modules, we identify the different types of innovations that have emerged in these two parts of PV technology,” Kavlak says.

    BOS costs depend more on soft technologies — nonphysical elements such as permitting procedures — which have contributed significantly less to PV’s past cost improvement than hardware innovations have.

    “Often, it comes down to delays. Time is money, and if you have delays on construction sites and unpredictable processes, that affects these balance-of-system costs,” Trancik says.

    Innovations such as automated permitting software, which flags code-compliant systems for fast-track approval, show promise. Though not yet quantified in this study, the team’s framework could support future analysis of their economic impact and of similar innovations that streamline deployment processes.

    Interconnected industries

    The researchers found that innovations from the semiconductor, electronics, metallurgy, and petroleum industries played a major role in reducing both PV and BOS costs, but BOS costs were also affected by innovations in software engineering and electric utilities.

    Noninnovation factors, like efficiency gains from bulk purchasing and the accumulation of knowledge in the solar power industry, also reduced some cost variables.

    In addition, while most PV panel innovations originated in research organizations or industry, many BOS innovations were developed by city governments, U.S. states, or professional associations.

    “I knew there was a lot going on with this technology, but the diversity of all these fields and how closely linked they are, and the fact that we can clearly see that network through this analysis, was interesting,” Trancik says.

    “PV was very well-positioned to absorb innovations from other industries — thanks to the right timing, physical compatibility, and supportive policies to adapt innovations for PV applications,” Klemun adds.

    The analysis also reveals the role greater computing power could play in reducing BOS costs through advances like automated engineering review systems and remote site assessment software.

    “In terms of knowledge spillovers, what we’ve seen so far in PV may really just be the beginning,” Klemun says, pointing to the expanding role of robotics and AI-driven digital tools in driving future cost reductions and quality improvements.

    In addition to their qualitative analysis, the researchers demonstrated how this methodology could be used to estimate the quantitative impact of a particular innovation, if one has the numerical data to plug into the cost equation. For instance, using information about material prices and manufacturing procedures, they estimate that wire sawing, a technique introduced in the 1980s, led to an overall PV system cost decrease of $5 per watt by reducing silicon losses and increasing throughput during fabrication.

    “Through this retrospective analysis, you learn something valuable for future strategy, because you can see what worked and what didn’t work, and the models can also be applied prospectively. It is also useful to know what adjacent sectors may help support improvement in a particular technology,” Trancik says.

    Moving forward, the researchers plan to apply this methodology to a wide range of technologies, including other renewable energy systems. They also want to further study soft technology to identify innovations or processes that could accelerate cost reductions.

    “Although the process of technological innovation may seem like a black box, we’ve shown that you can study it just like any other phenomena,” Trancik says.

    This research is funded, in part, by the U.S. Department of Energy Solar Energy Technologies Office.
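
    As a rough illustration of how a cost equation can attribute savings to a single innovation like wire sawing, the toy model below decomposes module cost per watt into silicon and non-silicon area costs. All numbers and names here are hypothetical; this is not the paper’s actual model.

        # Toy illustration (hypothetical numbers; not the study's model):
        # module cost in $/W falls when wire sawing cuts silicon kerf loss,
        # reducing the grams of silicon consumed per wafer.

        STC_IRRADIANCE = 1000.0  # W/m^2, standard test conditions

        def module_cost_per_watt(si_cost_per_kg, si_grams_per_wafer,
                                 wafer_area_m2, non_si_cost_per_m2,
                                 efficiency):
            """Rough $/W: (silicon + other area costs) / power per area."""
            si_cost_per_m2 = (si_cost_per_kg * si_grams_per_wafer / 1000.0
                              / wafer_area_m2)
            power_per_m2 = efficiency * STC_IRRADIANCE
            return (si_cost_per_m2 + non_si_cost_per_m2) / power_per_m2

        # Hypothetical before/after: wire sawing lowers silicon per wafer.
        before = module_cost_per_watt(50.0, 20.0, 0.0244, 60.0, 0.14)
        after = module_cost_per_watt(50.0, 12.0, 0.0244, 60.0, 0.14)
        print(f"before: ${before:.2f}/W, after: ${after:.2f}/W")

    The study’s actual cost equation tracks many more variables (cells per module, wiring efficiency, wafer area, and so on), but the attribution logic is the same: change one innovation-linked input, hold the rest fixed, and read off the cost difference.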

  • Eco-driving measures could significantly reduce vehicle emissions

    Any motorist who has ever waited through multiple cycles for a traffic light to turn green knows how annoying signalized intersections can be. But sitting at intersections isn’t just a drag on drivers’ patience — unproductive vehicle idling could contribute as much as 15 percent of the carbon dioxide emissions from U.S. land transportation.

    A large-scale modeling study led by MIT researchers reveals that eco-driving measures, which can involve dynamically adjusting vehicle speeds to reduce stopping and excessive acceleration, could significantly reduce those CO2 emissions.

    Using a powerful artificial intelligence method called deep reinforcement learning, the researchers conducted an in-depth impact assessment of the factors affecting vehicle emissions in three major U.S. cities.

    Their analysis indicates that fully adopting eco-driving measures could cut annual city-wide intersection carbon emissions by 11 to 22 percent, without slowing traffic throughput or affecting vehicle and traffic safety.

    Even if only 10 percent of vehicles on the road employ eco-driving, it would result in 25 to 50 percent of the total reduction in CO2 emissions, the researchers found.

    In addition, dynamically optimizing speed limits at about 20 percent of intersections provides 70 percent of the total emission benefits. This indicates that eco-driving measures could be implemented gradually while still having measurable, positive impacts on mitigating climate change and improving public health.

    An animated GIF compares 20 percent eco-driving adoption with 100 percent adoption. Image: Courtesy of the researchers

    “Vehicle-based control strategies like eco-driving can move the needle on climate change reduction. We’ve shown here that modern machine-learning tools, like deep reinforcement learning, can accelerate the kinds of analysis that support sociotechnical decision making. This is just the tip of the iceberg,” says senior author Cathy Wu, the Class of 1954 Career Development Associate Professor in Civil and Environmental Engineering (CEE) and the Institute for Data, Systems, and Society (IDSS) at MIT, and a member of the Laboratory for Information and Decision Systems (LIDS).

    She is joined on the paper by lead author Vindula Jayawardana, an MIT graduate student; as well as MIT graduate students Ao Qu, Cameron Hickert, and Edgar Sanchez; MIT undergraduate Catherine Tang; Baptiste Freydt, a graduate student at ETH Zurich; and Mark Taylor and Blaine Leonard of the Utah Department of Transportation. The research appears in Transportation Research Part C: Emerging Technologies.

    A multi-part modeling study

    Traffic control measures typically call to mind fixed infrastructure, like stop signs and traffic signals. But as vehicles become more technologically advanced, they present an opportunity for eco-driving, a catch-all term for vehicle-based traffic control measures like the use of dynamic speeds to reduce energy consumption.

    In the near term, eco-driving could involve speed guidance in the form of vehicle dashboards or smartphone apps. In the longer term, it could involve intelligent speed commands that directly control the acceleration of semi-autonomous and fully autonomous vehicles through vehicle-to-infrastructure communication systems.

    “Most prior work has focused on how to implement eco-driving. We shifted the frame to consider the question of should we implement eco-driving. If we were to deploy this technology at scale, would it make a difference?” Wu says.

    To answer that question, the researchers embarked on a multifaceted modeling study that would take the better part of four years to complete.

    They began by identifying 33 factors that influence vehicle emissions, including temperature, road grade, intersection topology, vehicle age, traffic demand, vehicle types, driver behavior, traffic signal timing, and road geometry.

    “One of the biggest challenges was making sure we were diligent and didn’t leave out any major factors,” Wu says.

    Then they used data from OpenStreetMap, U.S. geological surveys, and other sources to create digital replicas of more than 6,000 signalized intersections in three cities — Atlanta, San Francisco, and Los Angeles — and simulated more than a million traffic scenarios.

    The researchers used deep reinforcement learning to optimize each scenario for eco-driving to achieve the maximum emissions benefits. Reinforcement learning optimizes the vehicles’ driving behavior through trial-and-error interactions with a high-fidelity traffic simulator, rewarding vehicle behaviors that are more energy-efficient while penalizing those that are not.

    The researchers cast the problem as a decentralized cooperative multi-agent control problem, where the vehicles cooperate to achieve overall energy efficiency, even among non-participating vehicles, and act in a decentralized manner, avoiding the need for costly communication between vehicles.

    However, training vehicle behaviors that generalize across diverse intersection traffic scenarios was a major challenge. The researchers observed that some scenarios are more similar to one another than others, such as scenarios with the same number of lanes or the same number of traffic signal phases. As such, they trained separate reinforcement learning models for different clusters of traffic scenarios, yielding better emission benefits overall.

    But even with the help of AI, analyzing citywide traffic at the network level would be so computationally intensive it could take another decade to unravel, Wu says. Instead, they broke the problem down and solved each eco-driving scenario at the individual intersection level.

    “We carefully constrained the impact of eco-driving control at each intersection on neighboring intersections. In this way, we dramatically simplified the problem, which enabled us to perform this analysis at scale, without introducing unknown network effects,” she says.

    Significant emissions benefits

    When they analyzed the results, the researchers found that full adoption of eco-driving could result in intersection emissions reductions of between 11 and 22 percent.

    These benefits differ depending on the layout of a city’s streets. A denser city like San Francisco has less room to implement eco-driving between intersections, offering a possible explanation for reduced emission savings, while Atlanta could see greater benefits given its higher speed limits.

    Even if only 10 percent of vehicles employ eco-driving, a city could still realize 25 to 50 percent of the total emissions benefit because of car-following dynamics: Non-eco-driving vehicles would follow controlled eco-driving vehicles as they optimize speed to pass smoothly through intersections, reducing their carbon emissions as well.

    In some cases, eco-driving could also increase vehicle throughput by minimizing emissions. However, Wu cautions that increasing throughput could result in more drivers taking to the roads, reducing emissions benefits.

    And while their analysis of widely used safety metrics known as surrogate safety measures, such as time to collision, suggests that eco-driving is as safe as human driving, it could cause unexpected behavior in human drivers. More research is needed to fully understand potential safety impacts, Wu says.

    Their results also show that eco-driving could provide even greater benefits when combined with alternative transportation decarbonization solutions. For instance, 20 percent eco-driving adoption in San Francisco would cut emission levels by 7 percent, but when combined with the projected adoption of hybrid and electric vehicles, it would cut emissions by 17 percent.

    “This is a first attempt to systematically quantify network-wide environmental benefits of eco-driving. This is a great research effort that will serve as a key reference for others to build on in the assessment of eco-driving systems,” says Hesham Rakha, the Samuel L. Pritchard Professor of Engineering at Virginia Tech, who was not involved with this research.

    And while the researchers focus on carbon emissions, the benefits are highly correlated with improvements in fuel consumption, energy use, and air quality.

    “This is almost a free intervention. We already have smartphones in our cars, and we are rapidly adopting cars with more advanced automation features. For something to scale quickly in practice, it must be relatively simple to implement and shovel-ready. Eco-driving fits that bill,” Wu says.

    This work is funded, in part, by Amazon and the Utah Department of Transportation.
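
    The reward-shaping idea described above — reward energy-efficient motion, penalize wasteful idling and harsh acceleration — can be sketched in a few lines. This is a hypothetical toy, not the study’s actual reward function or simulator; the coefficients and the fuel proxy are invented for illustration.

        # Toy per-step reward sketch for one controlled vehicle (hypothetical;
        # not the study's formulation). Lower fuel use and less idling and
        # harsh acceleration yield a higher (less negative) reward.

        def eco_driving_reward(speed_mps, accel_mps2, at_red_light):
            """Negative cost: fuel proxy plus an idling penalty."""
            fuel_proxy = (0.05                      # base engine draw
                          + 0.02 * speed_mps        # speed-dependent term
                          + 0.4 * max(accel_mps2, 0.0) ** 2)  # accel term
            # Penalize idling in a queue (stopped, but not held by a red).
            idle_penalty = 0.5 if (speed_mps < 0.1 and not at_red_light) else 0.0
            return -(fuel_proxy + idle_penalty)

        print(eco_driving_reward(8.0, 0.2, False))  # gentle cruising
        print(eco_driving_reward(0.0, 0.0, False))  # idling in a queue
        print(eco_driving_reward(5.0, 2.5, False))  # hard acceleration

    In training, a reward of this general shape is evaluated at every simulator step, and the reinforcement learning algorithm adjusts the vehicles’ speed policy to maximize the cumulative reward across the intersection approach.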

  • Study: Climate change may make it harder to reduce smog in some regions

    Global warming will likely hinder our future ability to control ground-level ozone, a harmful air pollutant that is a primary component of smog, according to a new MIT study.

    The results could help scientists and policymakers develop more effective strategies for improving both air quality and human health. Ground-level ozone causes a host of detrimental health impacts, from asthma to heart disease, and contributes to thousands of premature deaths each year.

    The researchers’ modeling approach reveals that, as the Earth warms due to climate change, ground-level ozone will become less sensitive to reductions in nitrogen oxide emissions in eastern North America and Western Europe. In other words, it will take greater nitrogen oxide emission reductions to get the same air quality benefits.

    However, the study also shows that the opposite would be true in northeast Asia, where cutting emissions would have a greater impact on reducing ground-level ozone in the future.

    The researchers combined a climate model that simulates meteorological factors, such as temperature and wind speeds, with a chemical transport model that estimates the movement and composition of chemicals in the atmosphere. By generating a range of possible future outcomes, the researchers’ ensemble approach better captures inherent climate variability, allowing them to paint a fuller picture than many previous studies.

    “Future air quality planning should consider how climate change affects the chemistry of air pollution. We may need steeper cuts in nitrogen oxide emissions to achieve the same air quality goals,” says Emmie Le Roy, a graduate student in the MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS) and lead author of a paper on this study.

    Her co-authors include Anthony Y.H. Wong, a postdoc in the MIT Center for Sustainability Science and Strategy; Sebastian D. Eastham, principal research scientist in the MIT Center for Sustainability Science and Strategy; Arlene Fiore, the Peter H. Stone and Paola Malanotte Stone Professor of EAPS; and senior author Noelle Selin, a professor in the Institute for Data, Systems, and Society (IDSS) and EAPS. The research appears today in Environmental Science and Technology.

    Controlling ozone

    Ground-level ozone differs from the stratospheric ozone layer that protects the Earth from harmful UV radiation. It is a respiratory irritant that is harmful to the health of humans, animals, and plants.

    Controlling ground-level ozone is particularly challenging because it is a secondary pollutant, formed in the atmosphere by complex reactions involving nitrogen oxides and volatile organic compounds in the presence of sunlight.

    “That is why you tend to have higher ozone days when it is warm and sunny,” Le Roy explains.

    Regulators typically try to reduce ground-level ozone by cutting nitrogen oxide emissions from industrial processes. But it is difficult to predict the effects of those policies because ground-level ozone interacts with nitrogen oxides and volatile organic compounds in nonlinear ways. Depending on the chemical environment, reducing nitrogen oxide emissions could cause ground-level ozone to increase instead.

    “Past research has focused on the role of emissions in forming ozone, but the influence of meteorology is a really important part of Emmie’s work,” Selin says.

    To conduct their study, the researchers combined a global atmospheric chemistry model with a climate model that simulates future meteorology. They used the climate model to generate meteorological inputs for each future year in their study, simulating factors such as likely temperature and wind speeds, in a way that captures the inherent variability of a region’s climate. Then they fed those inputs to the atmospheric chemistry model, which calculates how the chemical composition of the atmosphere would change because of meteorology and emissions.

    The researchers focused on eastern North America, Western Europe, and northeast China, since those regions have historically high levels of the precursor chemicals that form ozone and well-established monitoring networks to provide data.

    They chose to model two future scenarios, one with high warming and one with low warming, over a 16-year period between 2080 and 2095. They compared them to a historical scenario covering 2000 to 2015 to see the effects of a 10 percent reduction in nitrogen oxide emissions.

    Capturing climate variability

    “The biggest challenge is that the climate naturally varies from year to year. So, if you want to isolate the effects of climate change, you need to simulate enough years to see past that natural variability,” Le Roy says.

    They could overcome that challenge thanks to recent advances in atmospheric chemistry modeling and by taking advantage of parallel computing to simulate multiple years at the same time. They simulated five 16-year realizations, resulting in 80 model years for each scenario.

    The researchers found that eastern North America and Western Europe are especially sensitive to increases in nitrogen oxide emissions from the soil, which are natural emissions driven by increases in temperature. Due to that sensitivity, as the Earth warms and more nitrogen oxide from soil enters the atmosphere, reducing nitrogen oxide emissions from human activities will have less of an impact on ground-level ozone.

    “This shows how important it is to improve our representation of the biosphere in these models to better understand how climate change may impact air quality,” Le Roy says.

    On the other hand, since industrial processes in northeast Asia cause more ozone per unit of nitrogen oxide emitted, cutting emissions there would cause greater reductions in ground-level ozone in future warming scenarios.

    “But I wouldn’t say that is a good thing, because it means that, overall, there are higher levels of ozone,” Le Roy adds.

    Running detailed meteorology simulations, rather than relying on annual average weather data, gave the researchers a more complete picture of the potential effects on human health.

    “Average climate isn’t the only thing that matters. One high ozone day, which might be a statistical anomaly, could mean we don’t meet our air quality target and have negative human health impacts that we should care about,” Le Roy says.

    In the future, the researchers want to continue exploring the intersection of meteorology and air quality. They also want to expand their modeling approach to consider other climate change factors with high variability, like wildfires or biomass burning.

    “We’ve shown that it is important for air quality scientists to consider the full range of climate variability, even if it is hard to do in your models, because it really does affect the answer that you get,” says Selin.

    This work is funded, in part, by the MIT Praecis Presidential Fellowship, the J.H. and E.V. Wade Fellowship, and the MIT Martin Family Society of Fellows for Sustainability.
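
    The value of those 80 model years can be shown with a toy calculation: a small forced change in ozone is easily swamped by year-to-year meteorological noise in a single 16-year run, but emerges more clearly in an ensemble average. The numbers below are hypothetical, chosen only to illustrate the averaging effect, and have no connection to the study’s data.

        # Toy illustration (hypothetical numbers): a small forced ozone change
        # is hard to see in one 16-year run but clearer over 80 model years,
        # since the standard error of the mean shrinks with more years.
        import numpy as np

        rng = np.random.default_rng(0)
        forced_change = -1.0        # ppb, imposed signal from an emissions cut
        internal_variability = 5.0  # ppb, year-to-year meteorological noise

        def mean_change(n_years):
            """Average apparent change over n simulated years."""
            years = forced_change + rng.normal(0.0, internal_variability, n_years)
            return years.mean()

        print(f"single 16-year run: {mean_change(16):+.2f} ppb")
        print(f"5 x 16 = 80 years:  {mean_change(80):+.2f} ppb")

    With these illustrative numbers, the standard error of the mean is about 1.25 ppb for 16 years — larger than the signal itself — but roughly 0.56 ppb for 80 years, which is why the ensemble approach can separate the forced response from natural variability.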

  • MIT Maritime Consortium sets sail

    Around 11 billion tons of goods, or about 1.5 tons per person worldwide, are transported by sea each year, representing about 90 percent of global trade by volume. Internationally, the merchant shipping fleet numbers around 110,000 vessels. These ships, and the ports that service them, are significant contributors to the local and global economy — and they’re significant contributors to greenhouse gas emissions.

    A new consortium, formalized in a signing ceremony at MIT last week, aims to address climate-harming emissions in the maritime shipping industry, while supporting efforts for environmentally friendly operation in compliance with the decarbonization goals set by the International Maritime Organization.

    “This is a timely collaboration with key stakeholders from the maritime industry with a very bold and interdisciplinary research agenda that will establish new technologies and evidence-based standards,” says Themis Sapsis, the William Koch Professor of Marine Technology at MIT and the director of MIT’s Center for Ocean Engineering. “It aims to bring the best from MIT in key areas for commercial shipping, such as nuclear technology for commercial settings, autonomous operation and AI methods, improved hydrodynamics and ship design, cybersecurity, and manufacturing.”

    Co-led by Sapsis and Fotini Christia, the Ford International Professor of the Social Sciences; director of the Institute for Data, Systems, and Society (IDSS); and director of the MIT Sociotechnical Systems Research Center, the newly launched MIT Maritime Consortium (MC) brings together MIT collaborators from across campus — including the Center for Ocean Engineering, which is housed in the Department of Mechanical Engineering; IDSS, which is housed in the MIT Schwarzman College of Computing; the departments of Nuclear Science and Engineering and Civil and Environmental Engineering; MIT Sea Grant; and others — with a national and international community of industry experts.

    The Maritime Consortium’s founding members are the American Bureau of Shipping (ABS), Capital Clean Energy Carriers Corp., and HD Korea Shipbuilding and Offshore Engineering. Innovation members are Foresight-Group, Navios Maritime Partners L.P., Singapore Maritime Institute, and Dorian LPG.

    “The challenges the maritime industry faces are challenges that no individual company or organization can address alone,” says Christia. “The solution involves almost every discipline from the School of Engineering, as well as AI and data-driven algorithms, and policy and regulation — it’s a true MIT problem.”

    Researchers will explore new designs for nuclear systems consistent with the techno-economic needs and constraints of commercial shipping, the economic and environmental feasibility of alternative fuels, new data-driven algorithms and rigorous evaluation criteria for autonomous platforms in the maritime space, cyber-physical situational awareness and anomaly detection, and 3D printing technologies for onboard manufacturing. Collaborators will also advise on research priorities toward evidence-based standards related to MIT presidential priorities around climate, sustainability, and AI.

    MIT has been a leading center of ship research and design for over a century, and is widely recognized for contributions to hydrodynamics, ship structural mechanics and dynamics, propeller design, and overall ship design, as well as for its unique educational program for U.S. Navy officers, the Naval Construction and Engineering Program. Research today is at the forefront of ocean science and engineering, with significant efforts in fluid mechanics and hydrodynamics, acoustics, offshore mechanics, marine robotics and sensors, and ocean sensing and forecasting. The consortium’s academic home at MIT also opens the door to cross-departmental collaboration across the Institute.

    The MC will launch multiple research projects designed to tackle challenges from a variety of angles, all united by cutting-edge data analysis and computation techniques. Collaborators will research new designs and methods that improve efficiency and reduce greenhouse gas emissions, explore the feasibility of alternative fuels, and advance data-driven decision-making, manufacturing and materials, hydrodynamic performance, and cybersecurity.

    “This consortium brings a powerful collection of significant companies that, together, has the potential to be a global shipping shaper in itself,” says Christopher J. Wiernicki SM ’85, chair and chief executive officer of ABS. “The strength and uniqueness of this consortium is the members, which are all world-class organizations and real difference makers. The ability to harness the members’ experience and know-how, along with MIT’s technology reach, creates real jet fuel to drive progress,” Wiernicki says. “As well as researching key barriers, bottlenecks, and knowledge gaps in the emissions challenge, the consortium looks to enable development of the novel technology and policy innovation that will be key. Long term, the consortium hopes to provide the gravity we will need to bend the curve.”

  • J-WAFS: Supporting food and water research across MIT

    MIT’s Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) has transformed the landscape of water and food research at MIT, driving faculty engagement and catalyzing new research and innovation in these critical areas. With philanthropic, corporate, and government support, J-WAFS’ strategic approach spans the entire research life cycle, from support for early-stage research to commercialization grants for more advanced projects.

    Over the past decade, J-WAFS has invested approximately $25 million in direct research funding to support MIT faculty pursuing transformative research with the potential for significant impact. “Since awarding our first cohort of seed grants in 2015, it’s remarkable to look back and see that over 10 percent of the MIT faculty have benefited from J-WAFS funding,” observes J-WAFS Executive Director Renee J. Robins ’83. “Many of these professors hadn’t worked on water or food challenges before their first J-WAFS grant.” By fostering interdisciplinary collaborations and supporting high-risk, high-reward projects, J-WAFS has amplified the capacity of MIT faculty to pursue groundbreaking research that addresses some of the most pressing challenges facing the world’s water and food systems.

    Drawing MIT faculty to water and food research

    J-WAFS’ open calls for proposals enable faculty to explore bold ideas and develop impactful approaches to tackling critical water and food system challenges. Professor Patrick Doyle’s work in water purification exemplifies this impact. “Without J-WAFS, I would have never ventured into the field of water purification,” Doyle reflects. While previously focused on pharmaceutical manufacturing and drug delivery, exposure to J-WAFS-funded peers led him to apply his expertise in soft materials to water purification. “Both the funding and the J-WAFS community led me to be deeply engaged in understanding some of the key challenges in water purification and water security,” he explains.

    Similarly, Professor Otto Cordero of the Department of Civil and Environmental Engineering (CEE) leveraged J-WAFS funding to pivot his research into aquaculture. Cordero explains that his first J-WAFS seed grant “has been extremely influential for my lab because it allowed me to take a step in a new direction, with no preliminary data in hand.” Cordero’s expertise is in microbial communities. He was previously unfamiliar with aquaculture, but he saw the relevance of microbial communities to the health of farmed aquatic organisms.

    Supporting early-career faculty

    New assistant professors at MIT have particularly benefited from J-WAFS funding and support. J-WAFS has played a transformative role in shaping the careers and research trajectories of many new faculty members by encouraging them to explore novel research areas, and in many instances by providing their first MIT research grant.

    Professor Ariel Furst reflects on how pivotal J-WAFS’ investment has been in advancing her research. “This was one of the first grants I received after starting at MIT, and it has truly shaped the development of my group’s research program,” Furst explains. With J-WAFS’ backing, her lab has achieved breakthroughs in chemical detection and remediation technologies for water. “The support of J-WAFS has enabled us to develop the platform funded through this work beyond the initial applications to the general detection of environmental contaminants and degradation of those contaminants,” she elaborates.

    Karthish Manthiram, now a professor of chemical engineering and chemistry at Caltech, explains how J-WAFS’ early investment enabled him and other young faculty to pursue ambitious ideas. “J-WAFS took a big risk on us,” Manthiram reflects. His research on breaking the nitrogen triple bond to make ammonia for fertilizer was initially met with skepticism. However, J-WAFS’ seed funding allowed his lab to lay the groundwork for breakthroughs that later attracted significant National Science Foundation (NSF) support. “That early funding from J-WAFS has been pivotal to our long-term success,” he notes. These stories underscore the broad impact of J-WAFS’ support for early-career faculty, and its commitment to empowering them to address critical global challenges and innovate boldly.

    Fueling follow-on funding

    J-WAFS seed grants enable faculty to explore nascent research areas, but external funding for continued work is usually necessary to achieve the full potential of these novel ideas. “It’s often hard to get funding for early-stage or out-of-the-box ideas,” notes J-WAFS Director Professor John H. Lienhard V. “My hope, when I founded J-WAFS in 2014, was that seed grants would allow PIs [principal investigators] to prove out novel ideas so that they would be attractive for follow-on funding. And after 10 years, J-WAFS-funded research projects have brought more than $21 million in subsequent awards to MIT.”

    Professor Retsef Levi led a seed study on how agricultural supply chains affect food safety, with a team of faculty spanning the MIT schools of Engineering and Science as well as the MIT Sloan School of Management. The team parlayed their seed grant research into a multi-million-dollar follow-on initiative. Levi reflects, “The J-WAFS seed funding allowed us to establish the initial credibility of our team, which was key to our success in obtaining large funding from several other agencies.”

    Dave Des Marais was an assistant professor in the Department of CEE when he received his first J-WAFS seed grant. The funding supported his research on how plant growth and physiology are controlled by genes and interact with the environment. The seed grant helped launch his lab’s work on enhancing climate change resilience in agricultural systems. The work led to his Faculty Early Career Development (CAREER) Award from the NSF, a prestigious honor for junior faculty members. Des Marais, now an associate professor, continues to investigate the mechanisms and consequences of genomic and environmental interactions, supported by the five-year, $1,490,000 NSF grant. “J-WAFS provided essential funding to get my new research underway,” comments Des Marais.

    Stimulating interdisciplinary collaboration

    Des Marais’ seed grant was also key to developing new collaborations. He explains, “The J-WAFS grant supported me to develop a collaboration with Professor Caroline Uhler in EECS/IDSS [the Department of Electrical Engineering and Computer Science/Institute for Data, Systems, and Society] that really shaped how I think about framing and testing hypotheses. One of the best things about J-WAFS is facilitating unexpected connections among MIT faculty with diverse yet complementary skill sets.”

    Professors A. John Hart of the Department of Mechanical Engineering and Benedetto Marelli of CEE also launched a new interdisciplinary collaboration with J-WAFS funding. They partnered to combine expertise in biomaterials, microfabrication, and manufacturing to create printed silk-based colorimetric sensors that detect food spoilage. “The J-WAFS Seed Grant provided a unique opportunity for multidisciplinary collaboration,” Hart notes.

    Professors Stephen Graves in the MIT Sloan School of Management and Bishwapriya Sanyal in the Department of Urban Studies and Planning (DUSP) partnered to pursue new research on agricultural supply chains. With fieldwork in Senegal, their J-WAFS-supported project brought together international development specialists and operations management experts to study how small firms and government agencies influence access to and uptake of irrigation technology by poorer farmers. “We used J-WAFS to spur a collaboration that would have been improbable without this grant,” they explain. Being part of the J-WAFS community also introduced them to researchers in Professor Amos Winter’s lab in the Department of Mechanical Engineering working on irrigation technologies for low-resource settings. DUSP doctoral candidate Mark Brennan notes, “We got to share our understanding of how irrigation markets and irrigation supply chains work in developing economies, and then we got to contrast that with their understanding of how irrigation system models work.”

    Timothy Swager, professor of chemistry, and Rohit Karnik, professor of mechanical engineering and J-WAFS associate director, collaborated on a sponsored research project supported by Xylem, Inc. through the J-WAFS Research Affiliate Program. The cross-disciplinary research, which targeted the development of ultra-sensitive sensors for toxic PFAS chemicals, was conceived following a series of workshops hosted by J-WAFS. Swager and Karnik were two of the participants, and their involvement led to the collaborative proposal that Xylem funded. “J-WAFS funding allowed us to combine the Swager lab’s expertise in sensing with my lab’s expertise in microfluidics to develop a cartridge for field-portable detection of PFAS,” says Karnik. “J-WAFS has enriched my research program in so many ways,” adds Swager, who is now working to commercialize the technology.

    Driving global collaboration and impact

    J-WAFS has also helped MIT faculty establish and advance international collaboration and impactful global research. By funding and supporting projects that connect MIT researchers with international partners, J-WAFS has not only advanced technological solutions, but also strengthened cross-cultural understanding and engagement.

    Professor Matthew Shoulders leads the inaugural J-WAFS Grand Challenge project. In response to the first J-WAFS call for “Grand Challenge” proposals, Shoulders assembled an interdisciplinary team based at MIT to enhance climate resilience in agriculture by improving the most inefficient aspect of photosynthesis: the notoriously inefficient carbon dioxide-fixing plant enzyme RuBisCO. J-WAFS funded this high-risk/high-reward project following a competitive process that engaged external reviewers through several rounds of iterative proposal development. The technical feedback led the team to researchers with complementary expertise at the Australian National University. “Our collaborative team of biochemists and synthetic biologists, computational biologists, and chemists is deeply integrated with plant biologists and field trial experts, yielding a robust feedback loop for enzyme engineering,” Shoulders says. “Together, this team will be able to make a concerted effort using the most modern, state-of-the-art techniques to engineer crop RuBisCO with an eye to helping make meaningful gains in securing a stable crop supply, hopefully with accompanying improvements in both food and water security.”

    Professor Leon Glicksman and Research Engineer Eric Verploegen’s team designed a low-cost cooling chamber to preserve fruits and vegetables harvested by smallholder farmers with no access to cold-chain storage. J-WAFS’ guidance motivated the team to prioritize practical considerations informed by local collaborators, ensuring market competitiveness. “As our new idea for a forced-air evaporative cooling chamber was taking shape, we continually checked that our solution was evolving in a direction that would be competitive in terms of cost, performance, and usability with existing commercial alternatives,” explains Verploegen. Following the initial seed grant, the team secured a J-WAFS Solutions commercialization grant, which Verploegen says “further motivated us to establish partnerships with local organizations capable of commercializing the technology earlier in the project than we might have done otherwise.” The team has since shared an open-source design as part of its commercialization strategy to maximize accessibility and impact.

    Bringing corporate-sponsored research opportunities to MIT faculty

    J-WAFS also plays a role in driving private partnerships, enabling collaborations that bridge industry and academia. Through its Research Affiliate Program, for example, J-WAFS provides opportunities for faculty to collaborate with industry on sponsored research, helping to convert scientific discoveries into licensable intellectual property (IP) that companies can turn into commercial products and services.

    J-WAFS introduced professor of mechanical engineering Alex Slocum to a challenge presented by its research affiliate company Xylem: how to design a more energy-efficient pump for fluctuating flows. With centrifugal pumps consuming an estimated 6 percent of U.S. electricity annually, Slocum and his then-graduate student Hilary Johnson SM ’18, PhD ’22 developed an innovative variable volute mechanism that reduces energy usage. “Xylem envisions this as the first in a new category of adaptive pump geometry,” comments Johnson. The research produced a pump prototype and related IP that Xylem is working on commercializing. Johnson notes that these outcomes “would not have been possible without J-WAFS support and facilitation of the Xylem industry partnership.” Slocum adds, “J-WAFS enabled Hilary to begin her work on pumps, and Xylem sponsored the research to bring her to this point … where she has an opportunity to do far more than the original project called for.”

    Swager speaks highly of the impact of corporate research sponsorship through J-WAFS on his research and technology translation efforts. His PFAS project with Karnik, described above, was also supported by Xylem. “Xylem was an excellent sponsor of our research. Their engagement and feedback were instrumental in advancing our PFAS detection technology, now on the path to commercialization,” Swager says.

    Looking forward

    What J-WAFS has accomplished is more than a collection of research projects; a decade of impact demonstrates how J-WAFS’ approach has been transformative for many MIT faculty members. As Professor Mathias Kolle puts it, his engagement with J-WAFS “had a significant influence on how we think about our research and its broader impacts.” He adds that it “opened my eyes to the challenges in the field of water and food systems and the many different creative ideas that are explored by MIT.”

    This thriving ecosystem of innovation, collaboration, and academic growth around water and food research has not only helped faculty build interdisciplinary and international partnerships, but has also led to the commercialization of transformative technologies with real-world applications. C. Cem Taşan, the POSCO Associate Professor of Metallurgy, who is leading a J-WAFS Solutions commercialization team that is about to launch a startup company, sums it up by noting, “Without J-WAFS, we wouldn’t be here at all.”

    As J-WAFS looks to the future, its continued commitment — supported by the generosity of its donors and partners — builds on a decade of success enabling MIT faculty to advance water and food research that addresses some of the world’s most pressing challenges.