More stories

  • Migration Summit addresses education and workforce development in displacement

    “Refugees can change the world with access to education,” says Alnarjes Harba, a refugee from Syria who recently shared her story at the 2022 Migration Summit — a first-of-its-kind, global convening to address the challenges that displaced communities face in accessing education and employment.

    At the age of 13, Harba was displaced to Lebanon, where she graduated at the top of her high school class. But because of her refugee status, she recalls, no university in her host country would accept her. Today, Harba is a researcher in health-care architecture. She holds a bachelor’s degree from Southern New Hampshire University, where she was part of the Global Education Movement, a program providing refugees with pathways to higher education and work.

    Like many of the Migration Summit’s participants, Harba shared her story to call attention not only to the barriers to refugee education, but also to the opportunities to create more education-to-employment pathways like MIT Refugee Action Hub’s (ReACT) certificate programs for displaced learners.

    Organized by MIT ReACT, the MIT Abdul Latif Jameel World Education Lab (J-WEL), Na’amal, Karam Foundation, and Paper Airplanes, the Migration Summit sought to center the voices and experiences of those most directly impacted by displacement — both in narratives about the crisis and in the search for solutions. Themed “Education and Workforce Development in Displacement,” this year’s summit welcomed more than 900 attendees from over 30 countries to 40 interactive virtual sessions led by displaced learners, educators, and activists working to support communities in displacement.

    Sessions highlighted the experiences of refugees, migrants, and displaced learners, as well as current efforts across the education and workforce development landscape, ranging from pK-12 initiatives to post-secondary programs, workforce training to entrepreneurship opportunities.

    Overcoming barriers to access

    The vision for the Migration Summit developed, in part, out of the need to raise more awareness about the long-standing global displacement crisis. According to the United Nations High Commissioner for Refugees (UNHCR), 82.4 million people worldwide today are forcibly displaced, a figure that doesn’t include the estimated 12 million people who have fled their homes in Ukraine since February.

    “Refugees not only leave their countries; they leave behind a thousand memories, their friends, their families,” says Mondiant Dogon, a human rights activist, refugee ambassador, and author who gave the Migration Summit’s opening keynote address. “Education is the most important thing that can happen to refugees. In that way, we can leave behind the refugee camps and build our own independent future.”

    Yet, as the stories of the summit’s participants highlight, many in displacement have lost their livelihoods or had their education disrupted — only to face further challenges when trying to access education or find work in their new places of residence. Obstacles range from legal restrictions, language and cultural barriers, and unaffordable costs to a lack of verifiable credentials. UNHCR estimates that only 5 percent of refugees have access to higher education, compared to the global average of 39 percent.

    “There is another problem related to forced displacement — dehumanization of migrants,” says Lina Sergie Attar, the founder and CEO of Karam Foundation. “They are unjustly positioned as enemies, as a threat.”

    But as Blein Alem, an MIT ReACT alum and refugee from Eritrea, explains, “No one chooses to be a refugee — it just occurs. Whether by conflict, war, human rights violations, just because you have refugee status does not mean that you are not willing to make a change in your life and access to education and work.” Several participants, including Alem, shared that, even with a degree in hand, their refugee status limited their ability to work in their new countries of residence.

    Displaced communities face complex and structural challenges in accessing education and workforce development opportunities. Because of the varying and vast effects of displacement, efforts to address these challenges range in scale and focus and differ across sectors. As Lorraine Charles, co-founder and director of Na’amal, noted in the Migration Summit’s closing session, many organizations find themselves working in silos, or even competing with each other for funding and other resources. As a result, solution-making has been fragmented, with persistent gaps between different sectors that are, in fact, working toward the same goals.

    Imagining a modular, digital, collaborative approach

    A key takeaway from the month’s discussions, then, is the need to rethink the response to refugee education and workforce challenges. During the session, “From Intentions to Impact: Decolonizing Refugee Response,” participants emphasized the systemic nature of these challenges. Yet formal responses, such as the 1951 Refugee Convention, have been largely inadequate — in some instances even oppressing the communities they’re meant to support, explains Sana Mustafa, director of partnership and engagement for Asylum Access.

    “We have the opportunity to rethink how we are handling the situation,” Mustafa says, calling for more efforts to include refugees in the design and development of solutions.

    Presenters also agreed that educational institutions, particularly universities, could play a vital role in providing more pathways for refugees and displaced learners. Key to this is rethinking the structure of education itself, including its delivery.

    “The challenge right now is that degrees are monolithic,” says Sanjay Sarma, vice president for MIT Open Learning, who gave the keynote address on “Pathways to Education, Livelihood, and Hope.” “They’re like those gigantic rocks at Stonehenge or in other megalithic sites. What we need is a much more granular version of education: bricks. Bricks were invented several thousand years ago, but we don’t really have that yet formally and extensively in education.”

    “There is no way we can accommodate thousands and thousands of refugees face-to-face,” says Shai Reshef, the founder and president of University of the People. “The only path is a digital one.”

    Ultimately, explains Demetri Fadel of Karam Foundation, “We really need to think about how to create a vision of education as a right for every person all around the world.”

    Underlying many of the Migration Summit’s conclusions is the awareness that there is still much work to be done. However, as the summit’s co-chair Lana Cook said in her closing remarks, “This was not a convening of despair, but one about what we can build together.”

    The summit’s organizers are currently putting together a public report of the key findings that have emerged from the month’s conversations, including recommendations for thematic working groups and future Migration Summit activities.

  • New England renewables + Canadian hydropower

    The urgent need to cut carbon emissions has prompted a growing number of U.S. states to commit to achieving 100 percent clean electricity by 2040 or 2050. But figuring out how to meet those commitments and still have a reliable and affordable power system is a challenge. Wind and solar installations will form the backbone of a carbon-free power system, but what technologies can meet electricity demand when those intermittent renewable sources are not adequate?

    In general, the options being discussed include nuclear power, natural gas with carbon capture and storage (CCS), and energy storage technologies such as new and improved batteries and chemical storage in the form of hydrogen. But in the northeastern United States, there is one more possibility being proposed: electricity imported from hydropower plants in the neighboring Canadian province of Quebec.

    The proposition makes sense. Those plants can produce as much electricity as about 40 large nuclear power plants, and some power generated in Quebec already comes to the Northeast. So, there could be abundant additional supply to fill any shortfall when New England’s intermittent renewables underproduce. However, U.S. wind and solar investors view Canadian hydropower as a competitor and argue that reliance on foreign supply discourages further U.S. investment.

    Two years ago, three researchers affiliated with the MIT Center for Energy and Environmental Policy Research (CEEPR) — Emil Dimanchev SM ’18, now a PhD candidate at the Norwegian University of Science and Technology; Joshua Hodge, CEEPR’s executive director; and John Parsons, a senior lecturer in the MIT Sloan School of Management — began wondering whether viewing Canadian hydro as another source of electricity might be too narrow. “Hydropower is a more-than-hundred-year-old technology, and plants are already built up north,” says Dimanchev. “We might not need to build something new. We might just need to use those plants differently or to a greater extent.”

    So the researchers decided to examine the potential role and economic value of Quebec’s hydropower resource in a future low-carbon system in New England. Their goal was to help inform policymakers, utility decision-makers, and others about how best to incorporate Canadian hydropower into their plans and to determine how much time and money New England should spend to integrate more hydropower into its system. What they found out was surprising, even to them.

    The analytical methods

    To explore possible roles for Canadian hydropower to play in New England’s power system, the MIT researchers first needed to predict how the regional power system might look in 2050 — both the resources in place and how they would be operated, given any policy constraints. To perform that analysis, they used GenX, a modeling tool originally developed by Jesse Jenkins SM ’14, PhD ’18 and Nestor Sepulveda SM ’16, PhD ’20 while they were researchers at the MIT Energy Initiative (MITEI).

    The GenX model is designed to support decision-making related to power system investment and real-time operation and to examine the impacts of possible policy initiatives on those decisions. Given information on current and future technologies — different kinds of power plants, energy storage technologies, and so on — GenX calculates the combination of equipment and operating conditions that can meet a defined future demand at the lowest cost. The GenX modeling tool can also incorporate specified policy constraints, such as limits on carbon emissions.
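
    As a loose illustration (not GenX itself, which models far more technologies and constraints), a least-cost capacity-expansion problem of this kind can be posed as a linear program. The costs, availability profile, and four-hour horizon below are made up for the sketch:

```python
import numpy as np
from scipy.optimize import linprog

# Toy capacity-expansion LP in the spirit of GenX: choose solar and gas
# capacity plus hourly generation to meet demand at minimum cost, under
# an emissions cap. All numbers are illustrative assumptions.
T = 4                                # hours in the toy horizon
demand = np.array([1, 1, 1, 1.0])    # GW of demand each hour
avail = np.array([0, 1, 1, 0.0])     # solar availability (night/day/day/night)
fc_s, fc_g = 50.0, 100.0             # fixed cost per GW of solar / gas capacity
vc_g = 30.0                          # gas variable cost per GWh
emis_rate, emis_cap = 0.5, 1.5       # tCO2 per GWh of gas; total emissions cap

# Decision variables: [cap_solar, cap_gas, solar gen (T hours), gas gen (T hours)]
n = 2 + 2 * T
c = np.zeros(n)
c[0], c[1] = fc_s, fc_g              # capacity (fixed) costs
c[2 + T:] = vc_g                     # gas generation (variable) costs

# Equality constraints: solar_t + gas_t = demand_t for every hour
A_eq = np.zeros((T, n))
for t in range(T):
    A_eq[t, 2 + t] = 1               # solar generation at hour t
    A_eq[t, 2 + T + t] = 1           # gas generation at hour t
b_eq = demand

# Inequalities: generation limited by capacity, plus the emissions cap
A_ub = np.zeros((2 * T + 1, n))
b_ub = np.zeros(2 * T + 1)
for t in range(T):
    A_ub[t, 2 + t] = 1
    A_ub[t, 0] = -avail[t]           # solar_t <= avail_t * cap_solar
    A_ub[T + t, 2 + T + t] = 1
    A_ub[T + t, 1] = -1              # gas_t <= cap_gas
A_ub[2 * T, 2 + T:] = emis_rate      # sum of gas emissions <= cap
b_ub[2 * T] = emis_cap

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * n)
print(res.fun)  # minimum total system cost
```

    In this toy case the optimizer builds just enough gas capacity to cover the night hours and enough solar to serve all daytime demand, because solar's fixed cost is cheaper than running gas and the emissions cap binds gas output.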

    For their study, Dimanchev, Hodge, and Parsons set parameters in the GenX model using data and assumptions derived from a variety of sources to build a representation of the interconnected power systems in New England, New York, and Quebec. (They included New York to account for that state’s existing demand on the Canadian hydro resources.) For data on the available hydropower, they turned to Hydro-Québec, the public utility that owns and operates most of the hydropower plants in Quebec.

    It’s standard in such analyses to include real-world engineering constraints on equipment, such as how quickly certain power plants can be ramped up and down. With help from Hydro-Québec, the researchers also put hour-to-hour operating constraints on the hydropower resource.

    Most of Hydro-Québec’s plants are “reservoir hydropower” systems. In them, when power isn’t needed, the flow on a river is restrained by a dam downstream of a reservoir, and the reservoir fills up. When power is needed, the dam is opened, and the water in the reservoir runs through downstream pipes, turning turbines and generating electricity. Proper management of such a system requires adhering to certain operating constraints. For example, to prevent flooding, reservoirs must not be allowed to overfill — especially prior to spring snowmelt. And generation can’t be increased too quickly because a sudden flood of water could erode the river edges or disrupt fishing or water quality.

    Based on projections from the National Renewable Energy Laboratory and elsewhere, the researchers specified electricity demand for every hour of the year 2050, and the model calculated the cost-optimal mix of technologies and system operating regime that would satisfy that hourly demand, including the dispatch of the Hydro-Québec hydropower system. In addition, the model determined how electricity would be traded among New England, New York, and Quebec.

    Effects of decarbonization limits on technology mix and electricity trading

    To examine the impact of the emissions-reduction mandates in the New England states, the researchers ran the model assuming reductions in carbon emissions between 80 percent and 100 percent relative to 1990 levels. The results of those runs show that, as emissions limits get more stringent, New England uses more wind and solar and extends the lifetime of its existing nuclear plants. To balance the intermittency of the renewables, the region uses natural gas plants, demand-side management, battery storage (modeled as lithium-ion batteries), and trading with Quebec’s hydropower-based system. Meanwhile, the optimal mix in Quebec is mostly composed of existing hydro generation. Some solar is added, but new reservoirs are built only if renewable costs are assumed to be very high.

    The most significant — and perhaps surprising — outcome is that in all the scenarios, the hydropower-based system of Quebec is not only an exporter but also an importer of electricity, with the direction of flow on the Quebec-New England transmission lines changing over time.

    Historically, energy has always flowed from Quebec to New England. The model results for 2018 show electricity flowing from north to south, with the quantity capped by the current transmission capacity limit of 2,225 megawatts (MW).

    An analysis for 2050, assuming that New England decarbonizes 90 percent and the capacity of the transmission lines remains the same, finds electricity flows going both ways. Flows from north to south still dominate. But for nearly 3,500 of the 8,760 hours of the year, electricity flows in the opposite direction — from New England to Quebec. And for more than 2,200 of those hours, the flow going north is at the maximum the transmission lines can carry.

    The direction of flow is motivated by economics. When renewable generation is abundant in New England, prices are low, and it’s cheaper for Quebec to import electricity from New England and conserve water in its reservoirs. Conversely, when New England’s renewables are scarce and prices are high, New England imports hydro-generated electricity from Quebec.

    So, rather than simply delivering electricity, Canadian hydro provides a means of storing the electricity generated by the intermittent renewables in New England.

    “We see this in our modeling because when we tell the model to meet electricity demand using these resources, the model decides that it is cost-optimal to use the reservoirs to store energy rather than anything else,” says Dimanchev. “We should be sending the energy back and forth, so the reservoirs in Quebec are in essence a battery that we use to store some of the electricity produced by our intermittent renewables and discharge it when we need it.”
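
    The trading behavior the researchers describe can be caricatured in a few lines. This sketch uses invented numbers and a simple greedy rule rather than the model’s cost optimization, but it shows how the flow direction flips with New England’s net load (reservoir inflows and overfill limits are omitted for brevity):

```python
# Hypothetical "reservoir as battery" sketch: when New England's renewables
# exceed demand (negative net load), surplus flows north and water is
# conserved; when renewables fall short, hydro flows south. All values are
# assumptions, not results from the CEEPR study.
TX_LIMIT = 2.225           # GW, current Quebec-New England transmission capacity
reservoir = 50.0           # GWh of stored energy in the reservoir (arbitrary start)

net_load = [-3.0, -1.0, 2.0, 4.0]   # GW: NE demand minus NE renewable output
flows = []                           # positive = south (Quebec -> NE), negative = north
for nl in net_load:
    if nl >= 0:   # renewables scarce: import hydro, drawing down the reservoir
        flow = min(nl, TX_LIMIT, reservoir)
    else:         # renewables abundant: export the surplus, refilling the reservoir
        flow = -min(-nl, TX_LIMIT)
    reservoir -= flow      # discharges on southward flow, fills on northward flow
    flows.append(flow)

print(flows)       # e.g. [-2.225, -1.0, 2.0, 2.225]
print(reservoir)
```

    Note how both the northward and the southward flows hit the 2,225 MW transmission limit, which is exactly why the researchers went on to test an expanded-capacity scenario.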

    Given that outcome, the researchers decided to explore the impact of expanding the transmission capacity between New England and Quebec. Building transmission lines is always contentious, but what would be the impact if it could be done?

    Their model results show that when transmission capacity is increased from 2,225 MW to 6,225 MW, flows in both directions are greater, and in both cases the flow is at the new maximum for more than 1,000 hours.

    Results of the analysis thus confirm that the economic response to expanded transmission capacity is more two-way trading. To continue the battery analogy, more transmission capacity to and from Quebec effectively increases the rate at which the battery can be charged and discharged.

    Effects of two-way trading on the energy mix

    What impact would the advent of two-way trading have on the mix of energy-generating sources in New England and Quebec in 2050?

    Assuming current transmission capacity, in New England, the change from one-way to two-way trading increases both wind and solar power generation and to a lesser extent nuclear; it also decreases the use of natural gas with CCS. The hydro reservoirs in Canada can provide long-duration storage — over weeks, months, and even seasons — so there is less need for natural gas with CCS to cover any gaps in supply. The level of imports is slightly lower, but now there are also exports. Meanwhile, in Quebec, two-way trading reduces solar power generation, and the use of wind disappears. Exports are roughly the same, but now there are imports as well. Thus, two-way trading reallocates renewables from Quebec to New England, where it’s more economical to install and operate solar and wind systems.

    Another analysis examined the impact on the energy mix of assuming two-way trading plus expanded transmission capacity. For New England, greater transmission capacity allows wind, solar, and nuclear to expand further; natural gas with CCS all but disappears; and both imports and exports increase significantly. In Quebec, solar decreases still further, and both exports and imports of electricity increase.

    Those results assume that the New England power system decarbonizes by 99 percent in 2050 relative to 1990 levels. But at 90 percent and even 80 percent decarbonization levels, the model concludes that natural gas capacity decreases with the addition of new transmission relative to the current transmission scenario. Existing plants are retired, and new plants are not built as they are no longer economically justified. Since natural gas plants are the only source of carbon emissions in the 2050 energy system, the researchers conclude that the greater access to hydro reservoirs made possible by expanded transmission would accelerate the decarbonization of the electricity system.

    Effects of transmission changes on costs

    The researchers also explored how two-way trading with expanded transmission capacity would affect costs in New England and Quebec, assuming 99 percent decarbonization in New England. New England’s savings on fixed costs (investments in new equipment) are largely due to a decreased need to invest in more natural gas with CCS, and its savings on variable costs (operating costs) are due to a reduced need to run those plants. Quebec’s savings on fixed costs come from a reduced need to invest in solar generation. The increase in cost — borne by New England — reflects the construction and operation of the increased transmission capacity. The net benefit for the region is substantial.

    Thus, the analysis shows that everyone wins as transmission capacity increases — and the benefit grows as the decarbonization target tightens. At 99 percent decarbonization, the overall New England-Quebec region pays about $21 per megawatt-hour (MWh) of electricity with today’s transmission capacity but only $18/MWh with expanded transmission. Assuming 100 percent reduction in carbon emissions, the region pays $29/MWh with current transmission capacity and only $22/MWh with expanded transmission.

    Addressing misconceptions

    These results shed light on several misconceptions that policymakers, supporters of renewable energy, and others tend to have.

    The first misconception is that New England renewables and Canadian hydropower are competitors. The modeling results instead show that they’re complementary. When the power systems in New England and Quebec work together as an integrated system, the Canadian reservoirs are used part of the time to store the renewable electricity. And with more access to hydropower storage in Quebec, there’s generally more renewable investment in New England.

    The second misconception arises when policymakers refer to Canadian hydro as a “baseload resource,” which implies a dependable source of electricity — particularly one that supplies power all the time. “Our study shows that by viewing Canadian hydropower as a baseload source of electricity — or indeed a source of electricity at all — you’re not taking full advantage of what that resource can provide,” says Dimanchev. “What we show is that Quebec’s reservoir hydro can provide storage, specifically for wind and solar. It’s a solution to the intermittency problem that we foresee in carbon-free power systems for 2050.”

    While the MIT analysis focuses on New England and Quebec, the researchers believe that their results may have wider implications. As power systems in many regions expand production of renewables, the value of storage grows. Some hydropower systems have storage capacity that has not yet been fully utilized and could be a good complement to renewable generation. Taking advantage of that capacity can lower the cost of deep decarbonization and help move some regions toward a decarbonized supply of electricity.

    This research was funded by the MIT Center for Energy and Environmental Policy Research, which is supported in part by a consortium of industry and government associates.

    This article appears in the Autumn 2021 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • Q&A: Randolph Kirchain on how cool pavements can mitigate climate change

    As cities search for climate change solutions, many have turned to one burgeoning technology: cool pavements. By reflecting a greater proportion of solar radiation, cool pavements can offer an array of climate change mitigation benefits, from direct radiative forcing to reduced building energy demand.

    Yet, scientists from the MIT Concrete Sustainability Hub (CSHub) have found that cool pavements are not just a summertime solution. Here, Randolph Kirchain, a principal research scientist at CSHub, discusses how implementing cool pavements can offer myriad greenhouse gas reductions in cities — some of which occur even in the winter.

    Q: What exactly are cool pavements? 

    A: There are two ways to make a cool pavement: changing the pavement formulation to make the pavement porous like a sponge (a so-called “pervious pavement”), or paving with reflective materials. The latter method has been applied extensively because it can be easily adopted on the current road network with different traffic volumes while sustaining — and sometimes improving — the road longevity. To the average observer, surface reflectivity usually corresponds to the color of a pavement — the lighter, the more reflective. 

    We can quantify this surface reflectivity through a measurement called albedo, which refers to the percentage of light a surface reflects. Typically, a reflective pavement has an albedo of 0.3 or higher, meaning that it reflects at least 30 percent of the light it receives.

    To attain this reflectivity, there are a number of techniques at our disposal. The most common approach is to simply paint a brighter coating atop existing pavements. But it’s also possible to pave with materials that possess naturally greater reflectivity, such as concrete or lighter-colored binders and aggregates.
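
    To make the albedo arithmetic concrete, here is a back-of-the-envelope calculation with assumed values; the incident-radiation figure is illustrative, not taken from the CSHub study:

```python
# Extra solar radiation reflected (rather than absorbed as heat) when a
# typical asphalt surface is replaced by a cool pavement, per square meter.
# The incident-energy figure and both albedo values are assumptions.
incident = 5.0            # kWh per m^2 per day of incident solar radiation
albedo_asphalt = 0.10     # typical aged asphalt
albedo_cool = 0.30        # threshold for a "cool" reflective pavement

extra_reflected = incident * (albedo_cool - albedo_asphalt)
print(extra_reflected)    # kWh/m^2/day no longer absorbed as heat
```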

    Q: How can cool pavements mitigate climate change?

    A: Cool pavements generate several, often unexpected, effects. The most widely known is a reduction in surface and local air temperatures. This occurs because cool pavements absorb less radiation and, consequently, emit less of that radiation as heat. In the summer, this means they can lower urban air temperatures by several degrees Fahrenheit.

    By changing air temperatures or reflecting light into adjacent structures, cool pavements can also alter the need for heating and cooling in those structures, which can change their energy demand and, therefore, mitigate the climate change impacts associated with building energy demand.

    However, depending on how dense the neighborhood is built, a proportion of the radiation cool pavements reflect doesn’t strike buildings; instead, it travels back into the atmosphere and out into space. This process, called a radiative forcing, shifts the Earth’s energy balance and effectively offsets some of the radiation trapped by greenhouse gases (GHGs).

    Perhaps the least-known impact of cool pavements is on vehicle fuel consumption. Certain cool pavements, namely concrete, possess a combination of structural properties and longevity that can minimize the excess fuel consumption of vehicles caused by road quality. Over the lifetime of a pavement, these fuel savings can add up — often offsetting the higher initial footprint of paving with more durable materials.

    Q: With these impacts in mind, how do the effects of cool pavements vary seasonally and by location?

    A: Many view cool pavements as a solution to summer heat. But research has shown that they can offer climate change benefits throughout the year.

    On high-volume roads, the most prominent climate change benefit of cool pavements is not their reflectivity but their impact on vehicle fuel consumption. As such, cool pavement alternatives that minimize fuel consumption can continue to cut GHG emissions in winter, assuming traffic is constant.

    Even in winter, pavement reflectivity still contributes greatly to the climate change mitigation benefits of cool pavements. We found that roughly a third of the annual CO2-equivalent emissions reductions from the radiative forcing effects of cool pavements occurred in the fall and winter.

    It’s important to note, too, that the direction — not just the magnitude — of cool pavement impacts also varies seasonally. The most prominent seasonal variation is the change in building energy demand. As they lower air temperatures, cool pavements can lessen the demand for cooling in buildings in the summer, while, conversely, they can cause buildings to consume more energy and generate more emissions due to heating in the winter.

    Interestingly, the radiation reflected by cool pavements can also strike adjacent buildings, heating them up. In the summer, this can increase building energy demand significantly, yet in the winter it can also warm structures and reduce their need for heating. In that sense, cool pavements can warm — as well as cool — their surroundings, depending on the building insolation [solar exposure] systems and neighborhood density.

    Q: How can cities manage these many impacts?

    A: As you can imagine, such different and often competing impacts can complicate the implementation of cool pavements. In some contexts, for instance, a cool pavement might even generate more emissions over its life than a conventional pavement — despite lowering air temperatures.

    To ensure that the lowest-emitting pavement is selected, then, cities should use a life-cycle perspective that considers all potential impacts. When they do, research has shown that they can reap sizeable benefits. The city of Phoenix, for instance, could see its projected emissions fall by as much as 6 percent, while Boston would experience a reduction of up to 3 percent.

    These benefits don’t just demonstrate the potential of cool pavements: they also reflect the outsized impact of pavements on our built environment and, moreover, our climate. As cities move to fight climate change, they should know that one of their most extensive assets also presents an opportunity for greater sustainability.

    The MIT Concrete Sustainability Hub is a team of researchers from several departments across MIT working on concrete and infrastructure science, engineering, and economics. Its research is supported by the Portland Cement Association and the Ready Mixed Concrete Research and Education Foundation.

  • 3 Questions: What a single car can say about traffic

    Vehicle traffic has long defied description. Once measured roughly through visual inspection and traffic cameras, traffic is now quantified far more precisely by smartphone crowdsourcing tools. This popular method, however, also presents a problem: Accurate measurements require a lot of data and users.

    Meshkat Botshekan, an MIT PhD student in civil and environmental engineering and research assistant at the MIT Concrete Sustainability Hub, has sought to expand on crowdsourcing methods by looking into the physics of traffic. During his time as a doctoral candidate, he has helped develop Carbin, a smartphone-based roadway crowdsourcing tool created by MIT CSHub and the University of Massachusetts Dartmouth, and used its data to offer more insight into the physics of traffic — from the formation of traffic jams to the inference of traffic phase and driving behavior. Here, he explains how recent findings can allow smartphones to infer traffic properties from the measurements of a single vehicle.  

    Q: Numerous navigation apps already measure traffic. Why do we need alternatives?

    A: Traffic characteristics have always been tough to measure. In the past, visual inspection and cameras were used to produce traffic metrics. So, there’s no denying that today’s navigation apps offer a superior alternative. Yet even these modern tools have gaps.

    Chief among them is their dependence on spatially distributed user counts: Essentially, these apps tally up their users on road segments to estimate the density of traffic. While this approach may seem adequate, it is vulnerable to manipulation, as demonstrated in some viral videos, and it requires immense quantities of data for reliable estimates. Processing these data is so time- and resource-intensive that, despite their availability, they can’t be used to quantify traffic effectively across a whole road network. As a result, this immense quantity of traffic data isn’t actually optimal for traffic management.

    Q: How could new technologies improve how we measure traffic?

    A: New alternatives have the potential to offer two improvements over existing methods: First, they can extrapolate far more about traffic with far fewer data. Second, they can cost a fraction of the price while offering a far simpler method of data collection. Just like Waze and Google Maps, they rely on crowdsourcing data from users. Yet, they are grounded in the incorporation of high-level statistical physics into data analysis.

    For instance, the Carbin app, which we are developing in collaboration with UMass Dartmouth, applies principles of statistical physics to existing traffic models to entirely forgo the need for user counts. Instead, it can infer traffic density and driver behavior using the input of a smartphone mounted in a single vehicle.

    The method at the heart of the app, which was published last fall in Physical Review E, treats vehicles like particles in a many-body system. Just as the ergodic theorem of statistical physics lets us understand a closed many-body system by observing a single particle over time, we can characterize traffic through the fluctuations in speed and position of a single vehicle along a road. As a result, we can infer the behavior and density of traffic on a segment of a road.
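
    As a toy illustration of the single-vehicle idea — using the classic Greenshields fundamental diagram rather than the paper’s actual statistical-physics model, and with assumed parameters — a probe vehicle’s average speed can be inverted into a density estimate:

```python
# Illustrative density inference from one probe vehicle. This is NOT the
# Carbin method itself; it assumes traffic roughly follows the Greenshields
# fundamental diagram v = v_max * (1 - rho / rho_max), with made-up values
# for the free-flow speed v_max and jam density rho_max.
def estimate_density(mean_speed, v_max=30.0, rho_max=0.12):
    """mean_speed in m/s; returns density in vehicles per meter of lane."""
    return rho_max * (1.0 - mean_speed / v_max)

# A single smartphone-equipped car averaging 10 m/s over a road segment:
rho = estimate_density(10.0)
print(rho)   # ~0.08 veh/m, i.e. roughly 80 vehicles per kilometer
```

    The real method works with fluctuations in speed and position, not just the mean, but the inversion above captures the core idea: one vehicle’s motion carries information about the density of the traffic around it.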

    Because far less data is required, this method is faster and makes data management simpler. Most importantly, it also has the potential to make traffic data less expensive and more accessible to those who need it.

    Q: Who are some of the parties that would benefit from new technologies?

    A: More accessible and sophisticated traffic data would benefit more than just drivers seeking smoother, faster routes. It would also enable state and city departments of transportation (DOTs) to make local and collective interventions that advance the critical transportation objectives of equity, safety, and sustainability.

    As a safety solution, new data collection technologies could pinpoint dangerous driving conditions on a much finer scale to inform improved traffic calming measures. And since socially vulnerable communities experience traffic violence disproportionately, these interventions would have the added benefit of addressing pressing equity concerns. 

    There would also be an environmental benefit. DOTs could mitigate vehicle emissions by identifying minute deviations in traffic flow. This would present them with more opportunities to mitigate the idling and congestion that generate excess fuel consumption.  

    As we’ve seen, these three challenges have become increasingly acute, especially in urban areas. Yet, the data needed to address them exists already — and is being gathered by smartphones and telematics devices all over the world. So, to ensure a safer, more sustainable road network, it will be crucial to incorporate these data collection methods into our decision-making.


    3 Questions: Anuradha Annaswamy on building smart infrastructures

    Much of Anuradha Annaswamy’s research hinges on uncertainty. How does cloudy weather affect a grid powered by solar energy? How do we ensure that electricity is delivered to the consumer if a grid is powered by wind and the wind does not blow? What’s the best course of action if a bird hits a plane engine on takeoff? How can you predict the behavior of a cyber attacker?

    A senior research scientist in MIT’s Department of Mechanical Engineering, Annaswamy spends most of her research time dealing with decision-making under uncertainty. Designing smart infrastructures that are resilient to uncertainty can lead to safer, more reliable systems, she says.

    Annaswamy serves as the director of MIT’s Active Adaptive Control Laboratory. A world-leading expert in adaptive control theory, she was named president of the Institute of Electrical and Electronics Engineers Control Systems Society for 2020. Her team uses adaptive control and optimization to account for various uncertainties and anomalies in autonomous systems. In particular, they are developing smart infrastructures in the energy and transportation sectors.

    Using a combination of control theory, cognitive science, economic modeling, and cyber-physical systems, Annaswamy and her team have designed intelligent systems that could someday transform the way we travel and consume energy. Their research includes a diverse range of topics such as safer autopilot systems on airplanes, the efficient dispatch of resources in electrical grids, better ride-sharing services, and price-responsive railway systems.

    In a recent interview, Annaswamy spoke about how these smart systems could help support a safer and more sustainable future.

    Q: How is your team using adaptive control to make air travel safer?

    A: We want to develop an advanced autopilot system that can safely recover the airplane in the event of a severe anomaly — such as the wing becoming damaged mid-flight, or a bird flying into the engine. In the airplane, you have a pilot and autopilot to make decisions. We’re asking: How do you combine those two decision-makers?

    The answer we landed on was developing a shared pilot-autopilot control architecture. We collaborated with David Woods, an expert in cognitive engineering at The Ohio State University, to develop an intelligent system that takes the pilot’s behavior into account. For example, all humans have something known as “capacity for maneuver” and “graceful command degradation” that inform how we react in the face of adversity. Using mathematical models of pilot behavior, we proposed a shared control architecture where the pilot and the autopilot work together to make an intelligent decision on how to react in the face of uncertainties. In this system, the pilot reports the anomaly to an adaptive autopilot system that ensures resilient flight control.

    Q: How does your research on adaptive control fit into the concept of smart cities?

    A: Smart cities are an interesting way we can use intelligent systems to promote sustainability. Our team is looking at ride-sharing services in particular. Services like Uber and Lyft have provided new transportation options, but their impact on the carbon footprint has to be considered. We’re looking at developing a system where the number of passenger-miles per unit of energy is maximized through something called “shared mobility on demand services.” Using the alternating minimization approach, we’ve developed an algorithm that can determine the optimal route for multiple passengers traveling to various destinations.
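
    The alternating-minimization pattern is easy to see in miniature. The toy below is essentially one-dimensional k-means, not the routing algorithm from the research itself: it alternates between assigning each passenger to the nearest pickup point (with the pickup points fixed) and moving each pickup point to the position minimizing its group's total walking distance (with the assignment fixed).

```python
# Minimal sketch of alternating minimization, the general pattern behind
# the ride-pooling optimization described above (this toy is essentially
# 1-D k-means, not the authors' actual method): alternately
#   (1) fix the pickup points and assign each passenger to the nearest one,
#   (2) fix the assignment and move each pickup point to the mean of its
#       assigned passengers' positions.

def pool_passengers(positions, pickups, iters=20):
    groups = {}
    for _ in range(iters):
        # Step 1: assignment with pickups fixed.
        groups = {i: [] for i in range(len(pickups))}
        for x in positions:
            nearest = min(range(len(pickups)), key=lambda i: abs(x - pickups[i]))
            groups[nearest].append(x)
        # Step 2: pickup update with assignment fixed.
        pickups = [sum(g) / len(g) if g else pickups[i] for i, g in groups.items()]
    return pickups, groups

# Passengers clustered near 0 and near 10 converge to those two pickups:
pickups, groups = pool_passengers([0.0, 1.0, 2.0, 9.0, 10.0, 11.0], [0.0, 5.0])
# pickups -> [1.0, 10.0]
```

    Each step can only decrease the total walking distance, which is what makes the alternating scheme converge.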

    As with the pilot-autopilot dynamic, human behavior is at play here. In sociology there is an interesting concept of behavioral dynamics known as Prospect Theory. If we give passengers options with regards to which route their shared ride service will take, we are empowering them with free will to accept or reject a route. Prospect Theory shows that if you can use pricing as an incentive, people are much more loss-averse so they would be willing to walk a bit extra or wait a few minutes longer to join a low-cost ride with an optimized route. If everyone utilized a system like this, the carbon footprint of ride-sharing services could decrease substantially.
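
    The loss aversion invoked here can be written down directly. Below is the standard prospect-theory value function with the parameter estimates Tversky and Kahneman reported (alpha = beta = 0.88, lambda = 2.25); these are textbook values used for illustration, and the ride-sharing study may fit different ones.

```python
# The loss-averse value function from prospect theory, with the parameter
# estimates reported by Tversky and Kahneman (alpha = beta = 0.88,
# lambda = 2.25). Illustrative only -- the ride-sharing work may use
# different behavioral parameters.

def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Subjective value of a gain (x > 0) or loss (x < 0)."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)

# Losses loom larger than equivalent gains:
gain = prospect_value(10)    # ~7.59
loss = prospect_value(-10)   # ~-17.07, about 2.25 times the gain in magnitude
```

    Because a surcharge avoided feels larger than an equal discount gained, framing the optimized route as escaping a higher fare nudges passengers toward accepting the shared, lower-carbon option.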

    Q: What other ways are you using intelligent systems to promote sustainability?

    A: Renewable energy and sustainability are huge drivers for our research. To enable a world where all of our energy is coming from renewable sources like solar or wind, we need to develop a smart grid that can account for the fact that the sun isn’t always shining and wind isn’t always blowing. These uncertainties are the biggest hurdles to achieving an all-renewable grid. Of course, there are many technologies being developed for batteries that can help store renewable energy, but we are taking a different approach.

    We have created algorithms that can optimally schedule distributed energy resources within the grid — this includes making decisions on when to use onsite generators, how to operate storage devices, and when to call upon demand response technologies, all in response to the economics of using such resources and their physical constraints. If we can develop an interconnected smart grid where, for example, the air conditioning setting in a house is set to 72 degrees instead of 69 degrees automatically when demand is high, there could be a substantial savings in energy usage without impacting human comfort. In one of our studies, we applied a distributed proximal atomic coordination algorithm to the grid in Tokyo to demonstrate how this intelligent system could account for the uncertainties present in a grid powered by renewable resources.
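
    The scheduling decision described here can be caricatured in a few lines. The greedy merit-order dispatch below, which serves demand from the cheapest resources first, is optimal for a single time step but is only a stand-in for the distributed proximal atomic coordination algorithm; the costs and capacities are invented.

```python
# Toy merit-order dispatch: serve demand from the cheapest resources first.
# A greedy stand-in for the resource-scheduling optimization described
# above, not the distributed proximal atomic coordination algorithm itself.
# Costs and capacities are invented for illustration.

def dispatch(resources, demand):
    """resources: list of (name, cost_per_kWh, capacity_kWh). Returns {name: kWh}."""
    plan, remaining = {}, demand
    for name, cost, cap in sorted(resources, key=lambda r: r[1]):
        used = min(cap, remaining)
        plan[name] = used
        remaining -= used
    if remaining > 1e-9:
        raise ValueError("insufficient capacity to meet demand")
    return plan

plan = dispatch(
    [("generator", 0.10, 50.0),
     ("battery", 0.05, 20.0),
     ("demand_response", 0.15, 30.0)],
    demand=60.0,
)
# -> battery fully used (cheapest), generator covers the rest,
#    demand response untouched.
```

    The real problem adds time coupling (storage state of charge), network constraints, and uncertainty, which is why it calls for distributed optimization rather than a greedy pass.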


    Q&A: More-sustainable concrete with machine learning

    As a building material, concrete withstands the test of time. Its use dates back to early civilizations, and today it is the most popular composite choice in the world. However, it’s not without its faults. Production of its key ingredient, cement, contributes 8-9 percent of global anthropogenic CO2 emissions and 2-3 percent of energy consumption, figures that are only projected to increase in the coming years. With United States infrastructure aging, the federal government recently passed a milestone bill to revitalize and upgrade it, along with a push to reduce greenhouse gas emissions where possible, putting concrete in the crosshairs for modernization, too.

    Elsa Olivetti, the Esther and Harold E. Edgerton Associate Professor in the MIT Department of Materials Science and Engineering, and Jie Chen, MIT-IBM Watson AI Lab research scientist and manager, think artificial intelligence can help meet this need by designing and formulating new, more sustainable concrete mixtures, with lower costs and carbon dioxide emissions, while improving material performance and reusing manufacturing byproducts in the material itself. Olivetti’s research improves environmental and economic sustainability of materials, and Chen develops and optimizes machine learning and computational techniques, which he can apply to materials reformulation. Olivetti and Chen, along with their collaborators, have recently teamed up for an MIT-IBM Watson AI Lab project to make concrete more sustainable for the benefit of society, the climate, and the economy.

    Q: What applications does concrete have, and what properties make it a preferred building material?

    Olivetti: Concrete is the dominant building material globally with an annual consumption of 30 billion metric tons. That is over 20 times the next most produced material, steel, and the scale of its use leads to considerable environmental impact, approximately 5-8 percent of global greenhouse gas (GHG) emissions. It can be made locally, has a broad range of structural applications, and is cost-effective. Concrete is a mixture of fine and coarse aggregate, water, cement binder (the glue), and other additives.

    Q: Why isn’t it sustainable, and what research problems are you trying to tackle with this project?

    Olivetti: The community is working on several ways to reduce the impact of this material, including the use of alternative fuels for heating the cement mixture, increasing energy and materials efficiency, and carbon sequestration at production facilities, but one important opportunity is to develop an alternative to the cement binder.

    While cement is 10 percent of the concrete mass, it accounts for 80 percent of the GHG footprint. This impact derives not only from the fuel burned to heat and run the chemical reaction required in manufacturing, but also from the reaction itself, which releases CO2 through the calcination of limestone. Therefore, partially replacing the input ingredients to cement (traditionally ordinary Portland cement or OPC) with alternative materials from waste and byproducts can reduce the GHG footprint. But use of these alternatives is not inherently more sustainable, because wastes might have to travel long distances, which adds to fuel emissions and cost, or might require pretreatment processes. The optimal way to make use of these alternative materials will be situation-dependent. But because of the vast scale, we also need solutions that account for the huge volumes of concrete needed. This project is trying to develop novel concrete mixtures that will decrease the GHG impact of the cement and concrete, moving away from trial-and-error processes toward those that are more predictive.

    Chen: If we want to fight climate change and make our environment better, are there alternative ingredients or a reformulation we could use so that less greenhouse gas is emitted? We hope that through this project using machine learning we’ll be able to find a good answer.

    Q: Why is this problem important to address now, at this point in history?

    Olivetti: There is urgent need to address greenhouse gas emissions as aggressively as possible, and the road to doing so isn’t necessarily straightforward for all areas of industry. For transportation and electricity generation, there are paths that have been identified to decarbonize those sectors. We need to move much more aggressively to achieve those in the time needed; further, the technological approaches to achieve that are more clear. However, for tough-to-decarbonize sectors, such as industrial materials production, the pathways to decarbonization are not as mapped out.

    Q: How are you planning to address this problem to produce better concrete?

    Olivetti: The goal is to predict mixtures that will both meet performance criteria, such as strength and durability, with those that also balance economic and environmental impact. A key to this is to use industrial wastes in blended cements and concretes. To do this, we need to understand the glass and mineral reactivity of constituent materials. This reactivity not only determines the limit of the possible use in cement systems but also controls concrete processing, and the development of strength and pore structure, which ultimately control concrete durability and life-cycle CO2 emissions.

    Chen: We investigate using waste materials to replace part of the cement component. This is something that we’ve hypothesized would be more sustainable and economical: waste materials are common, and they cost less. Because of the reduction in the use of cement, the final concrete product would be responsible for much less carbon dioxide production. Figuring out the right concrete mixture proportion that makes durable concretes while achieving other goals is a very challenging problem. Machine learning is giving us an opportunity to explore the advancement of predictive modeling, uncertainty quantification, and optimization to solve the issue. What we are doing is exploring options using deep learning as well as multi-objective optimization techniques to find an answer. These efforts are now more feasible to carry out, and they will produce results with the reliability estimates that we need to understand what makes a good concrete.

    Q: What kinds of AI and computational techniques are you employing for this?

    Olivetti: We use AI techniques to collect data on individual concrete ingredients, mix proportions, and concrete performance from the literature through natural language processing. We also add data obtained from industry and/or high throughput atomistic modeling and experiments to optimize the design of concrete mixtures. Then we use this information to develop insight into the reactivity of possible waste and byproduct materials as alternatives to cement materials for low-CO2 concrete. By incorporating generic information on concrete ingredients, the resulting concrete performance predictors are expected to be more reliable and transformative than existing AI models.

    Chen: The final objective is to figure out what constituents, and how much of each, to put into the recipe for producing the concrete that optimizes the various factors: strength, cost, environmental impact, performance, etc. For each of the objectives, we need certain models: We need a model to predict the performance of the concrete (like, how long does it last and how much weight does it sustain?), a model to estimate the cost, and a model to estimate how much carbon dioxide is generated. We will need to build these models by using data from literature, from industry, and from lab experiments.

    We are exploring Gaussian process models to predict the concrete strength, going forward into days and weeks. This model can give us an uncertainty estimate of the prediction as well. Such a model requires parameters to be specified, which we will calculate using another model. At the same time, we also explore neural network models because we can inject domain knowledge from human experience into them. Some models are as simple as multi-layer perceptrons, while some are more complex, like graph neural networks. The goal here is that we want to have a model that is not only accurate but also robust — the input data is noisy, and the model must embrace the noise, so that its prediction is still accurate and reliable for the multi-objective optimization.
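
    For readers unfamiliar with the approach, here is Gaussian process regression from scratch with an RBF kernel, applied to hypothetical strength measurements. It illustrates the prediction-plus-uncertainty behavior described above; it is not the project's model, and the data and kernel settings are invented.

```python
# A minimal Gaussian process regressor with an RBF kernel, predicting
# "strength vs. curing time" with an uncertainty band. A from-scratch
# sketch of the modeling idea described above, not the project's actual
# model; all data and hyperparameters are invented.
import numpy as np

def rbf(a, b, length=7.0, var=100.0):
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_predict(x_train, y_train, x_test, noise=1.0):
    """Posterior mean and standard deviation at x_test."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf(x_train, x_test)
    K_ss = rbf(x_test, x_test)
    alpha = np.linalg.solve(K, y_train)
    mean = K_s.T @ alpha
    cov = K_ss - K_s.T @ np.linalg.solve(K, K_s)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

# Hypothetical compressive-strength measurements (MPa) at 1, 7, 28 days:
days = np.array([1.0, 7.0, 28.0])
strength = np.array([10.0, 25.0, 40.0])
mean, std = gp_predict(days, strength, np.array([14.0, 60.0]))
# Near the data (day 14) the model is confident; far from it (day 60)
# the standard deviation grows back toward the prior -- exactly the
# uncertainty estimate needed downstream in multi-objective optimization.
```

    This growing uncertainty away from the data is what lets the later optimization stage treat predictions cautiously where evidence is thin.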

    Once we have built models that we are confident with, we will inject their predictions and uncertainty estimates into the optimization of multiple objectives, under constraints and under uncertainties.

    Q: How do you balance cost-benefit trade-offs?

    Chen: The multiple objectives we consider are not necessarily consistent, and sometimes they are at odds with each other. The goal is to identify scenarios where the values for our objectives cannot be further pushed simultaneously without compromising one or a few. For example, if you want to further reduce the cost, you probably have to suffer the performance or suffer the environmental impact. Eventually, we will give the results to policymakers and they will look into the results and weigh the options. For example, they may be able to tolerate a slightly higher cost under a significant reduction in greenhouse gas. Alternatively, if the cost varies little but the concrete performance changes drastically, say, doubles or triples, then this is definitely a favorable outcome.
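
    The scenarios Chen describes, where no objective can be improved without compromising another, form the Pareto front, and identifying it from a set of candidates is straightforward. The mixtures and numbers below are hypothetical.

```python
# Identifying non-dominated mixtures: a candidate is Pareto-optimal if no
# other candidate is at least as good on every objective and strictly
# better on at least one. Mixture names and numbers are hypothetical.

def pareto_front(candidates):
    """candidates: {name: (cost, co2)} -- both objectives to be minimized."""
    front = []
    for name, obj in candidates.items():
        dominated = any(
            all(o2 <= o1 for o1, o2 in zip(obj, other))
            and any(o2 < o1 for o1, o2 in zip(obj, other))
            for other_name, other in candidates.items()
            if other_name != name
        )
        if not dominated:
            front.append(name)
    return front

mixes = {
    "OPC baseline":    (100.0, 100.0),
    "30% fly ash":     (90.0, 75.0),
    "50% slag":        (95.0, 60.0),
    "exotic additive": (130.0, 70.0),
}
front = pareto_front(mixes)
# -> ['30% fly ash', '50% slag']; the baseline and the exotic mix are
#    each beaten on both cost and CO2 by some other candidate.
```

    Policymakers would then choose among the front's members, trading a little cost for a lot of emissions reduction or vice versa.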

    Q: What kinds of challenges do you face in this work?

    Chen: The data we get either from industry or from literature are very noisy; the concrete measurements can vary a lot, depending on where and when they are taken. There is also a substantial amount of missing data when we integrate them from different sources, so we need to spend a lot of effort to organize and make the data usable for building and training machine learning models. We also explore imputation techniques that fill in missing features, as well as models that tolerate missing features, in our predictive modeling and uncertainty estimation.
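
    The simplest of these imputation techniques, filling each missing entry with its column mean, looks like this; the team's actual pipeline is more sophisticated, and the sample matrix is invented.

```python
# Column-mean imputation, the simplest of the imputation techniques
# mentioned above (illustrative only; the project also explores models
# that tolerate missing features directly). NaN marks a missing value.
import numpy as np

def impute_column_means(X):
    X = X.astype(float).copy()
    col_means = np.nanmean(X, axis=0)      # per-column mean, ignoring NaNs
    missing = np.isnan(X)
    X[missing] = np.take(col_means, np.nonzero(missing)[1])
    return X

# Rows: concrete samples; columns: e.g. water/cement ratio, slag fraction.
X = np.array([[0.40, 0.20],
              [np.nan, 0.30],
              [0.50, np.nan]])
filled = impute_column_means(X)
# -> [[0.40, 0.20],
#     [0.45, 0.30],
#     [0.50, 0.25]]
```

    Mean imputation ignores correlations between features, which is one reason to prefer models that handle missingness natively when the data allow it.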

    Q: What do you hope to achieve through this work?

    Chen: In the end, we are suggesting either one or a few concrete recipes, or a continuum of recipes, to manufacturers and policymakers. We hope that this will provide invaluable information for both the construction industry and for the effort of protecting our beloved Earth.

    Olivetti: We’d like to develop a robust way to design cements that make use of waste materials to lower their CO2 footprint. Nobody is trying to make waste, so we can’t rely on one stream as a feedstock if we want this to be massively scalable. We have to be flexible and robust enough to shift with feedstock changes, and for that we need improved understanding. Our approach to developing local, dynamic, and flexible alternatives is to learn what makes these wastes reactive, so we know how to optimize their use and do so as broadly as possible. We do that through predictive model development using software we have developed in my group to automatically extract data from over 5 million texts and patents on various topics. We link this to the creative capabilities of our IBM collaborators to design methods that predict the final impact of new cements. If we are successful, we can lower the emissions of this ubiquitous material and play our part in achieving carbon emissions mitigation goals.

    Other researchers involved with this project include Stefanie Jegelka, the X-Window Consortium Career Development Associate Professor in the MIT Department of Electrical Engineering and Computer Science; Richard Goodwin, IBM principal researcher; Soumya Ghosh, MIT-IBM Watson AI Lab research staff member; and Kristen Severson, former research staff member. Collaborators included Nghia Hoang, former research staff member with MIT-IBM Watson AI Lab and IBM Research; and Jeremy Gregory, research scientist in the MIT Department of Civil and Environmental Engineering and executive director of the MIT Concrete Sustainability Hub.

    This research is supported by the MIT-IBM Watson AI Lab.


    Coupling power and hydrogen sector pathways to benefit decarbonization

    Governments and companies worldwide are increasing their investments in hydrogen research and development, indicating a growing recognition that hydrogen could play a significant role in meeting global energy system decarbonization goals. Since hydrogen is light, energy-dense, storable, and produces no direct carbon dioxide emissions at the point of use, this versatile energy carrier has the potential to be harnessed in a variety of ways in a future clean energy system.

    Often considered in the context of grid-scale energy storage, hydrogen has garnered renewed interest, in part due to expectations that our future electric grid will be dominated by variable renewable energy (VRE) sources such as wind and solar, as well as decreasing costs for water electrolyzers — both of which could make clean, “green” hydrogen more cost-competitive with fossil-fuel-based production. But hydrogen’s versatility as a clean energy fuel also makes it an attractive option to meet energy demand and to open pathways for decarbonization in hard-to-abate sectors where direct electrification is difficult, such as transportation, buildings, and industry.

    “We’ve seen a lot of progress and analysis around pathways to decarbonize electricity, but we may not be able to electrify all end uses. This means that just decarbonizing electricity supply is not sufficient, and we must develop other decarbonization strategies as well,” says Dharik Mallapragada, a research scientist at the MIT Energy Initiative (MITEI). “Hydrogen is an interesting energy carrier to explore, but understanding the role for hydrogen requires us to study the interactions between the electricity system and a future hydrogen supply chain.”

    In a recent paper, researchers from MIT and Shell present a framework to systematically study the role and impact of hydrogen-based technology pathways in a future low-carbon, integrated energy system, taking into account interactions with the electric grid and the spatio-temporal variations in energy demand and supply. The developed framework co-optimizes infrastructure investment and operation across the electricity and hydrogen supply chain under various emissions price scenarios. When applied to a Northeast U.S. case study, the researchers find this approach results in substantial benefits — in terms of costs and emissions reduction — as it takes advantage of hydrogen’s potential to provide the electricity system with a large flexible load when produced through electrolysis, while also enabling decarbonization of difficult-to-electrify, end-use sectors.

    The research team includes Mallapragada; Guannan He, a postdoc at MITEI; Abhishek Bose, a graduate research assistant at MITEI; Clara Heuberger-Austin, a researcher at Shell; and Emre Gençer, a research scientist at MITEI. Their findings are published in the journal Energy & Environmental Science.

    Cross-sector modeling

    “We need a cross-sector framework to analyze each energy carrier’s economics and role across multiple systems if we are to really understand the cost/benefits of direct electrification or other decarbonization strategies,” says He.

    To do that analysis, the team developed the Decision Optimization of Low-carbon Power-HYdrogen Network (DOLPHYN) model, which allows the user to study the role of hydrogen in low-carbon energy systems, the effects of coupling the power and hydrogen sectors, and the trade-offs between various technology options across both supply chains — spanning production, transport, storage, and end use, and their impact on decarbonization goals.

    “We are seeing great interest from industry and government, because they are all asking questions about where to invest their money and how to prioritize their decarbonization strategies,” says Gençer. Heuberger-Austin adds, “Being able to assess the system-level interactions between electricity and the emerging hydrogen economy is of paramount importance to drive technology development and support strategic value chain decisions. The DOLPHYN model can be instrumental in tackling those kinds of questions.”

    For a predefined set of electricity and hydrogen demand scenarios, the model determines the least-cost technology mix across the power and hydrogen sectors while adhering to a variety of operation and policy constraints. The model can incorporate a range of technology options — from VRE generation to carbon capture and storage (CCS) used with both power and hydrogen generation to trucks and pipelines used for hydrogen transport. With its flexible structure, the model can be readily adapted to represent emerging technology options and evaluate their long-term value to the energy system.

    As an important addition, the model takes into account process-level carbon emissions by allowing the user to add a cost penalty on emissions in both sectors. “If you have a limited emissions budget, we are able to explore the question of where to prioritize the limited emissions to get the best bang for your buck in terms of decarbonization,” says Mallapragada.
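
    The effect of such an emissions penalty can be shown in miniature: give each hydrogen production route a direct cost and an emissions intensity, and pick the route minimizing cost plus carbon price times emissions. The routes and numbers below are purely illustrative, not DOLPHYN inputs.

```python
# How an emissions price steers technology choice, in miniature. Each
# hydrogen route has a direct cost ($/kg H2) and an emissions intensity
# (kg CO2 per kg H2); the effective cost adds the carbon penalty.
# All numbers are invented for illustration, not DOLPHYN inputs.

ROUTES = {
    "SMR (natural gas)":       {"cost": 1.5, "co2": 9.0},
    "electrolysis (grid VRE)": {"cost": 4.0, "co2": 1.0},
}

def cheapest_route(carbon_price):
    """carbon_price in $/kg CO2; returns the route with lowest effective cost."""
    return min(ROUTES, key=lambda r: ROUTES[r]["cost"]
               + carbon_price * ROUTES[r]["co2"])

low = cheapest_route(0.0)    # with no penalty, fossil-based SMR wins
high = cheapest_route(0.50)  # once carbon is priced, electrolysis wins
```

    With these numbers the switch happens at a carbon price of about $0.31/kg CO2; DOLPHYN performs the analogous calculation across entire supply chains and time horizons.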

    Insights from a case study

    To test their model, the researchers investigated the Northeast U.S. energy system under a variety of demand, technology, and carbon price scenarios. While their major conclusions can be generalized for other regions, the Northeast proved to be a particularly interesting case study. This region has current legislation and regulatory support for renewable generation, as well as increasing emission-reduction targets, a number of which are quite stringent. It also has a high demand for energy for heating — a sector that is difficult to electrify and could particularly benefit from hydrogen and from coupling the power and hydrogen systems.

    The researchers find that when combining the power and hydrogen sectors through electrolysis or hydrogen-based power generation, there is more operational flexibility to support VRE integration in the power sector and a reduced need for alternative grid-balancing supply-side resources such as battery storage or dispatchable gas generation, which in turn reduces the overall system cost. This increased VRE penetration also leads to a reduction in emissions compared to scenarios without sector-coupling. “The flexibility that electricity-based hydrogen production provides in terms of balancing the grid is as important as the hydrogen it is going to produce for decarbonizing other end uses,” says Mallapragada. They found this type of grid interaction to be more favorable than conventional hydrogen-based electricity storage, which can incur additional capital costs and efficiency losses when converting hydrogen back to power. This suggests that the role of hydrogen in the grid could be more beneficial as a source of flexible demand than as storage.

    The researchers’ multi-sector modeling approach also highlighted that CCS is more cost-effective when utilized in the hydrogen supply chain than in the power sector. They note that, counter to this observation, by the end of the decade six times more CCS projects will be deployed in the power sector than for use in hydrogen production — a fact that emphasizes the need for more cross-sectoral modeling when planning future energy systems.

    In this study, the researchers tested the robustness of their conclusions against a number of factors, such as how the inclusion of non-combustion greenhouse gas emissions (including methane emissions) from natural gas used in power and hydrogen production impacts the model outcomes. They find that including the upstream emissions footprint of natural gas within the model boundary does not impact the value of sector coupling in regards to VRE integration and cost savings for decarbonization; in fact, the value actually grows because of the increased emphasis on electricity-based hydrogen production over natural gas-based pathways.

    “You cannot achieve climate targets unless you take a holistic approach,” says Gençer. “This is a systems problem. There are sectors that you cannot decarbonize with electrification, and there are other sectors that you cannot decarbonize without carbon capture, and if you think about everything together, there is a synergistic solution that significantly minimizes the infrastructure costs.”

    This research was supported, in part, by Shell Global Solutions International B.V. in Amsterdam, the Netherlands, and MITEI’s Low-Carbon Energy Centers for Electric Power Systems and Carbon Capture, Utilization, and Storage.


    New “risk triage” platform pinpoints compounding threats to US infrastructure

    Over a 36-hour period in August, Hurricane Henri delivered record rainfall in New York City, where an aging storm-sewer system was not built to handle the deluge, resulting in street flooding. Meanwhile, an ongoing drought in California continued to overburden aquifers and extend statewide water restrictions. As climate change amplifies the frequency and intensity of extreme events in the United States and around the world, and the populations and economies they threaten grow and change, there is a critical need to make infrastructure more resilient. But how can this be done in a timely, cost-effective way?

    An emerging discipline called multi-sector dynamics (MSD) offers a promising solution. MSD homes in on compounding risks and potential tipping points across interconnected natural and human systems. Tipping points occur when these systems can no longer sustain multiple, co-evolving stresses, such as extreme events, population growth, land degradation, drinkable water shortages, air pollution, aging infrastructure, and increased human demands. MSD researchers use observations and computer models to identify key precursory indicators of such tipping points, providing decision-makers with critical information that can be applied to mitigate risks and boost resilience in infrastructure and managed resources.

    At MIT, the Joint Program on the Science and Policy of Global Change has since 2018 been developing MSD expertise and modeling tools and using them to explore compounding risks and potential tipping points in selected regions of the United States. In a two-hour webinar on Sept. 15, MIT Joint Program researchers presented an overview of the program’s MSD research tool set and its applications.  

    MSD and the risk triage platform

    “Multi-sector dynamics explores interactions and interdependencies among human and natural systems, and how these systems may adapt, interact, and co-evolve in response to short-term shocks and long-term influences and stresses,” says MIT Joint Program Deputy Director C. Adam Schlosser, noting that such analysis can reveal and quantify potential risks that would likely evade detection in siloed investigations. “These systems can experience cascading effects or failures after crossing tipping points. The real question is not just where these tipping points are in each system, but how they manifest and interact across all systems.”

    To address that question, the program’s MSD researchers have developed the MIT Socio-Environmental Triage (MST) platform, now publicly available for the first time. Focused on the continental United States, the first version of the platform analyzes present-day risks related to water, land, climate, the economy, energy, demographics, health, and infrastructure, and where these compound to create risk hot spots. It’s essentially a screening-level visualization tool that allows users to examine risks, identify hot spots when combining risks, and make decisions about how to deploy more in-depth analysis to solve complex problems at regional and local levels. For example, MST can identify hot spots for combined flood and poverty risks in the lower Mississippi River basin, and thereby alert decision-makers as to where more concentrated flood-control resources are needed.
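
    A screening-level combination of risk layers, in the spirit of (but far simpler than) the MST platform, can be sketched as follows: normalize each layer to [0, 1], average across layers, and flag locations above a threshold. The counties and values are hypothetical.

```python
# Screening-level risk triage in miniature: normalize each risk layer,
# average the layers, and flag combined-risk hot spots. The counties and
# values are hypothetical, and the MST platform's actual methodology is
# far richer than this min-max-and-average sketch.

def normalize(values):
    """Min-max scale a {location: value} layer to [0, 1]."""
    lo, hi = min(values.values()), max(values.values())
    return {k: (v - lo) / (hi - lo) for k, v in values.items()}

def hot_spots(layers, threshold=0.7):
    norm = [normalize(layer) for layer in layers]
    combined = {c: sum(n[c] for n in norm) / len(norm) for c in norm[0]}
    return {c for c, score in combined.items() if score >= threshold}

flood   = {"A": 10.0, "B": 80.0, "C": 90.0}  # e.g. expected flood exposure
poverty = {"A": 5.0,  "B": 30.0, "C": 28.0}  # e.g. poverty rate, percent

spots = hot_spots([flood, poverty])
# -> {'B', 'C'}: high on both layers, so flood-control resources and
#    equity-focused interventions would overlap there.
```

    The value of even this crude overlay is that neither layer alone identifies where risks compound; the combination does.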

    Successive versions of the platform will incorporate projections based on the MIT Joint Program’s Integrated Global System Modeling (IGSM) framework of how different systems and stressors may co-evolve into the future and thereby change the risk landscape. This enhanced capability could help uncover cost-effective pathways for mitigating and adapting to a wide range of environmental and economic risks.  

    MSD applications

    Five webinar presentations explored how MIT Joint Program researchers are applying the program’s risk triage platform and other MSD modeling tools to identify potential tipping points and risks in five key domains: water quality, land use, economics and energy, health, and infrastructure. 

    Joint Program Principal Research Scientist Xiang Gao described her efforts to apply a high-resolution U.S. water-quality model to calculate a location-specific, water-quality index over more than 2,000 river basins in the country. By accounting for interactions among climate, agriculture, and socioeconomic systems, various water-quality measures can be obtained ranging from nitrate and phosphate levels to phytoplankton concentrations. This modeling approach advances a unique capability to identify potential water-quality risk hot spots for freshwater resources.

    Joint Program Research Scientist Angelo Gurgel discussed his MSD-based analysis of how climate change, population growth, changing diets, crop-yield improvements, and other forces that drive land-use change at the global level may ultimately impact how land is used in the United States. Drawing upon national observational data and the IGSM framework, the analysis shows that while current U.S. land-use trends are projected to persist or intensify between now and 2050, there is no evidence of any concerning tipping points arising throughout this period.

    MIT Joint Program Research Scientist Jennifer Morris presented several examples of how the risk triage platform can be used to combine existing U.S. datasets and the IGSM framework to assess energy and economic risks at the regional level. For example, by aggregating separate data streams on fossil-fuel employment and poverty, one can target selected counties for clean energy job training programs as the nation moves toward a low-carbon future. 

    “Our modeling and risk triage frameworks can provide pictures of current and projected future economic and energy landscapes,” says Morris. “They can also highlight interactions among different human, built, and natural systems, including compounding risks that occur in the same location.”  
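The county-targeting example Morris describes, aggregating fossil-fuel employment and poverty data to select candidates for clean energy job training, amounts to intersecting thresholded data streams. The sketch below uses invented county data and thresholds; none of it comes from the Joint Program's datasets.

```python
# Hypothetical county-level data streams (not actual Joint Program data)
counties = ["A", "B", "C", "D"]
fossil_fuel_jobs_share = {"A": 0.12, "B": 0.02, "C": 0.09, "D": 0.01}
poverty_rate = {"A": 0.22, "B": 0.25, "C": 0.08, "D": 0.10}

# Flag counties where both indicators exceed chosen thresholds --
# i.e., candidates for clean energy job training programs
targets = [c for c in counties
           if fossil_fuel_jobs_share[c] > 0.05 and poverty_rate[c] > 0.15]
print(targets)  # → ['A']
```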

    MIT Joint Program research affiliate Sebastian Eastham, a research scientist at the MIT Laboratory for Aviation and the Environment, described an MSD approach to the study of air pollution and public health. Linking the IGSM with an atmospheric chemistry model, Eastham ultimately aims to better understand where the greatest health risks are in the United States and how they may compound throughout this century under different policy scenarios. Using the risk triage tool to combine air-quality and poverty risk metrics for a selected county, based on current population and air-quality data, he showed how one can rapidly identify hot spots for cardiovascular and other air-pollution-induced disease risks.

    Finally, MIT Joint Program research affiliate Alyssa McCluskey, a lecturer at the University of Colorado at Boulder, showed how the risk triage tool can be used to pinpoint potential risks to roadways, waterways, and power distribution lines from flooding, extreme temperatures, population growth, and other stressors. In addition, McCluskey described how transportation and energy infrastructure development and expansion can threaten critical wildlife habitats.

    Enabling comprehensive, location-specific analyses of risks and hot spots within and among multiple domains, the Joint Program’s MSD modeling tools can be used to inform policymaking and investment from the municipal to the global level.

    “MSD takes on the challenge of linking human, natural, and infrastructure systems in order to inform risk analysis and decision-making,” says Schlosser. “Through our risk triage platform and other MSD models, we plan to assess important interactions and tipping points, and to provide foresight that supports action toward a sustainable, resilient, and prosperous world.”

    This research is funded by the U.S. Department of Energy’s Office of Science as an ongoing project.