More stories

  • MIT geologists discover where energy goes during an earthquake

    The ground-shaking that an earthquake generates is only a fraction of the total energy that a quake releases. A quake can also generate a flash of heat, along with a domino-like fracturing of underground rocks. But exactly how much energy goes into each of these three processes is exceedingly difficult, if not impossible, to measure in the field.

    Now MIT geologists have traced the energy that is released by “lab quakes” — miniature analogs of natural earthquakes that are carefully triggered in a controlled laboratory setting. For the first time, they have quantified the complete energy budget of such quakes, in terms of the fraction of energy that goes into heat, shaking, and fracturing.

    They found that only about 10 percent of a lab quake’s energy causes physical shaking. An even smaller fraction — less than 1 percent — goes into breaking up rock and creating new surfaces. The overwhelming portion of a quake’s energy — on average 80 percent — goes into heating up the immediate region around a quake’s epicenter. In fact, the researchers observed that a lab quake can produce a temperature spike hot enough to melt surrounding material and turn it briefly into liquid melt.

    The geologists also found that a quake’s energy budget depends on a region’s deformation history — the degree to which rocks have been shifted and disturbed by previous tectonic motions. The fractions of quake energy that produce heat, shaking, and rock fracturing can shift depending on what the region has experienced in the past.

    “The deformation history — essentially what the rock remembers — really influences how destructive an earthquake could be,” says Daniel Ortega-Arroyo, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “That history affects a lot of the material properties in the rock, and it dictates to some degree how it is going to slip.”

    The team’s lab quakes are a simplified analog of what occurs during a natural earthquake. Down the road, their results could help seismologists predict the likelihood of earthquakes in regions that are prone to seismic events. For instance, if scientists have an idea of how much shaking a quake generated in the past, they might be able to estimate the degree to which the quake’s energy also affected rocks deep underground by melting or breaking them apart. This in turn could reveal how much more or less vulnerable the region is to future quakes.

    “We could never reproduce the complexity of the Earth, so we have to isolate the physics of what is happening in these lab quakes,” says Matěj Peč, associate professor of geophysics at MIT. “We hope to understand these processes and try to extrapolate them to nature.”

    Peč (pronounced “Peck”) and Ortega-Arroyo reported their results on Aug. 28 in the journal AGU Advances. Their MIT co-authors are Hoagy O’Ghaffari and Camilla Cattania, along with Zheng Gong and Roger Fu at Harvard University and Markus Ohl and Oliver Plümper at Utrecht University in the Netherlands.

    Under the surface

    Earthquakes are driven by energy that is stored up in rocks over millions of years. As tectonic plates slowly grind against each other, stress accumulates through the crust. When rocks are pushed past their material strength, they can suddenly slip along a narrow zone, creating a geologic fault. As rocks slip on either side of the fault, they produce seismic waves that ripple outward and upward.

    We perceive an earthquake’s energy mainly in the form of ground shaking, which can be measured using seismometers and other ground-based instruments. But the other two major forms of a quake’s energy — heat and underground fracturing — are largely inaccessible with current technologies.

    “Unlike the weather, where we can see daily patterns and measure a number of pertinent variables, it’s very hard to do that very deep in the Earth,” Ortega-Arroyo says. “We don’t know what’s happening to the rocks themselves, and the timescales over which earthquakes repeat within a fault zone are on the century-to-millennia timescales, making any sort of actionable forecast challenging.”

    To get an idea of how an earthquake’s energy is partitioned, and how that energy budget might affect a region’s seismic risk, he and Peč went into the lab. Over the last seven years, Peč’s group at MIT has developed methods and instrumentation to simulate seismic events, at the microscale, in an effort to understand how earthquakes at the macroscale may play out.

    “We are focusing on what’s happening on a really small scale, where we can control many aspects of failure and try to understand it before we can do any scaling to nature,” Ortega-Arroyo says.

    Microshakes

    For their new study, the team generated miniature lab quakes that simulate a seismic slipping of rocks along a fault zone. They worked with small samples of granite, which are representative of rocks in the seismogenic layer — the geologic region in the continental crust where earthquakes typically originate. They ground up the granite into a fine powder and mixed the crushed granite with a much finer powder of magnetic particles, which they used as a sort of internal temperature gauge. (A particle’s magnetic field strength will change in response to a fluctuation in temperature.)

    The researchers placed samples of the powdered granite — each about 10 square millimeters in area and 1 millimeter thick — between two small pistons and wrapped the ensemble in a gold jacket. They then applied a strong magnetic field to orient the powder’s magnetic particles in the same initial direction and to the same field strength. They reasoned that any change in the particles’ orientation and field strength afterward should be a sign of how much heat that region experienced as a result of any seismic event.

    Once samples were prepared, the team placed them one at a time into a custom-built apparatus that the researchers tuned to apply steadily increasing pressure, similar to the pressures that rocks experience in the Earth’s seismogenic layer, about 10 to 20 kilometers below the surface. They used custom-made piezoelectric sensors, developed by co-author O’Ghaffari, which they attached to either end of a sample to measure any shaking that occurred as they increased the stress on the sample.

    They observed that at certain stresses, some samples slipped, producing a microscale seismic event similar to an earthquake. By analyzing the magnetic particles in the samples after the fact, they obtained an estimate of how much each sample was temporarily heated — a method developed in collaboration with Roger Fu’s lab at Harvard University. They also estimated the amount of shaking each sample experienced, using measurements from the piezoelectric sensors and numerical models.

    The researchers also examined each sample under the microscope, at different magnifications, to assess how the size of the granite grains changed — whether and how many grains broke into smaller pieces, for instance.

    From all these measurements, the team was able to estimate each lab quake’s energy budget. On average, they found that about 80 percent of a quake’s energy goes into heat, while 10 percent generates shaking, and less than 1 percent goes into rock fracturing, or creating new, smaller particle surfaces.

    “In some instances we saw that, close to the fault, the sample went from room temperature to 1,200 degrees Celsius in a matter of microseconds, and then immediately cooled down once the motion stopped,” Ortega-Arroyo says. “And in one sample, we saw the fault move by about 100 microns, which implies slip velocities essentially about 10 meters per second. It moves very fast, though it doesn’t last very long.”

    The researchers suspect that similar processes play out in actual, kilometer-scale quakes.

    “Our experiments offer an integrated approach that provides one of the most complete views of the physics of earthquake-like ruptures in rocks to date,” Peč says. “This will provide clues on how to improve our current earthquake models and natural hazard mitigation.”

    This research was supported, in part, by the National Science Foundation.
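    The arithmetic behind such an energy budget is simple once the individual terms are estimated. Below is a minimal sketch, assuming per-event heat, radiated (shaking), and fracture-surface energies have already been derived from the magnetic, piezoelectric, and microscopy measurements; the function, variable names, and example values are illustrative only, not the team’s actual analysis code.

    ```python
    # Hypothetical illustration of a lab-quake energy budget, not the authors' pipeline.
    # Inputs are per-event energy estimates in joules; the example values are made up
    # to mirror the reported averages (~80% heat, ~10% shaking, <1% fracturing).

    def energy_budget(total_j, heat_j, radiated_j, fracture_j):
        """Express each measured term as a fraction of the total released energy."""
        fractions = {
            "heat": heat_j / total_j,
            "shaking": radiated_j / total_j,
            "fracturing": fracture_j / total_j,
        }
        # Whatever is not accounted for by the three measured terms.
        fractions["unresolved"] = 1.0 - sum(fractions.values())
        return fractions

    budget = energy_budget(total_j=1.0e-2, heat_j=8.0e-3,
                           radiated_j=1.0e-3, fracture_j=5.0e-5)
    for term, fraction in budget.items():
        print(f"{term}: {fraction:.1%}")
    ```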

  • Simpler models can outperform deep learning at climate prediction

    Environmental scientists are increasingly using enormous artificial intelligence models to make predictions about changes in weather and climate, but a new study by MIT researchers shows that bigger models are not always better.

    The team demonstrates that, in certain climate scenarios, much simpler, physics-based models can generate more accurate predictions than state-of-the-art deep-learning models.

    Their analysis also reveals that a benchmarking technique commonly used to evaluate machine-learning techniques for climate predictions can be distorted by natural variations in the data, like fluctuations in weather patterns. This could lead someone to believe a deep-learning model makes more accurate predictions when that is not the case.

    The researchers developed a more robust way of evaluating these techniques, which shows that, while simple models are more accurate when estimating regional surface temperatures, deep-learning approaches can be the best choice for estimating local rainfall.

    They used these results to enhance a simulation tool known as a climate emulator, which can rapidly simulate the effect of human activities on a future climate.

    The researchers see their work as a “cautionary tale” about the risk of deploying large AI models for climate science. While deep-learning models have shown incredible success in domains such as natural language, climate science contains a proven set of physical laws and approximations, and the challenge becomes how to incorporate those into AI models.

    “We are trying to develop models that are going to be useful and relevant for the kinds of things that decision-makers need going forward when making climate policy choices. While it might be attractive to use the latest, big-picture machine-learning model on a climate problem, what this study shows is that stepping back and really thinking about the problem fundamentals is important and useful,” says study senior author Noelle Selin, a professor in the MIT Institute for Data, Systems, and Society (IDSS) and the Department of Earth, Atmospheric and Planetary Sciences (EAPS).

    Selin’s co-authors are lead author Björn Lütjens, a former EAPS postdoc who is now a research scientist at IBM Research; senior author Raffaele Ferrari, the Cecil and Ida Green Professor of Oceanography in EAPS and co-director of the Lorenz Center; and Duncan Watson-Parris, assistant professor at the University of California at San Diego. Selin and Ferrari are also co-principal investigators of the Bringing Computation to the Climate Challenge project, out of which this research emerged. The paper appears today in the Journal of Advances in Modeling Earth Systems.

    Comparing emulators

    Because the Earth’s climate is so complex, running a state-of-the-art climate model to predict how pollution levels will impact environmental factors like temperature can take weeks on the world’s most powerful supercomputers.

    Scientists often create climate emulators, simpler approximations of a state-of-the-art climate model, which are faster and more accessible. A policymaker could use a climate emulator to see how alternative assumptions on greenhouse gas emissions would affect future temperatures, helping them develop regulations.

    But an emulator isn’t very useful if it makes inaccurate predictions about the local impacts of climate change. While deep learning has become increasingly popular for emulation, few studies have explored whether these models perform better than tried-and-true approaches.

    The MIT researchers performed such a study. They compared a traditional technique called linear pattern scaling (LPS) with a deep-learning model, using a common benchmark dataset for evaluating climate emulators. Their results showed that LPS outperformed deep-learning models on predicting nearly all parameters they tested, including temperature and precipitation.

    “Large AI methods are very appealing to scientists, but they rarely solve a completely new problem, so implementing an existing solution first is necessary to find out whether the complex machine-learning approach actually improves upon it,” says Lütjens.

    Some initial results seemed to fly in the face of the researchers’ domain knowledge. The powerful deep-learning model should have been more accurate when making predictions about precipitation, since those data don’t follow a linear pattern.

    They found that the high amount of natural variability in climate model runs can cause the deep-learning model to perform poorly on unpredictable long-term oscillations, like El Niño/La Niña. This skews the benchmarking scores in favor of LPS, which averages out those oscillations.

    Constructing a new evaluation

    From there, the researchers constructed a new evaluation with more data that address natural climate variability. With this new evaluation, the deep-learning model performed slightly better than LPS for local precipitation, but LPS was still more accurate for temperature predictions.

    “It is important to use the modeling tool that is right for the problem, but in order to do that you also have to set up the problem the right way in the first place,” Selin says.

    Based on these results, the researchers incorporated LPS into a climate emulation platform to predict local temperature changes in different emission scenarios.

    “We are not advocating that LPS should always be the goal. It still has limitations. For instance, LPS doesn’t predict variability or extreme weather events,” Ferrari adds.

    Rather, they hope their results emphasize the need to develop better benchmarking techniques, which could provide a fuller picture of which climate emulation technique is best suited for a particular situation.

    “With an improved climate emulation benchmark, we could use more complex machine-learning methods to explore problems that are currently very hard to address, like the impacts of aerosols or estimations of extreme precipitation,” Lütjens says.

    Ultimately, more accurate benchmarking techniques will help ensure policymakers are making decisions based on the best available information.

    The researchers hope others build on their analysis, perhaps by studying additional improvements to climate emulation methods and benchmarks. Such research could explore impact-oriented metrics like drought indicators and wildfire risks, or new variables like regional wind speeds.

    This research is funded, in part, by Schmidt Sciences, LLC, and is part of the MIT Climate Grand Challenges team for “Bringing Computation to the Climate Challenge.”
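    Linear pattern scaling is simple enough to sketch in a few lines: each grid cell’s local change is regressed on the global-mean temperature change, and the fitted per-cell slopes are then reused to emulate new scenarios. The sketch below assumes annual-mean fields are already loaded as NumPy arrays; the array shapes, names, and synthetic data are illustrative and not the benchmark dataset’s actual interface.

    ```python
    import numpy as np

    # Minimal linear-pattern-scaling (LPS) sketch. Assumed shapes:
    #   global_t:  (n_years,)              global-mean temperature anomaly
    #   local_var: (n_years, n_lat, n_lon) local anomaly (e.g., surface temperature)

    def fit_lps(global_t, local_var):
        """Least-squares slope and intercept per grid cell."""
        n_years = global_t.shape[0]
        X = np.column_stack([global_t, np.ones(n_years)])   # (n_years, 2)
        Y = local_var.reshape(n_years, -1)                   # (n_years, n_cells)
        coeffs, *_ = np.linalg.lstsq(X, Y, rcond=None)       # (2, n_cells)
        slope, intercept = coeffs
        return slope.reshape(local_var.shape[1:]), intercept.reshape(local_var.shape[1:])

    def emulate(global_t_new, slope, intercept):
        """Project local change for a new global-mean temperature trajectory."""
        return global_t_new[:, None, None] * slope + intercept

    # Usage with made-up data: 100 years of training, 50 years to emulate.
    rng = np.random.default_rng(0)
    global_t = np.linspace(0.0, 2.0, 100)
    local_t = 1.2 * global_t[:, None, None] + rng.normal(0, 0.3, (100, 12, 24))
    slope, intercept = fit_lps(global_t, local_t)
    prediction = emulate(np.linspace(2.0, 3.0, 50), slope, intercept)
    print(prediction.shape)  # (50, 12, 24)
    ```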

  • Study links rising temperatures and declining moods

    Rising global temperatures affect human activity in many ways. Now, a new study illuminates an important dimension of the problem: Very hot days are associated with more negative moods, as shown by a large-scale look at social media postings.

    Overall, the study examines 1.2 billion social media posts from 157 countries over the span of a year. The research finds that when the temperature rises above 95 degrees Fahrenheit, or 35 degrees Celsius, expressed sentiments become about 25 percent more negative in lower-income countries and about 8 percent more negative in better-off countries. Extreme heat affects people emotionally, not just physically.

    “Our study reveals that rising temperatures don’t just threaten physical health or economic productivity — they also affect how people feel, every day, all over the world,” says Siqi Zheng, a professor in MIT’s Department of Urban Studies and Planning (DUSP) and Center for Real Estate (CRE), and co-author of a new paper detailing the results. “This work opens up a new frontier in understanding how climate stress is shaping human well-being at a planetary scale.”

    The paper, “Unequal Impacts of Rising Temperatures on Global Human Sentiment,” is published today in the journal One Earth. The authors are Jianghao Wang, of the Chinese Academy of Sciences; Nicolas Guetta-Jeanrenaud SM ’22, a graduate of MIT’s Technology and Policy Program (TPP) and Institute for Data, Systems, and Society; Juan Palacios, a visiting assistant professor at MIT’s Sustainable Urbanization Lab (SUL) and an assistant professor at Maastricht University; Yichun Fan, of SUL and Duke University; Devika Kakkar, of Harvard University; Nick Obradovich, of SUL and the Laureate Institute for Brain Research in Tulsa; and Zheng, who is the STL Champion Professor of Urban and Real Estate Sustainability at CRE and DUSP. Zheng is also the faculty director of CRE and founded the Sustainable Urbanization Lab in 2019.

    Social media as a window

    To conduct the study, the researchers evaluated 1.2 billion posts from the social media platforms Twitter and Weibo, all of which appeared in 2019. They used a natural language processing technique called Bidirectional Encoder Representations from Transformers (BERT) to analyze 65 languages across the 157 countries in the study.

    Each social media post was given a sentiment rating from 0.0 (for very negative posts) to 1.0 (for very positive posts). The posts were then aggregated geographically to 2,988 locations and evaluated in correlation with area weather. From this method, the researchers could then deduce the connection between extreme temperatures and expressed sentiment.

    “Social media data provides us with an unprecedented window into human emotions across cultures and continents,” Wang says. “This approach allows us to measure emotional impacts of climate change at a scale that traditional surveys simply cannot achieve, giving us real-time insights into how temperature affects human sentiment worldwide.”

    To assess the effects of temperatures on sentiment in higher-income and middle-to-lower-income settings, the scholars also used a World Bank cutoff of $13,845 in per-capita annual gross national income, finding that in places with incomes below that level, the effects of heat on mood were triple those found in economically more robust settings.

    “Thanks to the global coverage of our data, we find that people in low- and middle-income countries experience sentiment declines from extreme heat that are three times greater than those in high-income countries,” Fan says. “This underscores the importance of incorporating adaptation into future climate impact projections.”

    In the long run

    Using long-term global climate models, and expecting some adaptation to heat, the researchers also produced a long-range estimate of the effects of extreme temperatures on sentiment by the year 2100. Extending the current findings to that time frame, they project a 2.3 percent worsening of people’s emotional well-being based on high temperatures alone by then — although that is a far-range projection.

    “It’s clear now, with our present study adding to findings from prior studies, that weather alters sentiment on a global scale,” Obradovich says. “And as weather and climates change, helping individuals become more resilient to shocks to their emotional states will be an important component of overall societal adaptation.”

    The researchers note that there are many nuances to the subject, and room for continued research in this area. For one thing, social media users are not likely to be a perfectly representative portion of the population, with young children and the elderly almost certainly using social media less than other people. However, as the researchers observe in the paper, the very young and elderly are probably particularly vulnerable to heat shocks, making the response to hot weather possibly even larger than their study can capture.

    The research is part of the Global Sentiment project led by the MIT Sustainable Urbanization Lab, and the study’s dataset is publicly available. Zheng and other co-authors have previously investigated these dynamics using social media, although never before at this scale.

    “We hope this resource helps researchers, policymakers, and communities better prepare for a warming world,” Zheng says.

    The research was supported, in part, by Zheng’s chaired professorship research fund, and grants Wang received from the National Natural Science Foundation of China and the Chinese Academy of Sciences.
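    A minimal sketch of the aggregation step described above, in which per-post sentiment scores are averaged to location-days and then compared across a 35-degree-Celsius threshold. The DataFrame layout and column names are assumptions made for illustration; the study itself relies on a multilingual BERT scorer and a far more careful statistical design.

    ```python
    import pandas as pd

    # Hypothetical post-level table; in the study, sentiment scores (0.0-1.0) come
    # from a multilingual BERT model and weather is matched to ~3,000 locations.
    posts = pd.DataFrame({
        "location": ["A", "A", "B", "B", "B"],
        "date": pd.to_datetime(["2019-07-01", "2019-07-01", "2019-07-01",
                                "2019-07-02", "2019-07-02"]),
        "sentiment": [0.62, 0.55, 0.48, 0.71, 0.66],   # per-post score in [0, 1]
        "tmax_c": [36.1, 36.1, 28.0, 33.5, 33.5],      # daily max temperature (deg C)
    })

    # Aggregate posts to location-days.
    daily = (posts.groupby(["location", "date"])
                  .agg(sentiment=("sentiment", "mean"), tmax_c=("tmax_c", "first"))
                  .reset_index())

    # Compare mean sentiment on very hot days (>35 C) against all other days.
    hot = daily["tmax_c"] > 35.0
    baseline = daily.loc[~hot, "sentiment"].mean()
    change = (daily.loc[hot, "sentiment"].mean() - baseline) / baseline
    print(f"Relative sentiment change on >35 C days: {change:+.1%}")
    ```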

  • Eco-driving measures could significantly reduce vehicle emissions

    Any motorist who has ever waited through multiple cycles for a traffic light to turn green knows how annoying signalized intersections can be. But sitting at intersections isn’t just a drag on drivers’ patience — unproductive vehicle idling could contribute as much as 15 percent of the carbon dioxide emissions from U.S. land transportation.

    A large-scale modeling study led by MIT researchers reveals that eco-driving measures, which can involve dynamically adjusting vehicle speeds to reduce stopping and excessive acceleration, could significantly reduce those CO2 emissions.

    Using a powerful artificial intelligence method called deep reinforcement learning, the researchers conducted an in-depth impact assessment of the factors affecting vehicle emissions in three major U.S. cities.

    Their analysis indicates that fully adopting eco-driving measures could cut annual city-wide intersection carbon emissions by 11 to 22 percent, without slowing traffic throughput or affecting vehicle and traffic safety.

    Even if only 10 percent of vehicles on the road employ eco-driving, it would result in 25 to 50 percent of the total reduction in CO2 emissions, the researchers found.

    In addition, dynamically optimizing speed limits at about 20 percent of intersections provides 70 percent of the total emission benefits. This indicates that eco-driving measures could be implemented gradually while still having measurable, positive impacts on mitigating climate change and improving public health.

    [Animation: a comparison of 20 percent versus 100 percent eco-driving adoption. Image: Courtesy of the researchers]

    “Vehicle-based control strategies like eco-driving can move the needle on climate change reduction. We’ve shown here that modern machine-learning tools, like deep reinforcement learning, can accelerate the kinds of analysis that support sociotechnical decision making. This is just the tip of the iceberg,” says senior author Cathy Wu, the Class of 1954 Career Development Associate Professor in Civil and Environmental Engineering (CEE) and the Institute for Data, Systems, and Society (IDSS) at MIT, and a member of the Laboratory for Information and Decision Systems (LIDS).

    She is joined on the paper by lead author Vindula Jayawardana, an MIT graduate student; as well as MIT graduate students Ao Qu, Cameron Hickert, and Edgar Sanchez; MIT undergraduate Catherine Tang; Baptiste Freydt, a graduate student at ETH Zurich; and Mark Taylor and Blaine Leonard of the Utah Department of Transportation. The research appears in Transportation Research Part C: Emerging Technologies.

    A multi-part modeling study

    Traffic control measures typically call to mind fixed infrastructure, like stop signs and traffic signals. But as vehicles become more technologically advanced, they present an opportunity for eco-driving, which is a catch-all term for vehicle-based traffic control measures like the use of dynamic speeds to reduce energy consumption.

    In the near term, eco-driving could involve speed guidance in the form of vehicle dashboards or smartphone apps. In the longer term, eco-driving could involve intelligent speed commands that directly control the acceleration of semi-autonomous and fully autonomous vehicles through vehicle-to-infrastructure communication systems.

    “Most prior work has focused on how to implement eco-driving. We shifted the frame to consider the question of should we implement eco-driving. If we were to deploy this technology at scale, would it make a difference?” Wu says.

    To answer that question, the researchers embarked on a multifaceted modeling study that would take the better part of four years to complete.

    They began by identifying 33 factors that influence vehicle emissions, including temperature, road grade, intersection topology, age of the vehicle, traffic demand, vehicle types, driver behavior, traffic signal timing, and road geometry.

    “One of the biggest challenges was making sure we were diligent and didn’t leave out any major factors,” Wu says.

    Then they used data from OpenStreetMap, U.S. geological surveys, and other sources to create digital replicas of more than 6,000 signalized intersections in three cities — Atlanta, San Francisco, and Los Angeles — and simulated more than a million traffic scenarios.

    The researchers used deep reinforcement learning to optimize each scenario for eco-driving to achieve the maximum emissions benefits. Reinforcement learning optimizes the vehicles’ driving behavior through trial-and-error interactions with a high-fidelity traffic simulator, rewarding vehicle behaviors that are more energy-efficient while penalizing those that are not.

    The researchers cast the problem as a decentralized cooperative multi-agent control problem, where the vehicles cooperate to achieve overall energy efficiency, even among non-participating vehicles, and they act in a decentralized manner, avoiding the need for costly communication between vehicles.

    However, training vehicle behaviors that generalize across diverse intersection traffic scenarios was a major challenge. The researchers observed that some scenarios are more similar to one another than others, such as scenarios with the same number of lanes or the same number of traffic signal phases. As such, the researchers trained separate reinforcement learning models for different clusters of traffic scenarios, yielding better emission benefits overall.

    But even with the help of AI, analyzing citywide traffic at the network level would be so computationally intensive it could take another decade to unravel, Wu says. Instead, they broke the problem down and solved each eco-driving scenario at the individual intersection level.

    “We carefully constrained the impact of eco-driving control at each intersection on neighboring intersections. In this way, we dramatically simplified the problem, which enabled us to perform this analysis at scale, without introducing unknown network effects,” she says.

    Significant emissions benefits

    When they analyzed the results, the researchers found that full adoption of eco-driving could result in intersection emissions reductions of between 11 and 22 percent.

    These benefits differ depending on the layout of a city’s streets. A denser city like San Francisco has less room to implement eco-driving between intersections, offering a possible explanation for reduced emission savings, while Atlanta could see greater benefits given its higher speed limits.

    Even if only 10 percent of vehicles employ eco-driving, a city could still realize 25 to 50 percent of the total emissions benefit because of car-following dynamics: Non-eco-driving vehicles would follow controlled eco-driving vehicles as they optimize speed to pass smoothly through intersections, reducing their carbon emissions as well.

    In some cases, eco-driving could also increase vehicle throughput while minimizing emissions. However, Wu cautions that increasing throughput could result in more drivers taking to the roads, reducing emissions benefits.

    And while their analysis of widely used safety metrics known as surrogate safety measures, such as time to collision, suggests that eco-driving is as safe as human driving, it could cause unexpected behavior in human drivers. More research is needed to fully understand potential safety impacts, Wu says.

    Their results also show that eco-driving could provide even greater benefits when combined with alternative transportation decarbonization solutions. For instance, 20 percent eco-driving adoption in San Francisco would cut emission levels by 7 percent, but when combined with the projected adoption of hybrid and electric vehicles, it would cut emissions by 17 percent.

    “This is a first attempt to systematically quantify network-wide environmental benefits of eco-driving. This is a great research effort that will serve as a key reference for others to build on in the assessment of eco-driving systems,” says Hesham Rakha, the Samuel L. Pritchard Professor of Engineering at Virginia Tech, who was not involved with this research.

    And while the researchers focus on carbon emissions, the benefits are highly correlated with improvements in fuel consumption, energy use, and air quality.

    “This is almost a free intervention. We already have smartphones in our cars, and we are rapidly adopting cars with more advanced automation features. For something to scale quickly in practice, it must be relatively simple to implement and shovel-ready. Eco-driving fits that bill,” Wu says.

    This work is funded, in part, by Amazon and the Utah Department of Transportation.
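    As a rough illustration of the reward shaping described above, which rewards energy-efficient motion and penalizes behavior that is not, here is a minimal per-vehicle, per-timestep reward of the kind a reinforcement-learning agent might optimize inside a traffic simulator. The weights and the crude fuel proxy are illustrative assumptions, not the study’s actual formulation.

    ```python
    # Illustrative sketch of an eco-driving reward term, not the paper's formulation.
    # Penalize a simple fuel/emissions proxy and idling at a signalized intersection,
    # and reward forward progress so vehicles are not encouraged to simply stop.

    def eco_driving_reward(speed_mps, accel_mps2, is_stopped,
                           w_fuel=1.0, w_idle=0.5, w_progress=0.1):
        # Crude fuel/CO2 proxy: hard accelerations dominate consumption.
        fuel_proxy = 0.05 * speed_mps + 0.5 * max(accel_mps2, 0.0) ** 2
        idle_penalty = 1.0 if is_stopped else 0.0
        progress = speed_mps  # reward for moving through the intersection
        return -w_fuel * fuel_proxy - w_idle * idle_penalty + w_progress * progress

    # Example: a vehicle gliding at a steady 8 m/s scores better than one that
    # idles at the stop line or accelerates hard when the light turns green.
    print(eco_driving_reward(8.0, 0.0, False))   # smooth approach
    print(eco_driving_reward(0.0, 0.0, True))    # idling at a red light
    print(eco_driving_reward(3.0, 2.5, False))   # hard acceleration from a stop
    ```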