More stories

  • Reducing carbon emissions from long-haul trucks

    People around the world rely on trucks to deliver the goods they need, and so-called long-haul trucks play a critical role in those supply chains. In the United States, long-haul trucks moved 71 percent of all freight in 2022. But those long-haul trucks are heavy polluters, especially of the carbon emissions that threaten the global climate. According to U.S. Environmental Protection Agency estimates, in 2022 more than 3 percent of all carbon dioxide (CO2) emissions came from long-haul trucks.

    The problem is that long-haul trucks run almost exclusively on diesel fuel, and burning diesel releases high levels of CO2 and other carbon emissions. Global demand for freight transport is projected to as much as double by 2050, so it’s critical to find another source of energy that will meet the needs of long-haul trucks while also reducing their carbon emissions. And conversion to the new fuel must not be costly. “Trucks are an indispensable part of the modern supply chain, and any increase in the cost of trucking will be felt universally,” notes William H. Green, the Hoyt Hottel Professor in Chemical Engineering and director of the MIT Energy Initiative.

    For the past year, Green and his research team have been seeking a low-cost, cleaner alternative to diesel. Finding a replacement is difficult because diesel meets the needs of the trucking industry so well. For one thing, diesel has a high energy density — that is, energy content per pound of fuel. There’s a legal limit on the total weight of a truck and its contents, so using an energy source with a lower weight allows the truck to carry more payload — an important consideration, given the low profit margin of the freight industry. In addition, diesel fuel is readily available at retail refueling stations across the country — a critical resource for drivers, who may travel 600 miles in a day and sleep in their truck rather than returning to their home depot. Finally, diesel fuel is a liquid, so it’s easy to distribute to refueling stations and then pump into trucks.

    Past studies have examined numerous alternative technology options for powering long-haul trucks, but no clear winner has emerged. Now, Green and his team have evaluated the available options based on consistent and realistic assumptions about the technologies involved and the typical operation of a long-haul truck, and assuming no subsidies to tip the cost balance. Their in-depth analysis of converting long-haul trucks to battery electric — summarized below — found a high cost and negligible emissions gains in the near term. Studies of methanol and other liquid fuels from biomass are ongoing, but already a major concern is whether the world can plant and harvest enough biomass for biofuels without destroying the ecosystem. An analysis of hydrogen — also summarized below — highlights specific challenges with using that clean-burning fuel, which is a gas at normal temperatures.

    Finally, the team identified an approach that could make hydrogen a promising, low-cost option for long-haul trucks.
    And, says Green, “it’s an option that most people are probably unaware of.” It involves a novel way of using materials that can pick up hydrogen, store it, and then release it when and where it’s needed to serve as a clean-burning fuel.

    Defining the challenge: A realistic drive cycle, plus diesel values to beat

    The MIT researchers believe that the lack of consensus on the best way to clean up long-haul trucking may have a simple explanation: Different analyses are based on different assumptions about the driving behavior of long-haul trucks. Indeed, some of them don’t accurately represent actual long-haul operations. So the first task for the MIT team was to define a representative — and realistic — “drive cycle” for actual long-haul truck operations in the United States. Then the MIT researchers — and researchers elsewhere — can assess potential replacement fuels and engines based on a consistent set of assumptions in modeling and simulation analyses.

    To define the drive cycle for long-haul operations, the MIT team used a systematic approach to analyze many hours of real-world driving data covering 58,000 miles. They examined 10 features and identified three — daily range, vehicle speed, and road grade — that have the greatest impact on energy demand and thus on fuel consumption and carbon emissions. The representative drive cycle that emerged covers a distance of 600 miles, an average vehicle speed of 55 miles per hour, and a road grade ranging from negative 6 percent to positive 6 percent.

    The next step was to generate key values for the performance of the conventional diesel “powertrain,” that is, all the components involved in creating power in the engine and delivering it to the wheels on the ground. Based on their defined drive cycle, the researchers simulated the performance of a conventional diesel truck, generating “benchmarks” for fuel consumption, CO2 emissions, cost, and other performance parameters.

    Now they could perform parallel simulations — based on the same drive-cycle assumptions — of possible replacement fuels and powertrains to see how the cost, carbon emissions, and other performance parameters would compare to the diesel benchmarks.
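    To make the benchmarking step concrete, here is a minimal sketch of how a drive-cycle energy estimate of this kind can be set up. The cycle parameters (600 miles, 55 mph, grades from negative 6 to positive 6 percent) come from the article; everything else (the grade time fractions, vehicle mass, drag and rolling-resistance coefficients, and engine efficiency) is an illustrative assumption, not the MIT team's model.

    ```python
    # Minimal drive-cycle energy sketch. Cycle parameters per the article;
    # all other numbers are illustrative assumptions.
    import math

    G = 9.81     # gravity, m/s^2
    RHO = 1.2    # air density, kg/m^3

    def road_load_power(mass_kg, speed_mps, grade, crr=0.006, cd_a=5.5):
        """Tractive power (W) against rolling resistance, drag, and grade."""
        theta = math.atan(grade)
        rolling = mass_kg * G * crr * math.cos(theta)
        aero = 0.5 * RHO * cd_a * speed_mps ** 2
        climb = mass_kg * G * math.sin(theta)
        return (rolling + aero + climb) * speed_mps

    # Representative cycle: 600 miles at 55 mph, grades from -6% to +6%.
    miles, mph = 600.0, 55.0
    speed = mph * 0.44704                 # m/s
    hours = miles / mph
    segments = [(-0.06, 0.1), (0.0, 0.8), (0.06, 0.1)]  # (grade, time share)

    mass = 36_000                         # loaded truck, kg (~80,000 lb)
    energy_j = 0.0
    for grade, frac in segments:
        p = max(road_load_power(mass, speed, grade), 0.0)  # no regen credit
        energy_j += p * frac * hours * 3600

    wheel_kwh = energy_j / 3.6e6
    diesel_kwh = wheel_kwh / 0.42         # assume ~42% engine efficiency
    print(f"Energy at the wheels: {wheel_kwh:.0f} kWh")
    print(f"Diesel fuel energy needed: {diesel_kwh:.0f} kWh")
    ```

    With these placeholder numbers the cycle works out to roughly 1,500 kWh at the wheels, consistent in rough magnitude with the 2-megawatt-hour battery discussed below once drivetrain losses are included.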
    The battery electric option

    When considering how to decarbonize long-haul trucks, a natural first thought is battery power. After all, battery electric cars and pickup trucks are proving highly successful. Why not switch to battery electric long-haul trucks? “Again, the literature is very divided, with some studies saying that this is the best idea ever, and other studies saying that this makes no sense,” says Sayandeep Biswas, a graduate student in chemical engineering.

    To assess the battery electric option, the MIT researchers used a physics-based vehicle model plus well-documented estimates for the efficiencies of key components such as the battery pack, generators, motor, and so on. Assuming the previously described drive cycle, they determined operating parameters, including how much power the battery-electric system needs. From there they could calculate the size and weight of the battery required to satisfy the power needs of the battery electric truck.

    The outcome was disheartening. Providing enough energy to travel 600 miles without recharging would require a 2 megawatt-hour battery. “That’s a lot,” notes Kariana Moreno Sader, a graduate student in chemical engineering. “It’s the same as what two U.S. households consume per month on average.” And the weight of such a battery would significantly reduce the amount of payload that could be carried. An empty diesel truck typically weighs 20,000 pounds. With a legal limit of 80,000 pounds, there’s room for 60,000 pounds of payload. The 2 MWh battery would weigh roughly 27,000 pounds — significantly reducing the allowable capacity for carrying payload.

    Accounting for that “payload penalty,” the researchers calculated that roughly four electric trucks would be required to replace every three of today’s diesel-powered trucks. Furthermore, each added truck would require an additional driver. The impact on operating expenses would be significant.

    Analyzing the emissions reductions that might result from shifting to battery electric long-haul trucks also brought disappointing results. One might assume that using electricity would eliminate CO2 emissions. But when the researchers included emissions associated with making that electricity, that wasn’t true.

    “Battery electric trucks are only as clean as the electricity used to charge them,” notes Moreno Sader. Most of the time, drivers of long-haul trucks will be charging from national grids rather than dedicated renewable energy plants. According to Energy Information Administration statistics, fossil fuels make up more than 60 percent of the current U.S. power grid, so electric trucks would still be responsible for significant levels of carbon emissions. Manufacturing batteries for the trucks would generate additional CO2 emissions.

    Building the charging infrastructure would require massive upfront capital investment, as would upgrading the existing grid to reliably meet additional energy demand from the long-haul sector. Accomplishing those changes would be costly and time-consuming, which raises further concern about electrification as a means of decarbonizing long-haul freight.

    In short, switching today’s long-haul diesel trucks to battery electric power would bring major increases in costs for the freight industry and negligible carbon emissions benefits in the near term. Analyses assuming various types of batteries as well as other drive cycles produced comparable results.

    However, the researchers are optimistic about where the grid is going in the future. “In the long term, say by around 2050, emissions from the grid are projected to be less than half what they are now,” says Moreno Sader. “When we do our calculations based on that prediction, we find that emissions from battery electric trucks would be around 40 percent lower than our calculated emissions based on today’s grid.”

    For Moreno Sader, the goal of the MIT research is to help “guide the sector on what would be the best option.” With that goal in mind, she and her colleagues are now examining the battery electric option under different scenarios — for example, assuming battery swapping (a depleted battery isn’t recharged but replaced by a fully charged one), short-haul trucking, and other applications that might produce a more cost-competitive outcome, even for the near term.
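    The payload-penalty arithmetic above can be reproduced in a few lines. The battery size, truck weights, and legal limit are the article's figures; the pack energy density and the weight credit for the removed diesel powertrain are assumptions chosen to be consistent with them.

    ```python
    # Back-of-envelope payload penalty for a battery electric long-haul truck.
    # From the article: 2 MWh pack, 80,000 lb limit, 20,000 lb empty diesel
    # truck, ~27,000 lb battery. Assumed: pack energy density and the weight
    # removed along with the diesel engine, tank, and transmission.
    LB_PER_KG = 2.20462

    battery_kwh = 2_000
    pack_wh_per_kg = 165                  # assumption; implies ~27,000 lb below
    battery_lb = battery_kwh * 1000 / pack_wh_per_kg * LB_PER_KG

    legal_limit_lb = 80_000
    diesel_empty_lb = 20_000
    diesel_payload_lb = legal_limit_lb - diesel_empty_lb       # 60,000 lb

    powertrain_credit_lb = 12_000         # assumption: diesel parts removed
    ev_empty_lb = diesel_empty_lb - powertrain_credit_lb + battery_lb
    ev_payload_lb = legal_limit_lb - ev_empty_lb

    print(f"battery weight: {battery_lb:,.0f} lb")             # ~26,700 lb
    print(f"EV payload: {ev_payload_lb:,.0f} lb vs. {diesel_payload_lb:,} lb")
    print(f"trucks per diesel truck: {diesel_payload_lb / ev_payload_lb:.2f}")
    ```

    With those assumptions the ratio lands near 4/3, matching the "roughly four electric trucks for every three diesel trucks" conclusion.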
    A promising option: hydrogen

    As the world looks to move away from fossil fuels for all uses, much attention is focusing on hydrogen. Could hydrogen be a good alternative for today’s diesel-burning long-haul trucks?

    To find out, the MIT team performed a detailed analysis of the hydrogen option. “We thought that hydrogen would solve a lot of the problems we had with battery electric,” says Biswas. It doesn’t have associated CO2 emissions. Its energy density is far higher, so it doesn’t create the weight problem posed by heavy batteries. In addition, existing compression technology can get enough hydrogen fuel into a regular-sized tank to cover the needed range. “You can actually give drivers the range they want,” he says. “There’s no issue with ‘range anxiety.’”

    But while using hydrogen for long-haul trucking would reduce carbon emissions, it would cost far more than diesel. Based on their detailed analysis of hydrogen, the researchers concluded that the main source of added cost is transporting it. Hydrogen can be made in a chemical facility, but then it needs to be distributed to refueling stations across the country. Conventionally, there have been two main ways of transporting hydrogen: as a compressed gas and as a cryogenic liquid. As Biswas notes, the former is “super high pressure,” and the latter is “super cold.” The researchers’ calculations show that as much as 80 percent of the cost of delivered hydrogen is due to transportation and refueling, plus there’s the need to build dedicated refueling stations that can meet new environmental and safety standards for handling hydrogen as a compressed gas or a cryogenic liquid.

    Having dismissed the conventional options for shipping hydrogen, they turned to a less-common approach: transporting hydrogen using “liquid organic hydrogen carriers” (LOHCs), special organic (carbon-containing) chemical compounds that can under certain conditions absorb hydrogen atoms and under other conditions release them.

    LOHCs are in use today to deliver small amounts of hydrogen for commercial use. Here’s how the process works: In a chemical plant, the carrier compound is brought into contact with hydrogen in the presence of a catalyst under elevated temperature and pressure, and the compound picks up the hydrogen. The “hydrogen-loaded” compound — still a liquid — is then transported under atmospheric conditions. When the hydrogen is needed, the compound is again exposed to a temperature increase and a different catalyst, and the hydrogen is released.

    LOHCs thus appear to be ideal hydrogen carriers for long-haul trucking. They’re liquid, so they can easily be delivered to existing refueling stations, where the hydrogen would be released; and they contain at least as much energy per gallon as hydrogen in a cryogenic liquid or compressed gas form. However, a detailed analysis of using hydrogen carriers showed that the approach would decrease emissions but at a considerable cost.

    The problem begins with the “dehydrogenation” step at the retail station. Releasing the hydrogen from the chemical carrier requires heat, which is generated by burning some of the hydrogen being carried by the LOHC. The researchers calculate that getting the needed heat takes 36 percent of that hydrogen. (In theory, the process would take only 27 percent — but in reality, that efficiency won’t be achieved.) So out of every 100 units of starting hydrogen, 36 units are now gone.

    But that’s not all. The hydrogen that comes out is at near-ambient pressure. So the facility dispensing the hydrogen will need to compress it — a process that the team calculates will use up 20-30 percent of the starting hydrogen.

    Because of the needed heat and compression, there’s now less than half of the starting hydrogen left to be delivered to the truck — and as a result, the hydrogen fuel becomes twice as expensive. The bottom line is that the technology works, but “when it comes to really beating diesel, the economics don’t work. It’s quite a bit more expensive,” says Biswas. In addition, the refueling stations would require expensive compressors and auxiliary units such as cooling systems. The capital investment and the operating and maintenance costs together imply that the market penetration of hydrogen refueling stations will be slow.
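    The hydrogen balance in the preceding paragraphs is simple enough to check directly. The 36 percent heat figure and the 20-30 percent compression range are the article's; the cost multiplier below tracks only the lost hydrogen, so it understates the additional station hardware costs.

    ```python
    # Station-side hydrogen balance for conventional LOHC delivery, using the
    # loss figures quoted above (both are fractions of the starting hydrogen).
    start = 100.0                       # units of hydrogen loaded onto the LOHC
    heat_fraction = 0.36                # burned to heat the dehydrogenation step
    for comp_fraction in (0.20, 0.30):  # range quoted for recompression
        delivered = start * (1 - heat_fraction - comp_fraction)
        print(f"compression at {comp_fraction:.0%}: {delivered:.0f} of 100 "
              f"units delivered, ~{start / delivered:.1f}x cost per unit")
    ```

    Either way, less than half of the hydrogen survives to the truck, which is why the delivered fuel roughly doubles in price even before the compressors and cooling units are counted.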
    A better strategy: onboard release of hydrogen from LOHCs

    Given the potential benefits of using LOHCs, the researchers focused on how to deal with both the heat needed to release the hydrogen and the energy needed to compress it. “That’s when we had the idea,” says Biswas. “Instead of doing the dehydrogenation [hydrogen release] at the refueling station and then loading the truck with hydrogen, why don’t we just take the LOHC and load that onto the truck?” Like diesel, LOHC is a liquid, so it’s easily transported and pumped into trucks at existing refueling stations. “We’ll then make hydrogen as it’s needed based on the power demands of the truck — and we can capture waste heat from the engine exhaust and use it to power the dehydrogenation process,” says Biswas.

    In their proposed plan, hydrogen-loaded LOHC is created at a chemical “hydrogenation” plant and then delivered to a retail refueling station, where it’s pumped into a long-haul truck. Onboard the truck, the loaded LOHC pours into the fuel-storage tank. From there it moves to the “dehydrogenation unit” — the reactor where heat and a catalyst together promote chemical reactions that separate the hydrogen from the LOHC. The hydrogen is sent to the powertrain, where it burns, producing energy that propels the truck forward.

    Hot exhaust from the powertrain goes to a “heat-integration unit,” where its waste heat energy is captured and returned to the reactor to help encourage the reaction that releases hydrogen from the loaded LOHC. The unloaded LOHC is pumped back into the fuel-storage tank, where it’s kept in a separate compartment to keep it from mixing with the loaded LOHC. From there, it’s pumped back into the retail refueling station and then transported back to the hydrogenation plant to be loaded with more hydrogen.

    Switching to onboard dehydrogenation brings down costs by eliminating the need for extra hydrogen compression and by using waste heat in the engine exhaust to drive the hydrogen-release process. So how does their proposed strategy look compared to diesel? Based on a detailed analysis, the researchers determined that using their strategy would be 18 percent more expensive than using diesel, and emissions would drop by 71 percent.

    But those results need some clarification. The 18 percent cost premium of using LOHC with onboard hydrogen release is based on the price of diesel fuel in 2020. In spring of 2023 the price was about 30 percent higher. Assuming the 2023 diesel price, the LOHC option is actually cheaper than using diesel.

    Both the cost and emissions outcomes are affected by another assumption: the use of “blue hydrogen,” which is hydrogen produced from natural gas with carbon capture and storage. Another option is to assume the use of “green hydrogen,” which is hydrogen produced using electricity generated from renewable sources, such as wind and solar. Green hydrogen is much more expensive than blue hydrogen, so the costs would increase dramatically.

    If in the future the price of green hydrogen drops, the researchers’ proposed plan would shift to green hydrogen — and then the decline in emissions would no longer be 71 percent but rather close to 100 percent. There would be almost no emissions associated with the researchers’ proposed plan for using LOHCs with onboard hydrogen release.
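    Why the 18 percent premium flips sign at 2023 prices is a one-line comparison; the normalized prices below are purely illustrative.

    ```python
    # Cost comparison on a normalized 2020 diesel price, per the figures above.
    diesel_2020 = 1.00
    lohc_onboard = 1.18 * diesel_2020   # 18% premium at 2020 prices
    diesel_2023 = 1.30 * diesel_2020    # diesel ~30% higher in spring 2023
    print(f"LOHC {lohc_onboard:.2f} vs. 2023 diesel {diesel_2023:.2f}: "
          f"{'LOHC cheaper' if lohc_onboard < diesel_2023 else 'diesel cheaper'}")
    ```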
    Comparing the options on cost and emissions

    To compare the options, Moreno Sader prepared bar charts showing the per-mile cost of shipping by truck in the United States and the CO2 emissions that result using each of the fuels and approaches discussed above: diesel fuel, battery electric, hydrogen as a cryogenic liquid or compressed gas, and LOHC with onboard hydrogen release. The LOHC strategy with onboard dehydrogenation looked promising on both the cost and the emissions charts. In addition to such quantitative measures, the researchers believe that their strategy addresses two other, less-obvious challenges in finding a less-polluting fuel for long-haul trucks.

    First, the introduction of the new fuel and trucks to use it must not disrupt the current freight-delivery setup. “You have to keep the old trucks running while you’re introducing the new ones,” notes Green. “You cannot have even a day when the trucks aren’t running because it’d be like the end of the economy. Your supermarket shelves would all be empty; your factories wouldn’t be able to run.” The researchers’ plan would be completely compatible with the existing diesel supply infrastructure and would require relatively minor retrofits to today’s long-haul trucks, so the current supply chains would continue to operate while the new fuel and retrofitted trucks are introduced.

    Second, the strategy has the potential to be adopted globally. Long-haul trucking is important in other parts of the world, and Moreno Sader thinks that “making this approach a reality is going to have a lot of impact, not only in the United States but also in other countries,” including her own country of origin, Colombia. “This is something I think about all the time.” The approach is compatible with the current diesel infrastructure, so the only requirement for adoption is to build the chemical hydrogenation plant. “And I think the capital expenditure related to that will be less than the cost of building a new fuel-supply infrastructure throughout the country,” says Moreno Sader.

    Testing in the lab

    “We’ve done a lot of simulations and calculations to show that this is a great idea,” notes Biswas. “But there’s only so far that math can go to convince people.” The next step is to demonstrate their concept in the lab.

    To that end, the researchers are now assembling all the core components of the onboard hydrogen-release reactor as well as the heat-integration unit that’s key to transferring heat from the engine exhaust to the hydrogen-release reactor. They estimate that this spring they’ll be ready to demonstrate their ability to release hydrogen and confirm the rate at which it’s formed. And — guided by their modeling work — they’ll be able to fine-tune critical components for maximum efficiency and best performance.

    The next step will be to add an appropriate engine, specially equipped with sensors to provide the critical readings they need to optimize the performance of all their core components together.
    By the end of 2024, the researchers hope to achieve their goal: the first experimental demonstration of a power-dense, robust onboard hydrogen-release system with highly efficient heat integration.

    In the meantime, they believe that results from their work to date should help spread the word, bringing their novel approach to the attention of other researchers and experts in the trucking industry who are now searching for ways to decarbonize long-haul trucking.

    Financial support for development of the representative drive cycle and the diesel benchmarks as well as the analysis of the battery electric option was provided by the MIT Mobility Systems Center of the MIT Energy Initiative. Analysis of LOHC-powered trucks with onboard dehydrogenation was supported by the MIT Climate and Sustainability Consortium. Sayandeep Biswas is supported by a fellowship from the Martin Family Society of Fellows for Sustainability, and Kariana Moreno Sader received fellowship funding from MathWorks through the MIT School of Science.

  • Microscopic defects in ice influence how massive glaciers flow, study shows

    As they creep and calve into the sea, melting glaciers and ice sheets are raising global water levels at unprecedented rates. To predict and prepare for future sea-level rise, scientists need a better understanding of how fast glaciers melt and what influences their flow.

    Now, a study by MIT scientists offers a new picture of glacier flow, based on microscopic deformation in the ice. The results show that a glacier’s flow depends strongly on how microscopic defects move through the ice.

    The researchers found they could estimate a glacier’s flow based on whether the ice is prone to microscopic defects of one kind versus another. They used this relationship between micro- and macro-scale deformation to develop a new model for how glaciers flow. With the new model, they mapped the flow of ice in locations across the Antarctic Ice Sheet.

    Contrary to conventional wisdom, they found, the ice sheet is not a monolith but instead is more varied in where and how it flows in response to warming-driven stresses. The study “dramatically alters the climate conditions under which marine ice sheets may become unstable and drive rapid rates of sea-level rise,” the researchers write in their paper.

    “This study really shows the effect of microscale processes on macroscale behavior,” says Meghana Ranganathan PhD ’22, who led the study as a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS) and is now a postdoc at Georgia Tech. “These mechanisms happen at the scale of water molecules and ultimately can affect the stability of the West Antarctic Ice Sheet.”

    “Broadly speaking, glaciers are accelerating, and there are a lot of variants around that,” adds co-author and EAPS Associate Professor Brent Minchew. “This is the first study that takes a step from the laboratory to the ice sheets and starts evaluating what the stability of ice is in the natural environment. That will ultimately feed into our understanding of the probability of catastrophic sea-level rise.”

    Ranganathan and Minchew’s study appears this week in the Proceedings of the National Academy of Sciences.

    Micro flow

    Glacier flow describes the movement of ice from the peak of a glacier, or the center of an ice sheet, down to the edges, where the ice then breaks off and melts into the ocean — a normally slow process that contributes over time to raising the world’s average sea level.

    In recent years, the oceans have risen at unprecedented rates, driven by global warming and the accelerated melting of glaciers and ice sheets. While the loss of polar ice is known to be a major contributor to sea-level rise, it is also the biggest uncertainty when it comes to making predictions.

    “Part of it’s a scaling problem,” Ranganathan explains. “A lot of the fundamental mechanisms that cause ice to flow happen at a really small scale that we can’t see. We wanted to pin down exactly what these microphysical processes are that govern ice flow, which hasn’t been represented in models of sea-level change.”

    The team’s new study builds on previous experiments from the early 2000s by geologists at the University of Minnesota, who studied how small chips of ice deform when physically stressed and compressed.
    Their work revealed two microscopic mechanisms by which ice can flow: “dislocation creep,” where molecule-sized cracks migrate through the ice, and “grain boundary sliding,” where individual ice crystals slide against each other, causing the boundary between them to move through the ice.

    The geologists found that ice’s sensitivity to stress, or how likely it is to flow, depends on which of the two mechanisms is dominant. Specifically, ice is more sensitive to stress when microscopic defects occur via dislocation creep rather than grain boundary sliding.

    Ranganathan and Minchew realized that those findings at the microscopic level could redefine how ice flows at much larger, glacial scales.

    “Current models for sea-level rise assume a single value for the sensitivity of ice to stress and hold this value constant across an entire ice sheet,” Ranganathan explains. “What these experiments showed was that actually, there’s quite a bit of variability in ice sensitivity, due to which of these mechanisms is at play.”

    A mapping match

    For their new study, the MIT team took insights from the previous experiments and developed a model to estimate an icy region’s sensitivity to stress, which directly relates to how likely that ice is to flow. The model takes in information such as the ambient temperature, the average size of ice crystals, and the estimated mass of ice in the region, and calculates how much the ice is deforming by dislocation creep versus grain boundary sliding. Depending on which of the two mechanisms is dominant, the model then estimates the region’s sensitivity to stress.

    The scientists fed into the model actual observations from various locations across the Antarctic Ice Sheet, where others had previously recorded data such as the local height of ice, the size of ice crystals, and the ambient temperature. Based on the model’s estimates, the team generated a map of ice sensitivity to stress across the Antarctic Ice Sheet. When they compared this map to satellite and field measurements taken of the ice sheet over time, they observed a close match, suggesting that the model could be used to accurately predict how glaciers and ice sheets will flow in the future.

    “As climate change starts to thin glaciers, that could affect the sensitivity of ice to stress,” Ranganathan says. “The instabilities that we expect in Antarctica could be very different, and we can now capture those differences, using this model.”
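    The paper's model is not reproduced here, but the core idea, a composite flow law whose effective stress exponent depends on which mechanism dominates, can be sketched in a few lines. The stress exponents and grain-size dependence below follow published laboratory flow laws for ice (e.g., Goldsby and Kohlstedt); the prefactors and activation energies are illustrative placeholders rather than the study's calibrated values.

    ```python
    # Illustrative composite flow law: total strain rate is the sum of
    # dislocation creep (strong stress dependence, n ~ 4) and grain boundary
    # sliding (weaker stress dependence, n ~ 1.8, faster for small grains).
    import math

    R = 8.314  # gas constant, J/(mol K)

    def strain_rates(stress_mpa, temp_k, grain_m):
        disl = 4.0e5 * stress_mpa**4.0 * math.exp(-60_000 / (R * temp_k))
        gbs = (3.9e-3 * stress_mpa**1.8 * grain_m**-1.4
               * math.exp(-49_000 / (R * temp_k)))
        return disl, gbs  # strain rates, 1/s

    def effective_stress_exponent(stress_mpa, temp_k, grain_m, h=1e-4):
        """Numerical d(ln strain rate)/d(ln stress): 'sensitivity to stress'."""
        lo = sum(strain_rates(stress_mpa * (1 - h), temp_k, grain_m))
        hi = sum(strain_rates(stress_mpa * (1 + h), temp_k, grain_m))
        return (math.log(hi) - math.log(lo)) / (math.log1p(h) - math.log1p(-h))

    # Same stress and temperature, two hypothetical grain sizes: fine-grained
    # ice deforms mostly by grain boundary sliding; coarse-grained ice by
    # dislocation creep, making its flow more sensitive to stress.
    for grain in (1e-3, 1e-2):  # 1 mm vs. 1 cm crystals
        disl, gbs = strain_rates(0.1, 253.0, grain)
        n_eff = effective_stress_exponent(0.1, 253.0, grain)
        dominant = "dislocation creep" if disl > gbs else "grain boundary sliding"
        print(f"{grain * 1e3:>4.0f} mm grains: {dominant}, effective n ~ {n_eff:.1f}")
    ```

    With these placeholder coefficients, the effective exponent shifts from roughly 2 where grain boundary sliding dominates to nearly 4 where dislocation creep does, which is the kind of spatial variability the study maps across the ice sheet.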

  • Getting to systemic sustainability

    Add up the commitments from the Paris Agreement, the Glasgow Climate Pact, and various commitments made by cities, countries, and businesses, and the world would be able to hold the global average temperature increase to 1.9 degrees Celsius above preindustrial levels, says Ani Dasgupta, the president and chief executive officer of the World Resources Institute (WRI).

    While that is well above the 1.5 C threshold that many scientists agree would limit the most severe impacts of climate change, it is below the 2.0 degree threshold that could lead to even more catastrophic impacts, such as the collapse of ice sheets and a 30-foot rise in sea levels.

    However, Dasgupta notes, actions have so far not matched up with commitments.

    “There’s a huge gap between commitment and outcomes,” Dasgupta said during his talk, “Energizing the global transition,” at the 2024 Earth Day Colloquium co-hosted by the MIT Energy Initiative and MIT Department of Earth, Atmospheric and Planetary Sciences, and sponsored by the Climate Nucleus.

    Dasgupta noted that oil companies did $6 trillion worth of business across the world last year — $1 trillion more than they were planning. About 7 percent of the world’s remaining tropical forests were destroyed during that same time, he added, and global inequality grew even worse than before.

    “None of these things were illegal, because the system we have today produces these outcomes,” he said. “My point is that it’s not one thing that needs to change. The whole system needs to change.”

    People, climate, and nature

    Dasgupta, who previously held positions in nonprofits in India and at the World Bank, is a recognized leader in sustainable cities, poverty alleviation, and building cultures of inclusion. Under his leadership, WRI, a global research nonprofit that studies sustainable practices with the goal of fundamentally transforming the world’s food, land and water, energy, and cities, adopted a new five-year strategy called “Getting the Transition Right for People, Nature, and Climate 2023-2027.” It focuses on creating new economic opportunities to meet people’s essential needs, restore nature, and rapidly lower emissions, while building resilient communities. In fact, during his talk, Dasgupta said that his organization has moved away from talking about initiatives in terms of their impact on greenhouse gas emissions — instead taking a more holistic view of sustainability.

    “There is no net zero without nature,” Dasgupta said. He showed a slide with a graphic illustrating potential progress toward net-zero goals. “If nature gets diminished, that chart becomes even steeper. It’s very steep right now, but natural systems absorb carbon dioxide. So, if the natural systems keep getting destroyed, that curve becomes harder and harder.”

    A focus on people is necessary, Dasgupta said, in part because of the unequal climate impacts that the rich and the poor are likely to face in the coming years. “If you made it to this room, you will not be impacted by climate change,” he said. “You have resources to figure out what to do about it. The people who get impacted are people who don’t have resources. It is immensely unfair. Our belief is, if we don’t do climate policy that helps people directly, we won’t be able to make progress.”

    Where to start?

    Although Dasgupta stressed that systemic change is needed to bring carbon emissions in line with long-term climate goals, he made the case that it is unrealistic to implement this change around the globe all at once.
    “This transition will not happen in 196 countries at the same time,” he said. “The question is, how do we get to the tipping point so that it happens at scale? We’ve worked the past few years to ask the question, what is it you need to do to create this tipping point for change?”

    Analysts at WRI looked for countries that are large producers of carbon, those with substantial tropical forest cover, and those with large quantities of people living in poverty. “We basically tried to draw a map of, where are the biggest challenges for climate change?” Dasgupta said.

    That map features a relative handful of countries, including the United States, Mexico, China, Brazil, South Africa, India, and Indonesia. Dasgupta said, “Our argument is that, if we could figure out and focus all our efforts to help these countries transition, that will create a ripple effect — of understanding technology, understanding the market, understanding capacity, and understanding the politics of change that will unleash how the rest of these regions will bring change.”

    Spotlight on the subcontinent

    Dasgupta used one of these countries, his native India, to illustrate the nuanced challenges and opportunities presented by various markets around the globe. In India, he noted, there are around 3 million projected jobs tied to the country’s transition to renewable energy. However, that number is dwarfed by the 10 to 12 million jobs per year the Indian economy needs to create simply to keep up with population growth.

    “Every developing country faces this question — how to keep growing in a way that reduces their carbon footprint,” Dasgupta said.

    Five states in India worked with WRI to pool their buying power and procure 5,000 electric buses, saving 60 percent of the cost as a result. Over the next two decades, Dasgupta said, the fleet of electric buses in those five states is expected to increase to 800,000.

    In the Indian state of Rajasthan, Dasgupta said, 59 percent of power already comes from solar energy. At times, Rajasthan produces more solar than it can use, and officials are exploring ways to either store the excess energy or sell it to other states. But in another state, Jharkhand, where much of the country’s coal is sourced, only 5 percent of power comes from solar. Officials in Jharkhand have reached out to WRI to discuss how to transition their energy economy, as they recognize that coal will fall out of favor in the future, Dasgupta said.

    “The complexities of the transition are enormous in a country this big,” Dasgupta said. “This is true in most large countries.”

    The road ahead

    Despite the challenges ahead, the colloquium was also marked by notes of optimism. In his opening remarks, Robert Stoner, the founding director of the MIT Tata Center for Technology and Design, pointed out how much progress has been made on environmental cleanup since the first Earth Day in 1970. “The world was a very different, much dirtier, place in many ways,” Stoner said. “Our air was a mess, our waterways were a mess, and it was beginning to be noticeable. Since then, Earth Day has become an important part of the fabric of American and global society.”
    While Dasgupta said that the world presently lacks the “orchestration” among various stakeholders needed to bring climate change under control, he expressed hope that collaboration in key countries could accelerate progress.

    “I strongly believe that what we need is a very different way of collaborating radically — across organizations like yours, organizations like ours, businesses, and governments,” Dasgupta said. “Otherwise, this transition will not happen at the scale and speed we need.”

  • School of Engineering welcomes new faculty

    The School of Engineering welcomes 15 new faculty members across six of its academic departments. This new cohort of faculty members, who have either recently started their roles at MIT or will start within the next year, conduct research across a diverse range of disciplines.

    Many of these new faculty specialize in research that intersects with multiple fields. In addition to positions in the School of Engineering, a number of these faculty have positions at other units across MIT. Faculty with appointments in the Department of Electrical Engineering and Computer Science (EECS) report into both the School of Engineering and the MIT Stephen A. Schwarzman College of Computing. This year, new faculty also have joint appointments between the School of Engineering and the School of Humanities, Arts, and Social Sciences and the School of Science.

    “I am delighted to welcome this cohort of talented new faculty to the School of Engineering,” says Anantha Chandrakasan, chief innovation and strategy officer, dean of engineering, and Vannevar Bush Professor of Electrical Engineering and Computer Science. “I am particularly struck by the interdisciplinary approach many of these new faculty take in their research. They are working in areas that are poised to have tremendous impact. I look forward to seeing them grow as researchers and educators.”

    The new engineering faculty include:

    Stephen Bates joined the Department of Electrical Engineering and Computer Science as an assistant professor in September 2023. He is also a member of the Laboratory for Information and Decision Systems (LIDS). Bates uses data and AI for reliable decision-making in the presence of uncertainty. In particular, he develops tools for statistical inference with AI models, data impacted by strategic behavior, and settings with distribution shift. Bates also works on applications in life sciences and sustainability. He previously worked as a postdoc in the Statistics and EECS departments at the University of California at Berkeley (UC Berkeley). Bates received a BS in statistics and mathematics at Harvard University and a PhD from Stanford University.

    Abigail Bodner joined the Department of EECS and Department of Earth, Atmospheric and Planetary Sciences as an assistant professor in January. She is also a member of LIDS. Bodner’s research interests span climate, physical oceanography, geophysical fluid dynamics, and turbulence. Previously, she worked as a Simons Junior Fellow at the Courant Institute of Mathematical Sciences at New York University. Bodner received her BS in geophysics and mathematics and MS in geophysics from Tel Aviv University, and her SM in applied mathematics and PhD from Brown University.

    Andreea Bobu ’17 will join the Department of Aeronautics and Astronautics as an assistant professor in July. Her research sits at the intersection of robotics, mathematical human modeling, and deep learning. Previously, she was a research scientist at the Boston Dynamics AI Institute, focusing on how robots and humans can efficiently arrive at shared representations of their tasks for more seamless and reliable interactions. Bobu earned a BS in computer science and engineering from MIT and a PhD in electrical engineering and computer science from UC Berkeley.

    Suraj Cheema will join the Department of Materials Science and Engineering, with a joint appointment in the Department of EECS, as an assistant professor in July.
    His research explores atomic-scale engineering of electronic materials to tackle challenges related to energy consumption, storage, and generation, aiming for more sustainable microelectronics. This spans computing and energy technologies via integrated ferroelectric devices. He previously worked as a postdoc at UC Berkeley. Cheema earned a BS in applied physics and applied mathematics from Columbia University and a PhD in materials science and engineering from UC Berkeley.

    Samantha Coday joins the Department of EECS as an assistant professor in July. She will also be a member of the MIT Research Laboratory of Electronics. Her research interests include ultra-dense power converters enabling renewable energy integration, hybrid electric aircraft, and future space exploration. To enable high-performance converters for these critical applications, her research focuses on the optimization, design, and control of hybrid switched-capacitor converters. Coday earned a BS in electrical engineering and mathematics from Southern Methodist University and an MS and a PhD in electrical engineering and computer science from UC Berkeley.

    Mitchell Gordon will join the Department of EECS as an assistant professor in July. He will also be a member of the MIT Computer Science and Artificial Intelligence Laboratory. In his research, Gordon designs interactive systems and evaluation approaches that bridge principles of human-computer interaction with the realities of machine learning. He currently works as a postdoc at the University of Washington. Gordon received a BS from the University of Rochester, and an MS and PhD from Stanford University, all in computer science.

    Kaiming He joined the Department of EECS as an associate professor in February. He will also be a member of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). His research interests cover a wide range of topics in computer vision and deep learning. He is currently focused on building computer models that can learn representations and develop intelligence from and for the complex world. Long term, he hopes to augment human intelligence with improved artificial intelligence. Before joining MIT, He was a research scientist at Facebook AI. He earned a BS from Tsinghua University and a PhD from the Chinese University of Hong Kong.

    Anna Huang SM ’08 will join the departments of EECS and Music and Theater Arts as an assistant professor in September. She will help develop graduate programming focused on music technology. Previously, she spent eight years with Magenta at Google Brain and DeepMind, spearheading efforts in generative modeling, reinforcement learning, and human-computer interaction to support human-AI partnerships in music-making. She is the creator of Music Transformer and Coconet (which powered the Bach Google Doodle). She was a judge and organizer for the AI Song Contest. Huang holds a Canada CIFAR AI Chair at Mila, a BM in music composition and a BS in computer science from the University of Southern California, an MS from the MIT Media Lab, and a PhD from Harvard University.

    Yael Kalai PhD ’06 will join the Department of EECS as a professor in September. She is also a member of CSAIL. Her research interests include cryptography, the theory of computation, and security and privacy. Kalai currently focuses on both the theoretical and real-world applications of cryptography, including work on succinct and easily verifiable non-interactive proofs.
    She received her bachelor’s degree from the Hebrew University of Jerusalem, a master’s degree from the Weizmann Institute of Science, and a PhD from MIT.

    Sendhil Mullainathan will join the departments of EECS and Economics as a professor in July. His research uses machine learning to understand complex problems in human behavior, social policy, and medicine. Previously, Mullainathan spent five years at MIT before joining the faculty at Harvard in 2004, and then the University of Chicago in 2018. He received his BA in computer science, mathematics, and economics from Cornell University and his PhD from Harvard University.

    Alex Rives will join the Department of EECS as an assistant professor in September, with a core membership in the Broad Institute of MIT and Harvard. In his research, Rives is focused on AI for scientific understanding, discovery, and design for biology. Rives worked with Meta as a New York University graduate student, where he founded and led the Evolutionary Scale Modeling team that developed large language models for proteins. Rives received his BS in philosophy and biology from Yale University and is completing his PhD in computer science at NYU.

    Sungho Shin will join the Department of Chemical Engineering as an assistant professor in July. His research interests include control theory, optimization algorithms, high-performance computing, and their applications to decision-making in complex systems, such as energy infrastructures. Shin is a postdoc in the Mathematics and Computer Science Division at Argonne National Laboratory. He received a BS in mathematics and chemical engineering from Seoul National University and a PhD in chemical engineering from the University of Wisconsin-Madison.

    Jessica Stark joined the Department of Biological Engineering as an assistant professor in January. In her research, Stark is developing technologies to realize the largely untapped potential of cell-surface sugars, called glycans, for immunological discovery and immunotherapy. Previously, Stark was an American Cancer Society postdoc at Stanford University. She earned a BS in chemical and biomolecular engineering from Cornell University and a PhD in chemical and biological engineering at Northwestern University.

    Thomas John “T.J.” Wallin joined the Department of Materials Science and Engineering as an assistant professor in January. As a researcher, Wallin’s interests lie in advanced manufacturing of functional soft matter, with an emphasis on soft wearable technologies and their applications in human-computer interfaces. Previously, he was a research scientist at Meta’s Reality Labs Research working on their haptic interaction team. Wallin earned a BS in physics and chemistry from the College of William and Mary, and an MS and PhD in materials science and engineering from Cornell University.

    Gioele Zardini joined the Department of Civil and Environmental Engineering as an assistant professor in September. He will also join LIDS and the Institute for Data, Systems, and Society. Driven by societal challenges, Zardini’s research interests include the co-design of sociotechnical systems, compositionality in engineering, applied category theory, decision and control, optimization, and game theory, with society-critical applications to intelligent transportation systems, autonomy, and complex networks and infrastructures.
    He received his BS, MS, and PhD in mechanical engineering with a focus on robotics, systems, and control from ETH Zurich, and spent time at MIT, Stanford University, and Motional.

  • H2 underground

    In 1987 in a village in Mali, workers were digging a water well when they felt a rush of air. One of the workers was smoking a cigarette, and the air caught fire, burning a clear blue flame. The well was capped at the time, but in 2012, it was tapped to provide energy for the village, powering a generator for nine years.

    The fuel source: geologic hydrogen.

    For decades, hydrogen has been discussed as a potentially revolutionary fuel. But efforts to produce “green” hydrogen (splitting water into hydrogen and oxygen using renewable electricity), “grey” hydrogen (making hydrogen from methane and releasing the byproduct carbon dioxide (CO2) into the atmosphere), “brown” hydrogen (produced through the gasification of coal), and “blue” hydrogen (making hydrogen from methane but capturing the CO2) have thus far proven expensive, energy-intensive, or both.

    Enter geologic hydrogen. Also known as “orange,” “gold,” “white,” “natural,” and even “clear” hydrogen, geologic hydrogen is generated by natural geochemical processes in the Earth’s crust. While there is still much to learn, a growing number of researchers and industry leaders are hopeful that it may turn out to be an abundant and affordable resource lying right beneath our feet.

    “There’s a tremendous amount of uncertainty about this,” noted Robert Stoner, the founding director of the MIT Tata Center for Technology and Design, in his opening remarks at the MIT Energy Initiative (MITEI) Spring Symposium. “But the prospect of readily producible clean hydrogen showing up all over the world is a potential near-term game changer.”

    A new hope for hydrogen

    This April, MITEI gathered researchers, industry leaders, and academic experts from around MIT and the world to discuss the challenges and opportunities posed by geologic hydrogen in a daylong symposium entitled “Geologic hydrogen: Are orange and gold the new green?” The field is so new that, until a year ago, the U.S. Department of Energy (DOE)’s website incorrectly claimed that hydrogen only occurs naturally on Earth in compound forms, chemically bonded to other elements.

    “There’s a common misconception that hydrogen doesn’t occur naturally on Earth,” said Geoffrey Ellis, a research geologist with the U.S. Geological Survey. He noted that natural hydrogen production tends to occur in different locations from where oil and natural gas are likely to be discovered, which explains why geologic hydrogen discoveries have been relatively rare, at least until recently.

    “Petroleum exploration is not targeting hydrogen,” Ellis said. “Companies are simply not really looking for it, they’re not interested in it, and oftentimes they don’t measure for it. The energy industry spends billions of dollars every year on exploration with very sophisticated technology, and still they drill dry holes all the time. So I think it’s naive to think that we would suddenly be finding hydrogen all the time when we’re not looking for it.”

    In fact, the number of researchers and startup energy companies with targeted efforts to characterize geologic hydrogen has increased over the past several years — and these searches have uncovered new prospects, said Mary Haas, a venture partner at Breakthrough Energy Ventures. “We’ve seen a dramatic uptick in exploratory activity, now that there is a focused effort by a small community worldwide. At Breakthrough Energy, we are excited about the potential of this space, as well as our role in accelerating its progress,” she said.
    Haas noted that if geologic hydrogen could be produced at $1 per kilogram, this would be consistent with the DOE’s targeted “liftoff” point for the energy source. “If that happens,” she said, “it would be transformative.”

    She also noted that only a small portion of identified hydrogen sites are currently under commercial exploration, and she cautioned that it’s not yet clear how large a role the resource might play in the transition to green energy. But, she said, “It’s worthwhile and important to find out.”

    Inventing a new energy subsector

    Geologic hydrogen is produced when water reacts with iron-rich minerals in rock. Researchers and industry are exploring how to stimulate this natural production by pumping water into promising deposits.

    In any new exploration area, teams must ask a series of questions to qualify the site, said Avon McIntyre, the executive director of HyTerra Ltd., an Australian company focused on the exploration and production of geologic hydrogen. These questions include: Is the geology favorable? Does local legislation allow for exploration and production? Does the site offer a clear path to value? And what are the carbon implications of producing hydrogen at the site?

    “We have to be humble,” McIntyre said. “We can’t be too prescriptive and think that we’ll leap straight into success. We have a unique opportunity to stop and think about what this industry will look like, how it will work, and how we can bring together various disciplines.” This was a theme that arose multiple times over the course of the symposium: the idea that many different stakeholders — including those from academia, industry, and government — will need to work together to explore the viability of geologic hydrogen and bring it to market at scale.

    In addition to the potential for hydrogen production to give rise to greenhouse gas emissions (in cases, for instance, where hydrogen deposits are contaminated with natural gas), researchers and industry must also consider landscape deformation and even potential seismic implications, said Bradford Hager, the Cecil and Ida Green Professor of Earth Sciences in the MIT Department of Earth, Atmospheric and Planetary Sciences. The surface impacts of hydrogen exploration and production will likely be similar to those caused by the hydro-fracturing process (“fracking”) used in oil and natural gas extraction, Hager said.

    “There will be unavoidable surface deformation. In most places, you don’t want this if there’s infrastructure around,” Hager said. “Seismicity in the stimulated zone itself should not be a problem, because the areas are tested first. But we need to avoid stressing surrounding brittle rocks.”

    McIntyre noted that the commercial case for hydrogen remains a challenge to quantify, without even a “spot” price that companies can use to make economic calculations. Early on, he said, capturing helium at hydrogen exploration sites could be a path to early cash flow, but that may ultimately serve as a “distraction” as teams attempt to scale up to the primary goal of hydrogen production. He also noted that it is not even yet clear whether hard rock, soft rock, or underwater environments hold the most potential for geologic hydrogen, but all show promise.

    “If you stack all of these things together,” McIntyre said, “what we end up doing may look very different from what we think we’re going to do right now.”

    The path ahead

    While the long-term prospects for geologic hydrogen are shrouded in uncertainty, most speakers at the symposium struck a tone of optimism.
    Ellis noted that the DOE has dedicated $20 million in funding to a stimulated hydrogen program. Paris Smalls, the co-founder and CEO of Eden GeoPower Inc., said “we think there is a path” to producing geologic hydrogen below the $1 per kilogram threshold. And Iwnetim Abate, an assistant professor in the MIT Department of Materials Science and Engineering, said that geologic hydrogen opens up the idea of Earth as a “factory to produce clean fuels,” utilizing the subsurface heat and pressure instead of relying on burning fossil fuels or natural gas for the same purpose.

    “Earth has had 4.6 billion years to do these experiments,” said Oliver Jagoutz, a professor of geology in the MIT Department of Earth, Atmospheric and Planetary Sciences. “So there is probably a very good solution out there.”

    Alexis Templeton, a professor of geological sciences at the University of Colorado at Boulder, made the case for moving quickly. “Let’s go to pilot, faster than you might think,” she said. “Why? Because we do have some systems that we understand. We could test the engineering approaches and make sure that we are doing the right tool development, the right technology development, the right experiments in the lab. To do that, we desperately need data from the field.”

    “This is growing so fast,” Templeton added. “The momentum and the development of geologic hydrogen is really quite substantial. We need to start getting data at scale. And then, I think, more people will jump off the sidelines very quickly.”

  • Researchers develop a detector for continuously monitoring toxic gases

    Most systems used to detect toxic gases in industrial or domestic settings can be used only once, or at best a few times. Now, researchers at MIT have developed a detector that could provide continuous monitoring for the presence of these gases, at low cost.

    The new system combines two existing technologies, bringing them together in a way that preserves the advantages of each while avoiding their limitations. The team used a material called a metal-organic framework, or MOF, which is highly sensitive to tiny traces of gas but whose performance quickly degrades, and combined it with a polymer material that is highly durable and easier to process, but much less sensitive.

    The results are reported today in the journal Advanced Materials, in a paper by MIT professors Aristide Gumyusenge, Mircea Dinca, Heather Kulik, and Jesus del Alamo, graduate student Heejung Roh, and postdocs Dong-Ha Kim, Yeongsu Cho, and Young-Moo Jo.

    Highly porous and with large surface areas, MOFs come in a variety of compositions. Some can be insulators, but the ones used for this work are highly electrically conductive. With their sponge-like form, they are effective at capturing molecules of various gases, and the sizes of their pores can be tailored to make them selective for particular kinds of gases. “If you are using them as a sensor, you can recognize if the gas is there if it has an effect on the resistivity of the MOF,” says Gumyusenge, the paper’s senior author and the Merton C. Flemings Career Development Assistant Professor of Materials Science and Engineering.

    The drawback for these materials’ use as detectors for gases is that they readily become saturated, and then can no longer detect and quantify new inputs. “That’s not what you want. You want to be able to detect and reuse,” Gumyusenge says. “So, we decided to use a polymer composite to achieve this reversibility.”

    The team used a class of conductive polymers that Gumyusenge and his co-workers had previously shown can respond to gases without permanently binding to them. “The polymer, even though it doesn’t have the high surface area that the MOFs do, will at least provide this recognize-and-release type of phenomenon,” he says.

    The team combined the polymers in a liquid solution along with the MOF material in powdered form, and deposited the mixture on a substrate, where it dries into a uniform, thin coating. By combining the polymer, with its quick detection capability, and the more sensitive MOFs, in a one-to-one ratio, he says, “suddenly we get a sensor that has both the high sensitivity we get from the MOF and the reversibility that is enabled by the presence of the polymer.”

    The material changes its electrical resistance when molecules of the gas are temporarily trapped in the material. These changes in resistance can be continuously monitored by simply attaching an ohmmeter to track the resistance over time. Gumyusenge and his students demonstrated the composite material’s ability to detect nitrogen dioxide, a toxic gas produced by many kinds of combustion, in a small lab-scale device. After 100 cycles of detection, the material was still maintaining its baseline performance within a margin of about 5 to 10 percent, demonstrating its long-term use potential.

    In addition, this material has far greater sensitivity than most presently used detectors for nitrogen dioxide, the team reports. This gas is often detected after the use of gas stoves or ovens.
And, with this gas recently linked to many asthma cases in the U.S., reliable detection in low concentrations is important. The team demonstrated that this new composite could detect, reversibly, the gas at concentrations as low as 2 parts per million.While their demonstration was specifically aimed at nitrogen dioxide, Gumyusenge says, “we can definitely tailor the chemistry to target other volatile molecules,” as long as they are small polar analytes, “which tend to be most of the toxic gases.”Besides being compatible with a simple hand-held detector or a smoke-alarm type of device, one advantage of the material is that the polymer allows it to be deposited as an extremely thin uniform film, unlike regular MOFs, which are generally in an inefficient powder form. Because the films are so thin, there is little material needed and production material costs could be low; the processing methods could be typical of those used for industrial coating processes. “So, maybe the limiting factor will be scaling up the synthesis of the polymers, which we’ve been synthesizing in small amounts,” Gumyusenge says.“The next steps will be to evaluate these in real-life settings,” he says. For example, the material could be applied as a coating on chimneys or exhaust pipes to continuously monitor gases through readings from an attached resistance monitoring device. In such settings, he says, “we need tests to check if we truly differentiate it from other potential contaminants that we might have overlooked in the lab setting. Let’s put the sensors out in real-world scenarios and see how they do.”The work was supported by the MIT Climate and Sustainability Consortium (MCSC), the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) at MIT, and the U.S. Department of Energy. More
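
    As a rough illustration of the continuous-readout idea described above (not the team’s actual instrumentation), the sketch below polls a resistance reading, tracks a rolling clean-air baseline, and flags an excursion; because the composite releases the gas, readings should return to baseline and monitoring resumes. The read_resistance stub, the 2 percent threshold, and the one-second polling period are all assumptions made for the example.

```python
# Sketch of a continuous-monitoring readout for a chemiresistive film.
# Illustrative assumptions: read_resistance(), the 2% threshold, and the
# 1 s polling period are invented for this example, not from the paper.
import time
from collections import deque

def read_resistance() -> float:
    """Placeholder for the ohmmeter/ADC attached to the sensor film."""
    raise NotImplementedError("wire this to the actual instrument")

def monitor(threshold=0.02, window=60, period_s=1.0):
    baseline = deque(maxlen=window)  # recent clean-air readings
    while True:
        r = read_resistance()
        if len(baseline) == baseline.maxlen:
            ref = sum(baseline) / len(baseline)
            drift = (r - ref) / ref  # fractional resistance change
            if abs(drift) > threshold:
                print(f"gas event: resistance shifted {drift:+.1%}")
            else:
                baseline.append(r)  # update baseline only in clean air
        else:
            baseline.append(r)  # still building the initial baseline
        time.sleep(period_s)
```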

  • in

    Elaine Liu: Charging ahead

    MIT senior Elaine Siyu Liu doesn’t own an electric car, or any car. But she sees the impact of electric vehicles (EVs) and renewables on the grid as two pieces of an energy puzzle she wants to solve.

    The U.S. Department of Energy reports that the number of public and private EV charging ports nearly doubled in the past three years, and many more are in the works. Users expect to plug in at their convenience, charge up, and drive away. But what if the grid can’t handle it?

    Electricity demand, long stagnant in the United States, has spiked due to EVs, data centers that drive artificial intelligence, and industry. Grid planners forecast an increase of 2.6 percent to 4.7 percent in electricity demand over the next five years, according to data reported to federal regulators. Everyone from EV charging-station operators to utility-system operators needs help navigating a system in flux.

    That’s where Liu’s work comes in.

    Liu, who is studying mathematics and electrical engineering and computer science (EECS), is interested in distribution — how to get electricity from a centralized location to consumers. “I see power systems as a good venue for theoretical research as an application tool,” she says. “I’m interested in it because I’m familiar with the optimization and probability techniques used to map this level of problem.”

    Liu grew up in Beijing, then after middle school moved with her parents to Canada and enrolled in a prep school in Oakville, Ontario, 30 miles outside Toronto.

    Liu stumbled upon an opportunity to take part in a regional math competition and eventually started a math club, but at the time, the school’s culture surrounding math surprised her. Being exposed to what seemed to be some students’ aversion to math, she says, “I don’t think my feelings about math changed. I think my feelings about how people feel about math changed.”

    Liu brought her passion for math to MIT. The summer after her sophomore year, she took on the first of two Undergraduate Research Opportunity Program projects she completed with electric power system expert Marija Ilić, a joint adjunct professor in EECS and a senior research scientist at the MIT Laboratory for Information and Decision Systems.

    Predicting the grid

    Since 2022, with the help of funding from the MIT Energy Initiative (MITEI), Liu has been working with Ilić on identifying ways in which the grid is challenged.

    One factor is the addition of renewables to the energy pipeline. A gap in wind or sun might cause a lag in power generation. If this lag occurs during peak demand, it could mean trouble for a grid already taxed by extreme weather and other unforeseen events.

    If you think of the grid as a network of dozens of interconnected parts, once an element in the network fails — say, a tree downs a transmission line — the electricity that used to go through that line needs to be rerouted. This may overload other lines, creating what’s known as a cascade failure. (A simplified illustration of this dynamic appears in the code sketch at the end of this story.)

    “This all happens really quickly and has very large downstream effects,” Liu says. “Millions of people will have instant blackouts.”

    Even if the system can handle a single downed line, Liu notes that “the nuance is that there are now a lot of renewables, and renewables are less predictable. You can’t predict a gap in wind or sun. When such things happen, there’s suddenly not enough generation and too much demand. So the same kind of failure would happen, but on a larger and more uncontrollable scale.”

    Renewables’ varying output has the added complication of causing voltage fluctuations. “We plug in our devices expecting a voltage of 110, but because of oscillations, you will never get exactly 110,” Liu says. “So even when you can deliver enough electricity, if you can’t deliver it at the specific voltage level that is required, that’s a problem.”

    Liu and Ilić are building a model to predict how and when the grid might fail. Lacking access to privatized data, Liu runs her models with European industry data and test cases made available to universities. “I have a fake power grid that I run my experiments on,” she says. “You can take the same tool and run it on the real power grid.”

    Liu’s model predicts cascade failures as they evolve. Supply from a wind generator, for example, might drop precipitously over the course of an hour. The model analyzes which substations and which households will be affected. “After we know we need to do something, this prediction tool can enable system operators to strategically intervene ahead of time,” Liu says.

    Dictating price and power

    Last year, Liu turned her attention to EVs, which present a different kind of challenge than renewables.

    In 2022, S&P Global reported that lawmakers argued that the U.S. Federal Energy Regulatory Commission’s (FERC) wholesale power rate structure was unfair to EV charging station operators.

    Operators pay by the kilowatt-hour, and some also pay more for electricity during peak demand hours. Just a few EVs charging during those hours can drive up an operator’s costs, even if its overall energy use is low.

    Anticipating how much power EVs will need is more complex than predicting the energy needed for, say, heating and cooling. Unlike buildings, EVs move around, making it difficult to predict energy consumption at any given time. “If users don’t like the price at one charging station or how long the line is, they’ll go somewhere else,” Liu says. “Where to allocate EV chargers is a problem that a lot of people are dealing with right now.”

    One approach would be for FERC to dictate to EV users when and where to charge and what price they’ll pay. To Liu, this isn’t an attractive option. “No one likes to be told what to do,” she says.

    Liu is looking at optimizing a market-based solution that would be acceptable to top-level energy producers — wind and solar farms and nuclear plants — all the way down to the municipal aggregators that secure electricity at competitive rates and oversee distribution to the consumer.

    Analyzing the location, movement, and behavior patterns of all the EVs driven daily in Boston and other major energy hubs, she notes, could help demand aggregators determine where to place EV chargers and how much to charge consumers, akin to Walmart deciding how much to mark up wholesale eggs in different markets.

    Last year, Liu presented the work at MITEI’s annual research conference. This spring, Liu and Ilić are submitting a paper on the market optimization analysis to a journal of the Institute of Electrical and Electronics Engineers.

    Liu has come to terms with her early introduction to attitudes toward STEM that struck her as markedly different from those in China. She says, “I think the (prep) school had a very strong ‘math is for nerds’ vibe, especially for girls. There was a ‘why are you giving yourself more work?’ kind of mentality. But over time, I just learned to disregard that.”

    After graduation, Liu, the only undergraduate researcher in Ilić’s MIT Electric Energy Systems Group, plans to apply to fellowships and graduate programs in EECS, applied math, and operations research.

    Based on her analysis, Liu says that the market could effectively determine the price and availability of charging stations. Offering incentives for EV owners to charge during the day, rather than at night when demand is high, could help avoid grid overload and prevent extra costs to operators. “People would still retain the ability to go to a different charging station if they chose to,” she says. “I’m arguing that this works.”
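
    Liu’s actual tool solves power-flow equations over realistic test networks; purely to illustrate the cascading dynamic described above, here is a minimal toy simulation. The line names, flows, capacities, and the even-redistribution rule are all invented stand-ins, not her model.

```python
# Toy cascade-failure simulation (illustrative only, not Liu's model).
# Each transmission line carries a flow and has a capacity. When a line
# trips, its stranded flow is pushed onto the surviving lines, which may
# overload and trip in turn. A real model would re-solve power-flow
# equations at each step instead of redistributing flow evenly.

def simulate_cascade(lines, first_failure):
    """lines: {name: {"flow": MW, "capacity": MW}}; returns failure order."""
    failed = [first_failure]
    stranded = lines[first_failure]["flow"]
    survivors = {k: dict(v) for k, v in lines.items() if k != first_failure}
    while stranded > 0 and survivors:
        share = stranded / len(survivors)  # naive even redistribution
        stranded = 0.0
        for name in list(survivors):
            survivors[name]["flow"] += share
            if survivors[name]["flow"] > survivors[name]["capacity"]:
                stranded += survivors[name]["flow"]  # trips; flow stranded again
                failed.append(name)
                del survivors[name]
    return failed

grid = {
    "A": {"flow": 80.0, "capacity": 100.0},
    "B": {"flow": 90.0, "capacity": 100.0},
    "C": {"flow": 60.0, "capacity": 100.0},
    "D": {"flow": 50.0, "capacity": 120.0},
}
print(simulate_cascade(grid, "B"))  # one downed line takes out the rest
```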

  • in

    Study: Heavy snowfall and rain may contribute to some earthquakes

    When scientists look for an earthquake’s cause, their search often starts underground. As centuries of seismic studies have made clear, it is the collision of tectonic plates and the movement of subsurface faults and fissures that primarily trigger a temblor. But MIT scientists have now found that certain weather events may also play a role in setting off some quakes.

    In a study appearing today in Science Advances, the researchers report that episodes of heavy snowfall and rain likely contributed to a swarm of earthquakes over the past several years in northern Japan. The study is the first to show that climate conditions could initiate some quakes.

    “We see that snowfall and other environmental loading at the surface impacts the stress state underground, and the timing of intense precipitation events is well-correlated with the start of this earthquake swarm,” says study author William Frank, an assistant professor in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “So, climate obviously has an impact on the response of the solid earth, and part of that response is earthquakes.”

    The new study focuses on a series of ongoing earthquakes in Japan’s Noto Peninsula. The team discovered that seismic activity in the region is surprisingly synchronized with certain changes in underground pressure, and that those changes are influenced by seasonal patterns of snowfall and precipitation. The scientists suspect that this connection between quakes and climate may not be unique to Japan and could play a role in shaking up other parts of the world.

    Looking to the future, they predict that the climate’s influence on earthquakes could become more pronounced with global warming. “If we’re going into a climate that’s changing, with more extreme precipitation events, and we expect a redistribution of water in the atmosphere, oceans, and continents, that will change how the Earth’s crust is loaded,” Frank adds. “That will have an impact for sure, and it’s a link we could further explore.”

    The study’s lead author is former MIT research associate Qing-Yu Wang (now at Grenoble Alpes University); co-authors include EAPS postdoc Xin Cui, Yang Lu of the University of Vienna, Takashi Hirose of Tohoku University, and Kazushige Obara of the University of Tokyo.

    Seismic speed

    Since late 2020, hundreds of small earthquakes have shaken Japan’s Noto Peninsula — a finger of land that curves north from the country’s main island into the Sea of Japan. Unlike a typical earthquake sequence, which begins with a main shock that gives way to a series of aftershocks before dying out, Noto’s seismic activity is an “earthquake swarm” — a pattern of multiple, ongoing quakes with no obvious main shock, or seismic trigger.

    The MIT team, along with their colleagues in Japan, aimed to spot any patterns in the swarm that would explain the persistent quakes. They started with the Japanese Meteorological Agency’s catalog of earthquakes, which provides data on seismic activity throughout the country over time. They focused on quakes in the Noto Peninsula over the last 11 years, during which the region has experienced episodic earthquake activity, including the most recent swarm.

    Counting the seismic events that occurred in the region over time, the team found that the timing of quakes prior to 2020 appeared sporadic and unrelated; from late 2020 onward, earthquakes grew more intense and clustered in time, signaling the start of the swarm.

    The scientists then looked to a second dataset of seismic measurements taken by monitoring stations over the same 11-year period. Each station continuously records any displacement, or local shaking, that occurs. The shaking recorded from one station to another can give scientists an idea of how fast a seismic wave travels between stations. This “seismic velocity” is related to the structure of the Earth through which the seismic wave travels. Wang used the station measurements to calculate the seismic velocity between every station in and around Noto over the last 11 years.

    The researchers generated an evolving picture of seismic velocity beneath the Noto Peninsula and observed a surprising pattern: In 2020, around when the earthquake swarm is thought to have begun, changes in seismic velocity appeared to be synchronized with the seasons. “We then had to explain why we were observing this seasonal variation,” Frank says.

    Snow pressure

    The team wondered whether environmental changes from season to season could influence the underlying structure of the Earth in a way that would set off an earthquake swarm. Specifically, they looked at how seasonal precipitation would affect the underground “pore fluid pressure” — the amount of pressure that fluids in the Earth’s cracks and fissures exert within the bedrock.

    “When it rains or snows, that adds weight, which increases pore pressure, which allows seismic waves to travel through slower,” Frank explains. “When all that weight is removed, through evaporation or runoff, all of a sudden, that pore pressure decreases and seismic waves are faster.”

    Wang and Cui developed a hydromechanical model of the Noto Peninsula to simulate the underlying pore pressure over the last 11 years in response to seasonal changes in precipitation. They fed the model meteorological data from the same period, including measurements of daily snowfall, rainfall, and sea-level changes. From the model, they were able to track changes in excess pore pressure beneath the Noto Peninsula before and during the earthquake swarm. They then compared this timeline of evolving pore pressure with their evolving picture of seismic velocity. (A toy version of this comparison is sketched at the end of this story.)

    “We had seismic velocity observations, and we had the model of excess pore pressure, and when we overlapped them, we saw they just fit extremely well,” Frank says.

    In particular, they found that when they included snowfall data, and especially extreme snowfall events, the fit between the model and observations was stronger than if they considered rainfall and other events alone. In other words, the ongoing earthquake swarm that Noto residents have been experiencing can be explained in part by seasonal precipitation, and particularly by heavy snowfall events.

    “We can see that the timing of these earthquakes lines up extremely well with multiple times where we see intense snowfall,” Frank says. “It’s well-correlated with earthquake activity. And we think there’s a physical link between the two.”

    The researchers suspect that heavy snowfall and similar extreme precipitation could play a role in earthquakes elsewhere, though they emphasize that the primary trigger will always originate underground. “When we first want to understand how earthquakes work, we look to plate tectonics, because that is and will always be the number one reason why an earthquake happens,” Frank says. “But what are the other things that could affect when and how an earthquake happens? That’s when you start to go to second-order controlling factors, and the climate is obviously one of those.”

    This research was supported, in part, by the National Science Foundation.
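
    The study’s comparison relied on field measurements and a calibrated hydromechanical model; the sketch below is a purely synthetic stand-in that shows the shape of the analysis: build a lagged pore-pressure series from a seasonal loading signal, derive a correlated seismic-velocity-change (dv/v) series, and quantify the overlap. The time constant, amplitudes, and noise levels are all invented.

```python
# Synthetic stand-in for the study's model/observation comparison
# (all constants invented): build a lagged pore-pressure series from a
# seasonal loading signal, derive a correlated seismic-velocity change
# (dv/v) series, then quantify how well the two overlap.
import numpy as np

rng = np.random.default_rng(0)
days = np.arange(365 * 3)  # three years of daily samples

# Seasonal surface load: precipitation peaks once a year (winter at
# day 0 in this toy calendar), clipped at zero, plus noise.
load = np.maximum(0.0, np.cos(2 * np.pi * days / 365)) + 0.1 * rng.random(days.size)

# Pore pressure responds to loading with a diffusion-like lag; simple
# exponential smoothing stands in for the hydromechanical model.
tau = 30.0  # assumed response time scale, in days
pressure = np.zeros_like(load)
for t in range(1, load.size):
    pressure[t] = pressure[t - 1] + (load[t] - pressure[t - 1]) / tau

# Heavier loading raises pore pressure and slows seismic waves,
# so dv/v moves opposite to pressure (plus observational noise).
dvv = -0.5 * pressure + 0.05 * rng.standard_normal(days.size)

# "Overlap" the two series: a strongly negative correlation here plays
# the role of the close fit the researchers saw in the real data.
r = np.corrcoef(pressure, dvv)[0, 1]
print(f"correlation between modeled pressure and dv/v: {r:.2f}")
```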