More stories

  • Two MIT films nominated for New England Emmy Awards

    Two films produced by MIT were honored with Emmy nominations by the National Academy of Television Arts & Sciences Boston/New England Chapter. Both “We Are the Forest” and “No Drop to Spare” illustrate international conversations the MIT community is having about the environment and climate change.

    “We Are the Forest,” produced by MIT Video Productions (MVP) at MIT Open Learning, was one of six nominees in the Education/Schools category. The documentary highlights the cultural and scientific exchange of the MIT Festival Jazz Ensemble, MIT Wind Ensemble, and MIT Vocal Jazz Ensemble in the Brazilian Amazon. The excursion depicted in the film was part of the ongoing work of Frederick Harris Jr., MIT director of wind and jazz ensembles and senior lecturer in music, to combine Brazilian music and environmental research.

    “No Drop to Spare,” created by the Department of Mechanical Engineering (MechE), was nominated in the Environment/Science and Video Essayist categories. The film, produced by John Freidah, MechE senior producer and creative lead, follows a team of researchers from the K. Lisa Yang Global Engineering and Research (GEAR) Center working in Kenya, Morocco, and Jordan to deploy affordable, user-driven smart irrigation technology.

    “We Are the Forest” tells the story of 80 MIT student musicians who traveled to Manaus, Brazil, in March 2023. Together with Indigenous Brazilian musicians and activists, the students played music, created instruments with found objects from the rainforest, and connected their musical practice to nature and culture. The trip and the documentary culminated with the concert “Hearing Amazônia: Art and Resistance.”

    “We have an amazing team who are excited to tell the stories of so many great things that happen at MIT,” says Clayton Hainsworth, director for MVP. “It’s a true pleasure when we get to partner with the Institute’s community on these video projects — from Fred [Harris], with his desire for outreach of the music curriculum, giving students new perspectives and getting beyond the lab; to students getting to experience the world and seeing how that affects their next steps as they go out and make an impact.”

    The documentary was produced by Hainsworth, directed by Jean Dunoyer, staff editor at MVP, and filmed by Myles Lowery, field production videographer at MVP. Hainsworth credits Dunoyer with refining the story’s main themes: the universality of music as a common human language, and the ways that Indigenous communities can teach and inform the rest of the globe about the environment and the challenges we are all facing.

    “The film highlights the reach of how MIT touches the world and, more importantly, how the world touches MIT,” says Hainsworth, adding that the work was generously supported by A. Neil Pappalardo ’64 and Jane Pappalardo.

    “No Drop to Spare” evoked a similar sentiment from Freidah. “What I liked about this story was the potential for great impact,” he says, discussing the MechE film’s production process. “It was global, it was being piloted in three different places in the world, with three different end users, and had three different applications. You sort of go in with an idea in mind of what the story might be, then things bubble up. In this story, as with so many stories, what rose to the top was the students and the impact they were having on the real world and end users.”

    Freidah has worked with Amos Winter SM ’05, PhD ’11, associate professor of mechanical engineering and MIT GEAR Center principal investigator, to highlight other high-impact global projects in the past, including producing a video in 2016. That film, “Water is Life,” explores the development of low-cost desalination systems in India.

    While the phrase “it’s an honor to be nominated” might seem clichéd, it endures because the sentiment almost always rings true. Although neither film triumphed at this year’s awards ceremony, Freidah says there’s much to be celebrated in the final product. “Seeing the effect this piece had, and how it highlighted our students, that’s the success story — but it’s always nice also to receive recognition from outside.”

    The 47th Boston/New England Emmy Awards Ceremony took place on June 8 at the Marriott Boston Copley Place. A list of nominees and winners can be found on the National Academy of Television Arts and Sciences Boston/New England Chapter website.

  • Startup aims to transform the power grid with superconducting transmission lines

    Last year in Woburn, Massachusetts, a power line was deployed across a 100-foot stretch of land. Passersby wouldn’t have found much interesting about the installation: The line was supported by standard utility poles, the likes of which most of us have driven by countless times. In fact, the familiarity of the sight is a key part of the technology’s promise.

    The lines are designed to transport five to 10 times the power of conventional transmission lines, using essentially the same footprint and voltage level. That will be key to helping them overcome the regulatory hurdles and community opposition that have made increasing transmission capacity nearly impossible across large swaths of the globe, particularly in America and Europe, where new power transmission systems play a vital role in the shift to renewable energy and the resilience of the grid.

    The lines are the product of years of work by the startup VEIR, which was co-founded by Tim Heidel ’05, SM ’06, SM ’09, PhD ’10. They make use of superconducting cables and a proprietary cooling system that will enable initial transmission capacity up to 400 megawatts and, in future versions, up to several gigawatts.

    “We can deploy much higher power levels at much lower voltage, and so we can deploy the same high power but with a footprint and visual impact that is far less intrusive, and therefore can overcome a lot of the public opposition as well as siting and permitting barriers,” Heidel says.

    VEIR’s solution comes at a time when more than 10,000 renewable energy projects at various stages of development are seeking permission to connect to U.S. grids. The White House has said the U.S. must more than double existing regional transmission capacity in order to reach 2035 decarbonization goals. All of this comes as electricity demand is skyrocketing amid the increasing use of data centers and AI, and the electrification of everything from passenger vehicles to home heating systems.

    Despite those trends, building high-power transmission lines remains stubbornly difficult.

    “Building high-power transmission infrastructure can take a decade or more, and there’s been quite a few examples of projects that folks have had to abandon because they realize that there’s just so much opposition, or there’s too much complexity to pull it off cost effectively,” Heidel says. “We can drop down in voltage but carry the same amount of power because we can build systems that operate at much higher current levels, and that’s how our lines are able to melt into the background and avoid the same opposition.”

    Heidel says VEIR has built a pipeline of interested customers including utilities, data center operators, industrial companies, and renewable energy developers. VEIR is aiming to complete its first commercial-scale pilot carrying high power in 2026.

    A career in energy

    Over more than a decade at MIT, Heidel went from learning the fundamentals of electrical engineering to studying the electric grid and the power sector more broadly. That journey included earning a bachelor’s, master’s, and PhD from MIT’s Department of Electrical Engineering and Computer Science, as well as a master’s in MIT’s Technology and Policy Program, which he earned while working toward his PhD.

    “I got the energy bug and started to focus exclusively on energy and climate in graduate school,” Heidel says.

    Following his PhD, Heidel was named research director of MIT’s Future of the Electric Grid study, which was completed in 2011.

    “That was a fantastic opportunity at the outset of my career to survey the entire landscape and understand challenges facing the power grid and the power sector more broadly,” Heidel says. “It gave me a good foundation for understanding the grid, how it works, who’s involved, how decisions get made, how expansion works, and it looked out over the next 30 years.”

    After leaving MIT, Heidel worked at the Department of Energy’s Advanced Research Projects Agency-Energy (ARPA-E) and then at Bill Gates’ Breakthrough Energy Ventures (BEV) investment firm, where he continued studying transmission.

    “Just about every single decarbonization scenario and study that’s been published in the last two decades concludes that to achieve aggressive greenhouse gas emissions reductions, we’re going to have to double or triple the scale of power grids around the world,” Heidel says. “But when we looked at the data on how fast grids were being expanded, the ease with which transmission lines could be built, the cost of building new transmission, just about every indicator was heading in the wrong direction. Transmission was getting more expensive over time and taking longer to build. We desperately need to find a new solution.”

    Unlike traditional transmission lines made from steel and aluminum, VEIR’s lines leverage decades of progress in the development of high-temperature superconducting tapes and other materials. Some of that progress has been driven by the nuclear fusion industry, which incorporates superconducting materials into some of its reactor designs.

    But the core innovation at VEIR is the cooling system. VEIR co-founder and advisor Steve Ashworth developed the rough idea for the cooling system more than 15 years ago at Los Alamos National Laboratory as part of a larger Department of Energy-funded research project. When the project was shut down, the idea was largely forgotten. Heidel and others at Breakthrough Energy Ventures became aware of the innovation in 2019 while researching transmission. Today VEIR’s system is passively cooled with nitrogen, which runs through a vacuum-insulated pipe that surrounds a superconducting cable. Heat exchange units are also used on some transmission towers.

    Heidel says transmission lines designed to carry that much power are typically far bigger than VEIR’s design, and other attempts at shrinking the footprint of high-power lines were limited to short distances underground.

    “High power requires high voltage, and high voltage requires tall towers and wide right of ways, and those tall towers and those wide right of ways are deeply unpopular,” Heidel says. “That is a universal truth across just about the entire world.”

    Moving power around the world

    VEIR’s first alternating current (AC) overhead product line is capable of transmission capacities up to 400 megawatts and voltages of up to 69 kilovolts, and the company plans to scale to higher-voltage, higher-power products in the future, including direct current (DC) lines. VEIR will sell its equipment to the companies installing transmission lines, with a primary focus on the U.S. market. In the longer term, Heidel believes VEIR’s technology will be needed around the globe to meet rising electricity demand and connect new renewable energy projects.
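    Some back-of-the-envelope three-phase arithmetic (P = sqrt(3) × V × I) shows why superconducting cable is central to that pitch. The Python sketch below is an illustration built only on the figures quoted in this story; the 345-kilovolt comparison line and the unity power factor are assumptions, not VEIR specifications.

      import math

      P_WATTS = 400e6  # VEIR's initial capacity target: 400 MW

      def line_current_amps(v_line_kv):
          # Balanced three-phase AC at unity power factor: I = P / (sqrt(3) * V)
          return P_WATTS / (math.sqrt(3) * v_line_kv * 1e3)

      print(f"{line_current_amps(345):,.0f} A at 345 kV")  # ~669 A, a conventional high-voltage line
      print(f"{line_current_amps(69):,.0f} A at 69 kV")    # ~3,347 A, the low-voltage approach

    Pushing thousands of amps through conventional aluminum conductors would incur enormous resistive (I-squared-R) losses; superconductors are what make the low-voltage, high-current route plausible.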

  • Making climate models relevant for local decision-makers

    Climate models are a key technology in predicting the impacts of climate change. By running simulations of the Earth’s climate, scientists and policymakers can estimate conditions like sea level rise, flooding, and rising temperatures, and make decisions about how to respond appropriately. But current climate models struggle to provide this information quickly or affordably enough to be useful on smaller scales, such as the size of a city.

    Now, authors of a new open-access paper published in the Journal of Advances in Modeling Earth Systems have found a method to leverage machine learning to retain the benefits of current climate models while reducing the computational costs needed to run them. “It turns the traditional wisdom on its head,” says Sai Ravela, a principal research scientist in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS) who wrote the paper with EAPS postdoc Anamitra Saha.

    Traditional wisdom

    In climate modeling, downscaling is the process of using a global climate model with coarse resolution to generate finer details over smaller regions. Imagine a digital picture: A global model is a large picture of the world with a low number of pixels. To downscale, you zoom in on just the section of the photo you want to look at — for example, Boston. But because the original picture was low resolution, the new version is blurry; it doesn’t give enough detail to be particularly useful.

    “If you go from coarse resolution to fine resolution, you have to add information somehow,” explains Saha. Downscaling attempts to add that information back in by filling in the missing pixels. “That addition of information can happen two ways: Either it can come from theory, or it can come from data.”

    Conventional downscaling often involves using models built on physics (such as the process of air rising, cooling, and condensing, or the landscape of the area) and supplementing them with statistics drawn from historical observations. But this method is computationally taxing: It takes a lot of time and computing power to run, and it is also expensive.

    A little bit of both

    In their new paper, Saha and Ravela have figured out a way to add the data another way. They’ve employed a technique in machine learning called adversarial learning. It uses two machines: One generates data to fill in the missing pixels of the photo, while the other judges the sample by comparing it to actual data. If it deems the image fake, the first machine has to try again until it convinces the second machine. The end goal of the process is to create super-resolution data.

    Using machine learning techniques like adversarial learning is not a new idea in climate modeling; where it has struggled is in handling large amounts of basic physics, like conservation laws. The researchers discovered that simplifying the physics going in, and supplementing it with statistics from the historical data, was enough to generate the results they needed.

    “If you augment machine learning with some information from the statistics and simplified physics both, then suddenly, it’s magical,” says Ravela. He and Saha started by estimating extreme rainfall amounts, removing the more complex physics equations and focusing on water vapor and land topography. They then generated general rainfall patterns for mountainous Denver and flat Chicago alike, applying historical accounts to correct the output. “It’s giving us extremes, like the physics does, at a much lower cost. And it’s giving us similar speeds to statistics, but at much higher resolution.”

    Another unexpected benefit of the results was how little training data was needed. “The fact that only a little bit of physics and a little bit of statistics was enough to improve the performance of the ML [machine learning] model … was actually not obvious from the beginning,” says Saha. The model takes only a few hours to train and can produce results in minutes, an improvement over the months other models take to run.

    Quantifying risk quickly

    Being able to run the models quickly and often is a key requirement for stakeholders such as insurance companies and local policymakers. Ravela gives the example of Bangladesh: By seeing how extreme weather events will impact the country, decisions about which crops should be grown, or where populations should migrate, can be made while considering a very broad range of conditions and uncertainties as soon as possible.

    “We can’t wait months or years to be able to quantify this risk,” he says. “You need to look out way into the future and at a large number of uncertainties to be able to say what might be a good decision.”

    While the current model only looks at extreme precipitation, training it to examine other critical events, such as tropical storms, winds, and temperature, is the next step of the project. With a more robust model, Ravela hopes to apply it to other places like Boston and Puerto Rico as part of a Climate Grand Challenges project.

    “We’re very excited both by the methodology that we put together, as well as the potential applications that it could lead to,” he says.
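    For readers curious about the mechanics, here is a minimal Python sketch of the adversarial setup described above. It is an illustration, not the authors’ code: the grid sizes, network architectures, and random stand-in data are all assumptions, and the paper’s actual method also folds in the simplified physics and historical statistics that this toy omits.

      # Toy adversarial downscaling: a generator upsamples a coarse field to a
      # finer grid; a discriminator learns to tell generated fine-scale fields
      # from real ones, and each trains against the other.
      import torch
      import torch.nn as nn

      COARSE, FINE, BATCH = 16, 64, 8   # illustrative grid sizes (4x downscaling)

      generator = nn.Sequential(         # coarse field in, fine field out
          nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
          nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
          nn.Conv2d(32, 1, 3, padding=1),
      )
      discriminator = nn.Sequential(     # fine field in, real/fake logit out
          nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
          nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
          nn.Flatten(), nn.Linear(64 * 16 * 16, 1),
      )

      bce = nn.BCEWithLogitsLoss()
      g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
      d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

      for step in range(100):            # random tensors stand in for real data
          coarse = torch.rand(BATCH, 1, COARSE, COARSE)   # coarse-model field
          real_fine = torch.rand(BATCH, 1, FINE, FINE)    # observed fine field

          # Discriminator step: score real fields as 1, generated fields as 0.
          fake_fine = generator(coarse).detach()
          d_loss = (bce(discriminator(real_fine), torch.ones(BATCH, 1))
                    + bce(discriminator(fake_fine), torch.zeros(BATCH, 1)))
          d_opt.zero_grad(); d_loss.backward(); d_opt.step()

          # Generator step: try to make the discriminator score its output as real.
          g_loss = bce(discriminator(generator(coarse)), torch.ones(BATCH, 1))
          g_opt.zero_grad(); g_loss.backward(); g_opt.step()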

  • Students research pathways for MIT to reach decarbonization goals

    A number of emerging technologies hold promise for helping organizations move away from fossil fuels and achieve deep decarbonization. The challenge is deciding which technologies to adopt, and when.

    MIT, which has a goal of eliminating direct campus emissions by 2050, must make such decisions sooner than most to achieve its mission. That was the challenge at the heart of the recently concluded class 4.S42 (Building Technology — Carbon Reduction Pathways for the MIT Campus).

    The class brought together undergraduate and graduate students from across the Institute to learn about different technologies and decide on the best path forward. It concluded with a final report as well as student presentations to members of MIT’s Climate Nucleus on May 9.

    “The mission of the class is to put together a cohesive document outlining how MIT can reach its goal of decarbonization by 2050,” says Morgan Johnson Quamina, an undergraduate in the Department of Civil and Environmental Engineering. “We’re evaluating how MIT can reach these goals on time, what sorts of technologies can help, and how quickly and aggressively we’ll have to move. The final report details a ton of scenarios for partial and full implementation of different technologies, outlines timelines for everything, and features recommendations.”

    The class was taught by professor of architecture Christoph Reinhart but included presentations by other faculty about low- and zero-carbon technology areas in their fields, including advanced nuclear reactors, deep geothermal energy, carbon capture, and more.

    The students’ work served as an extension of MIT’s Campus Decarbonization Working Group, which Reinhart co-chairs with Director of Sustainability Julie Newman. The group is charged with developing a technology roadmap for the campus to reach its goal of decarbonizing its energy systems. Reinhart says the class was a way to leverage the energy and creativity of students to accelerate his group’s work.

    “It’s very much focused on establishing a vision for what could happen at MIT,” Reinhart says. “We are trying to bring these technologies together so that we see how this [decarbonization process] would actually look on our campus.”

    A class with impact

    Throughout the semester, every Thursday from 9 a.m. to 12 p.m., around 20 students gathered to explore different decarbonization technology pathways. They also discussed energy policies, methods for evaluating risk, and future electric grid supply changes in New England.

    “I love that this work can have a real-world impact,” says Emile Germonpre, a master’s student in the Department of Nuclear Science and Engineering. “You can tell people aren’t thinking about grades or workload — I think people would’ve loved it even if the workload was doubled. Everyone is just intrinsically motivated to help solve this problem.”

    The classes typically began with an introduction to one of 10 different technologies. The introductions covered technical maturity, ease of implementation, costs, and how to model the technology’s impact on campus emissions. Students were then split into teams to evaluate each technology’s feasibility.

    “I’ve learned a lot about decarbonization and climate change,” says Johnson Quamina. “As an undergrad, I haven’t had many focused classes like this. But it was really beneficial to learn about some of these technologies I hadn’t even heard of before. It’s awesome to be contributing to the community like this.”

    As part of the class, students also developed a model that visualizes each intervention’s effect on emissions, allowing users to select interventions, or combinations of them, to see how they shape emissions trajectories.

    “We have a physics-based model that takes into account every building,” says Reinhart. “You can look at variants where we retrofit buildings, where we add rooftop photovoltaics, nuclear, carbon capture, and adopting different types of district underground heating systems. The point is you can start to see how fast we could do something like this and what the real game-changers are.”
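    As a rough illustration of the kind of calculation such a tool performs (this is a toy sketch, not the class’s physics-based model, and every number in it is invented), each selected intervention scales down the remaining campus emissions, and combinations compound:

      # Hypothetical baseline and reduction fractions, invented for illustration.
      BASELINE_KTCO2 = 200.0  # annual campus emissions, kilotons CO2

      INTERVENTIONS = {
          "building retrofits": 0.15,   # fraction of remaining emissions removed
          "rooftop solar":      0.05,
          "deep geothermal":    0.40,
          "nuclear battery":    0.30,
      }

      def emissions_after(selected):
          remaining = BASELINE_KTCO2
          for name in selected:
              remaining *= 1.0 - INTERVENTIONS[name]  # reductions compound
          return remaining

      print(emissions_after(["building retrofits", "rooftop solar"]))  # ~161.5
      print(emissions_after(list(INTERVENTIONS)))                      # ~67.8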
    The class also designed and conducted a preliminary survey, to be expanded in the fall, that captures the MIT community’s attitudes toward the different technologies. Preliminary results were shared with the Climate Nucleus during the students’ May 9 presentations.

    “I think it’s this unique and wonderful intersection of the forward-looking and innovative nature of academia with the real-world impact and specificity that you’d typically only find in industry,” Germonpre says. “It lets you work on a tangible project, the MIT campus, while exploring technologies that companies today find too risky to be the first mover on.”

    From MIT’s campus to the world

    The students recommended MIT form a building energy team to audit and retrofit all campus buildings. They also suggested MIT commission a comprehensive geological feasibility survey to support planning regarding shallow and deep borehole fields for harvesting underground heat. A third recommendation was to communicate with the MIT community, as well as with regulators and policymakers in the area, about the deployment of nuclear batteries and deep geothermal boreholes on campus.

    The students’ modeling tool can also help members of the working group explore various decarbonization pathways. For instance, installing rooftop photovoltaics now would effectively reduce emissions, but installing them a few decades from now, when the regional electricity grid is expected to rely far less on fossil fuels anyway, would have a much smaller impact.

    “When you have students working together, the recommendations are a little less filtered, which I think is a good thing,” Reinhart says. “I think there’s a real sense of urgency in the class. For certain choices, we have to basically act now.”

    Reinhart plans to do more activities related to the working group and the class’s recommendations in the fall, and he says he’s currently engaged with the Massachusetts Governor’s Office to explore doing something similar for the state. Students say they plan to keep working on the survey this summer and continue studying their technology areas. In the longer term, they believe the experience will help them in their careers.

    “Decarbonization is really important, and understanding how we can implement new technologies on campuses or in buildings provides me with a more well-rounded vision for what I could design in my career,” says Johnson Quamina, who wants to work as a structural or environmental engineer but says the class has also inspired her to consider careers in energy.

    The students’ findings also have implications beyond the MIT campus. In accordance with MIT’s 2015 climate plan, which committed to using the campus community as a “test bed for change,” the students’ recommendations also hold value for organizations around the world.

    “The mission is definitely broader than just MIT,” Germonpre says. “We don’t just want to solve MIT’s problem. We’ve dismissed technologies that were too specific to MIT. The goal is for MIT to lead by example and help certain technologies mature so that we can accelerate their impact.”

  • Improving working environments amid environmental distress

    In less than a decade, MIT economist Namrata Kala has produced a corpus of work too rich, inventive, and diverse to be easily summarized. Let’s try anyway.

    Kala, an associate professor at the MIT Sloan School of Management, often studies environmental problems and their effects on workers and firms, with implications for government policy, corporate managers, and anyone concerned about climate change. She also examines the effects of innovation on productivity, from farms to factories, and scrutinizes firm organization in light of such major changes.

    Kala has published papers on topics including the long-term effects of climate change on agriculture in Africa and India; the impact of mechanization on farmers’ incomes; the extent to which linguistic differences create barriers to trade; and even the impact of LED light bulbs on factory productivity. Characteristically, Kala looks at issues of global scale and pinpoints their effects at the level of individuals.

    Consider one paper Kala and two colleagues published a couple of years ago, about the effects of air pollution on garment factory workers in India. The scholars examined patterns of particulate-matter pollution and linked them to detailed, worker-level data about how productive workers were along the production line. The study shows that air pollution damages sewing productivity, and that some managers (not all) are adept at recognizing which workers are most affected by it.

    What emerges from much of this work is a real-time picture of human adaptation in a time of environmental distress. “I feel like I’m part of a long tradition of trying to understand resilience and adaptation, but now in the face of a changing world,” Kala says. “Understanding interventions that are good for resilience while the world is changing is what motivates me, along with the fact that the vast majority of the world is vulnerable to events that may impact economic growth.”

    For her research and teaching, Kala was awarded tenure at MIT last year.

    Joining academia, then staying in it

    Kala, who grew up in Punjab, India, was long mindful of big issues pertaining to society, the economy, and the environment. “Growing up in India, it’s very difficult not to be interested in some of the questions that are important for development and environmental economics,” Kala says.

    However, Kala did not expect that interest to lead her into academia. She attended Delhi University as an undergraduate, earning her degree with honors in economics while expecting to find a job in the area of development. To help facilitate that, Kala enrolled in a one-year master’s program at Yale University in international and development economics.

    Before that year was out, Kala had a new realization: Studying development problems was integral to solving them. Academia is not on the sidelines when it comes to development; it helps generate crucial knowledge to foster better and smarter growth policies.

    “I came to Yale for a one-year master’s because I didn’t know if I wanted to be in a university for another two years,” Kala says. “I wanted to work on problems in the world. And that’s when I became enthralled with research. It was this wonderful year where I could study anything, and it completely changed my perspective on what I could do next. So I did the PhD, and that’s how I became an economist.”

    After receiving her PhD in 2015, Kala spent the next two years supported by a Prize Fellowship in Economics, History, and Politics at Harvard University and a postdoctoral fellowship at MIT’s own Abdul Latif Jameel Poverty Action Lab (J-PAL). In 2017, she joined the MIT faculty on a full-time basis, and she has remained at the Institute since then.

    The source material for Kala’s studies varies widely, though in all cases she is looking for ways to construct well-defined empirical studies tackling major questions, with key issues often revealed in policy or firm details. “I find reading stuff about policy reform strangely interesting,” she quips.

    Development, but with environmental quality

    Indeed, sometimes the spark for Kala’s studies comes from her own broad knowledge of past policy reforms, combined with an ability to locate data that reveals their effects. For instance, one working paper Kala and a colleague recently completed looks at an Indian policy that moved industrial firms out of Delhi in order to help solve the city’s pollution problems, randomly relocating companies to an industrial belt around the city. But what effect did this have on the firms? After examining the records of 20,000 companies, the researchers found these firms’ survival rate was 8 percent to 20 percent lower than it would have been had the policy clustered them more efficiently.

    That finding suggests how related environmental policies might be designed in the future. “This environmental policy was important in that it improved air quality in Delhi, but there’s a way to do that which also reduces the cost on firms,” Kala says.

    Kala expects India to be the locus of many, though hardly all, of her future studies. The country provides a wide range of opportunities for research. “India currently has both the largest number of poor people in the world as well as 21 of the 30 most polluted cities in the world,” Kala says. “Clearly, the tradeoff between development and environmental quality is extremely salient, and we need progress in understanding industrial policies that are at least environmentally neutral or improving environmental quality.”

    Kala will continue to look for new ways to take pressing, large-scale issues and study their effects in daily life. But the fact that her work ranges so widely is not just due to the places she studies; it is also because of the place she studies them from. MIT, she believes, has provided her with an environment of its own — one that, in her case, enhances her own productivity.

    “One thing that helps a lot is having colleagues and co-authors to bounce ideas off of,” Kala says. “Sloan is the heart of so much interdisciplinary work. That is one big reason why I’ve had a broad set of interests and continue to work on many things.”

    “At Sloan,” she adds, “there are people doing fascinating things that I’m happy to listen to, as well as people in different disciplines working on related things who have a perspective I find extremely enriching. There are excellent economists, but I also go into seminars about work or productivity or the environment and come away with a perspective I don’t think I could have come up with myself.”

  • Reducing carbon emissions from long-haul trucks

    People around the world rely on trucks to deliver the goods they need, and so-called long-haul trucks play a critical role in those supply chains. In the United States, long-haul trucks moved 71 percent of all freight in 2022. But those long-haul trucks are heavy polluters, especially of the carbon emissions that threaten the global climate. According to U.S. Environmental Protection Agency estimates, in 2022 more than 3 percent of all carbon dioxide (CO2) emissions came from long-haul trucks.

    The problem is that long-haul trucks run almost exclusively on diesel fuel, and burning diesel releases high levels of CO2 and other carbon emissions. Global demand for freight transport is projected to as much as double by 2050, so it’s critical to find another source of energy that will meet the needs of long-haul trucks while also reducing their carbon emissions. And conversion to the new fuel must not be costly. “Trucks are an indispensable part of the modern supply chain, and any increase in the cost of trucking will be felt universally,” notes William H. Green, the Hoyt Hottel Professor in Chemical Engineering and director of the MIT Energy Initiative.

    For the past year, Green and his research team have been seeking a low-cost, cleaner alternative to diesel. Finding a replacement is difficult because diesel meets the needs of the trucking industry so well. For one thing, diesel has a high energy density — that is, energy content per pound of fuel. There’s a legal limit on the total weight of a truck and its contents, so using an energy source with a lower weight allows the truck to carry more payload — an important consideration, given the low profit margin of the freight industry. In addition, diesel fuel is readily available at retail refueling stations across the country — a critical resource for drivers, who may travel 600 miles in a day and sleep in their truck rather than returning to their home depot. Finally, diesel fuel is a liquid, so it’s easy to distribute to refueling stations and then pump into trucks.

    Past studies have examined numerous alternative technology options for powering long-haul trucks, but no clear winner has emerged. Now, Green and his team have evaluated the available options based on consistent and realistic assumptions about the technologies involved and the typical operation of a long-haul truck, and assuming no subsidies to tip the cost balance. Their in-depth analysis of converting long-haul trucks to battery electric — summarized below — found a high cost and negligible emissions gains in the near term. Studies of methanol and other liquid fuels from biomass are ongoing, but already a major concern is whether the world can plant and harvest enough biomass for biofuels without destroying the ecosystem. An analysis of hydrogen — also summarized below — highlights specific challenges with using that clean-burning fuel, which is a gas at normal temperatures.

    Finally, the team identified an approach that could make hydrogen a promising, low-cost option for long-haul trucks. And, says Green, “it’s an option that most people are probably unaware of.” It involves a novel way of using materials that can pick up hydrogen, store it, and then release it when and where it’s needed to serve as a clean-burning fuel.

    Defining the challenge: A realistic drive cycle, plus diesel values to beat

    The MIT researchers believe that the lack of consensus on the best way to clean up long-haul trucking may have a simple explanation: Different analyses are based on different assumptions about the driving behavior of long-haul trucks. Indeed, some of them don’t accurately represent actual long-haul operations. So the first task for the MIT team was to define a representative — and realistic — “drive cycle” for actual long-haul truck operations in the United States. Then the MIT researchers — and researchers elsewhere — can assess potential replacement fuels and engines based on a consistent set of assumptions in modeling and simulation analyses.

    To define the drive cycle for long-haul operations, the MIT team used a systematic approach to analyze many hours of real-world driving data covering 58,000 miles. They examined 10 features and identified three — daily range, vehicle speed, and road grade — that have the greatest impact on energy demand and thus on fuel consumption and carbon emissions. The representative drive cycle that emerged covers a distance of 600 miles, an average vehicle speed of 55 miles per hour, and a road grade ranging from negative 6 percent to positive 6 percent.

    The next step was to generate key values for the performance of the conventional diesel “powertrain,” that is, all the components involved in creating power in the engine and delivering it to the wheels on the ground. Based on their defined drive cycle, the researchers simulated the performance of a conventional diesel truck, generating “benchmarks” for fuel consumption, CO2 emissions, cost, and other performance parameters. Now they could perform parallel simulations — based on the same drive-cycle assumptions — of possible replacement fuels and powertrains to see how the cost, carbon emissions, and other performance parameters would compare to the diesel benchmarks.

    The battery electric option

    When considering how to decarbonize long-haul trucks, a natural first thought is battery power. After all, battery electric cars and pickup trucks are proving highly successful. Why not switch to battery electric long-haul trucks? “Again, the literature is very divided, with some studies saying that this is the best idea ever, and other studies saying that this makes no sense,” says Sayandeep Biswas, a graduate student in chemical engineering.

    To assess the battery electric option, the MIT researchers used a physics-based vehicle model plus well-documented estimates for the efficiencies of key components such as the battery pack, generators, motor, and so on. Assuming the previously described drive cycle, they determined operating parameters, including how much power the battery-electric system needs. From there they could calculate the size and weight of the battery required to satisfy the power needs of the battery electric truck.

    The outcome was disheartening. Providing enough energy to travel 600 miles without recharging would require a 2-megawatt-hour battery. “That’s a lot,” notes Kariana Moreno Sader, a graduate student in chemical engineering. “It’s the same as what two U.S. households consume per month on average.” And the weight of such a battery would significantly reduce the amount of payload that could be carried. An empty diesel truck typically weighs 20,000 pounds. With a legal limit of 80,000 pounds, there’s room for 60,000 pounds of payload. The 2 MWh battery would weigh roughly 27,000 pounds — significantly reducing the allowable capacity for carrying payload.

    Accounting for that “payload penalty,” the researchers calculated that roughly four electric trucks would be required to replace every three of today’s diesel-powered trucks. Furthermore, each added truck would require an additional driver. The impact on operating expenses would be significant.
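    The payload arithmetic is worth spelling out. The weights below are the approximate figures quoted in this article; the three-quarters payload ratio is the one implied by the researchers’ four-trucks-for-three conclusion, since dropping the diesel powertrain offsets part of the pack’s weight.

      # Approximate figures quoted in the article; arithmetic for illustration.
      GVW_LIMIT_LB    = 80_000  # U.S. legal limit on truck plus contents
      EMPTY_DIESEL_LB = 20_000  # typical empty diesel truck
      BATTERY_LB      = 27_000  # ~2 MWh pack sized for the 600-mile drive cycle

      diesel_payload = GVW_LIMIT_LB - EMPTY_DIESEL_LB  # 60,000 lb of payload
      naive_payload = diesel_payload - BATTERY_LB      # 33,000 lb if the pack were pure added weight
      # "Roughly four electric trucks to replace every three diesel trucks"
      # implies each electric truck hauls about 3/4 of a diesel payload:
      electric_payload = diesel_payload * 3 / 4        # ~45,000 lb
      print(f"payload penalty per truck: {diesel_payload - electric_payload:,.0f} lb")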
    Analyzing the emissions reductions that might result from shifting to battery electric long-haul trucks also brought disappointing results. One might assume that using electricity would eliminate CO2 emissions. But when the researchers included the emissions associated with making that electricity, that wasn’t true.

    “Battery electric trucks are only as clean as the electricity used to charge them,” notes Moreno Sader. Most of the time, drivers of long-haul trucks will be charging from national grids rather than dedicated renewable energy plants. According to Energy Information Administration statistics, fossil fuels make up more than 60 percent of the current U.S. power grid, so electric trucks would still be responsible for significant levels of carbon emissions. Manufacturing batteries for the trucks would generate additional CO2 emissions.

    Building the charging infrastructure would require massive upfront capital investment, as would upgrading the existing grid to reliably meet additional energy demand from the long-haul sector. Accomplishing those changes would be costly and time-consuming, which raises further concern about electrification as a means of decarbonizing long-haul freight.

    In short, switching today’s long-haul diesel trucks to battery electric power would bring major increases in costs for the freight industry and negligible carbon emissions benefits in the near term. Analyses assuming various types of batteries as well as other drive cycles produced comparable results.

    However, the researchers are optimistic about where the grid is going in the future. “In the long term, say by around 2050, emissions from the grid are projected to be less than half what they are now,” says Moreno Sader. “When we do our calculations based on that prediction, we find that emissions from battery electric trucks would be around 40 percent lower than our calculated emissions based on today’s grid.”

    For Moreno Sader, the goal of the MIT research is to help “guide the sector on what would be the best option.” With that goal in mind, she and her colleagues are now examining the battery electric option under different scenarios — for example, assuming battery swapping (a depleted battery isn’t recharged but replaced by a fully charged one), short-haul trucking, and other applications that might produce a more cost-competitive outcome, even for the near term.

    A promising option: hydrogen

    As the world looks to move away from fossil fuels for all uses, much attention is focusing on hydrogen. Could hydrogen be a good alternative for today’s diesel-burning long-haul trucks? To find out, the MIT team performed a detailed analysis of the hydrogen option. “We thought that hydrogen would solve a lot of the problems we had with battery electric,” says Biswas. It doesn’t have associated CO2 emissions. Its energy density is far higher, so it doesn’t create the weight problem posed by heavy batteries. In addition, existing compression technology can get enough hydrogen fuel into a regular-sized tank to cover the needed distance and range. “You can actually give drivers the range they want,” he says. “There’s no issue with ‘range anxiety.’”

    But while using hydrogen for long-haul trucking would reduce carbon emissions, it would cost far more than diesel. Based on their detailed analysis of hydrogen, the researchers concluded that the main source of incurred cost is in transporting it. Hydrogen can be made in a chemical facility, but then it needs to be distributed to refueling stations across the country. Conventionally, there have been two main ways of transporting hydrogen: as a compressed gas and as a cryogenic liquid. As Biswas notes, the former is “super high pressure,” and the latter is “super cold.” The researchers’ calculations show that as much as 80 percent of the cost of delivered hydrogen is due to transportation and refueling, plus there’s the need to build dedicated refueling stations that can meet new environmental and safety standards for handling hydrogen as a compressed gas or a cryogenic liquid.

    Having dismissed the conventional options for shipping hydrogen, they turned to a less-common approach: transporting hydrogen using “liquid organic hydrogen carriers” (LOHCs), special organic (carbon-containing) chemical compounds that can under certain conditions absorb hydrogen atoms and under other conditions release them.

    LOHCs are in use today to deliver small amounts of hydrogen for commercial use. Here’s how the process works: In a chemical plant, the carrier compound is brought into contact with hydrogen in the presence of a catalyst under elevated temperature and pressure, and the compound picks up the hydrogen. The “hydrogen-loaded” compound — still a liquid — is then transported under atmospheric conditions. When the hydrogen is needed, the compound is again exposed to a temperature increase and a different catalyst, and the hydrogen is released.

    LOHCs thus appear to be ideal hydrogen carriers for long-haul trucking. They’re liquid, so they can easily be delivered to existing refueling stations, where the hydrogen would be released; and they contain at least as much energy per gallon as hydrogen in a cryogenic liquid or compressed gas form. However, a detailed analysis of using hydrogen carriers showed that the approach would decrease emissions, but at a considerable cost.

    The problem begins with the “dehydrogenation” step at the retail station. Releasing the hydrogen from the chemical carrier requires heat, which is generated by burning some of the hydrogen being carried by the LOHC. The researchers calculate that getting the needed heat takes 36 percent of that hydrogen. (In theory, the process would take only 27 percent — but in reality, that efficiency won’t be achieved.) So out of every 100 units of starting hydrogen, 36 units are now gone.

    But that’s not all. The hydrogen that comes out is at near-ambient pressure, so the facility dispensing the hydrogen will need to compress it — a process that the team calculates will use up 20 to 30 percent of the starting hydrogen. Because of the needed heat and compression, less than half of the starting hydrogen is left to be delivered to the truck — and as a result, the hydrogen fuel becomes twice as expensive.
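    Tallying those losses makes the problem concrete. The 36 percent heat figure and the 20-to-30-percent compression figure are the team’s numbers as quoted above; the arithmetic below is an illustration.

      start = 100.0                    # units of hydrogen arriving in the LOHC
      after_heat = start - 36.0        # 36% burned to heat the dehydrogenation step
      for comp_frac in (0.20, 0.30):   # compression consumes 20-30% of the start
          delivered = after_heat - comp_frac * start
          print(f"{delivered:.0f} units delivered -> cost per unit roughly {start / delivered:.1f}x")

    Both cases leave less than half of the starting hydrogen to sell, which is the roughly twofold cost increase the researchers describe.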
    The bottom line is that the technology works, but “when it comes to really beating diesel, the economics don’t work. It’s quite a bit more expensive,” says Biswas. In addition, the refueling stations would require expensive compressors and auxiliary units such as cooling systems. The capital investment and the operating and maintenance costs together imply that the market penetration of hydrogen refueling stations will be slow.

    A better strategy: onboard release of hydrogen from LOHCs

    Given the potential benefits of LOHCs, the researchers focused on how to deal with both the heat needed to release the hydrogen and the energy needed to compress it. “That’s when we had the idea,” says Biswas. “Instead of doing the dehydrogenation [hydrogen release] at the refueling station and then loading the truck with hydrogen, why don’t we just take the LOHC and load that onto the truck?” Like diesel, LOHC is a liquid, so it’s easily transported and pumped into trucks at existing refueling stations. “We’ll then make hydrogen as it’s needed based on the power demands of the truck — and we can capture waste heat from the engine exhaust and use it to power the dehydrogenation process,” says Biswas.

    In their proposed plan, hydrogen-loaded LOHC is created at a chemical “hydrogenation” plant and then delivered to a retail refueling station, where it’s pumped into a long-haul truck. Onboard the truck, the loaded LOHC flows into the fuel-storage tank. From there it moves to the “dehydrogenation unit” — the reactor where heat and a catalyst together promote chemical reactions that separate the hydrogen from the LOHC. The hydrogen is sent to the powertrain, where it burns, producing energy that propels the truck forward.

    Hot exhaust from the powertrain goes to a “heat-integration unit,” where its waste heat energy is captured and returned to the reactor to help drive the reaction that releases hydrogen from the loaded LOHC. The unloaded LOHC is pumped back into the fuel-storage tank, where it’s kept in a separate compartment to keep it from mixing with the loaded LOHC. From there, it’s pumped back out at the retail refueling station and then transported back to the hydrogenation plant to be loaded with more hydrogen.

    Switching to onboard dehydrogenation brings down costs by eliminating the need for extra hydrogen compression and by using waste heat in the engine exhaust to drive the hydrogen-release process. So how does the proposed strategy compare to diesel? Based on a detailed analysis, the researchers determined that using their strategy would be 18 percent more expensive than using diesel, and emissions would drop by 71 percent.

    But those results need some clarification. The 18 percent cost premium of using LOHC with onboard hydrogen release is based on the price of diesel fuel in 2020. In spring of 2023, the price was about 30 percent higher. Assuming the 2023 diesel price, the LOHC option is actually cheaper than using diesel.

    Both the cost and emissions outcomes are affected by another assumption: the use of “blue hydrogen,” which is hydrogen produced from natural gas with carbon capture and storage. Another option is to assume the use of “green hydrogen,” which is hydrogen produced using electricity generated from renewable sources, such as wind and solar. Green hydrogen is much more expensive than blue hydrogen, so the costs would increase dramatically. If in the future the price of green hydrogen drops, the researchers’ proposed plan would shift to green hydrogen — and then the decline in emissions would no longer be 71 percent but rather close to 100 percent. There would be almost no emissions associated with the researchers’ proposed plan for using LOHCs with onboard hydrogen release.
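    Normalizing those prices makes the comparison easy to check. The 18 percent premium and the roughly 30 percent diesel price rise are the article’s figures; the normalization is an illustration.

      diesel_2020 = 1.00                  # per-mile diesel cost, normalized to 2020
      lohc        = 1.18 * diesel_2020    # LOHC with onboard release: 18% premium
      diesel_2023 = 1.30 * diesel_2020    # spring-2023 diesel, ~30% above 2020

      print(f"LOHC vs. 2020 diesel: {lohc / diesel_2020:.2f}x")  # 1.18x, more expensive
      print(f"LOHC vs. 2023 diesel: {lohc / diesel_2023:.2f}x")  # 0.91x, cheaper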
    Comparing the options on cost and emissions

    To compare the options, Moreno Sader prepared bar charts showing the per-mile cost of shipping by truck in the United States and the CO2 emissions that result using each of the fuels and approaches discussed above: diesel fuel, battery electric, hydrogen as a cryogenic liquid or compressed gas, and LOHC with onboard hydrogen release. The LOHC strategy with onboard dehydrogenation looked promising on both the cost and the emissions charts. In addition to such quantitative measures, the researchers believe that their strategy addresses two other, less-obvious challenges in finding a less-polluting fuel for long-haul trucks.

    First, the introduction of the new fuel and the trucks that use it must not disrupt the current freight-delivery setup. “You have to keep the old trucks running while you’re introducing the new ones,” notes Green. “You cannot have even a day when the trucks aren’t running, because it’d be like the end of the economy. Your supermarket shelves would all be empty; your factories wouldn’t be able to run.” The researchers’ plan would be completely compatible with the existing diesel supply infrastructure and would require relatively minor retrofits to today’s long-haul trucks, so the current supply chains would continue to operate while the new fuel and retrofitted trucks are introduced.

    Second, the strategy has the potential to be adopted globally. Long-haul trucking is important in other parts of the world, and Moreno Sader thinks that “making this approach a reality is going to have a lot of impact, not only in the United States but also in other countries,” including her own country of origin, Colombia. “This is something I think about all the time.” The approach is compatible with the current diesel infrastructure, so the only requirement for adoption is to build the chemical hydrogenation plant. “And I think the capital expenditure related to that will be less than the cost of building a new fuel-supply infrastructure throughout the country,” says Moreno Sader.

    Testing in the lab

    “We’ve done a lot of simulations and calculations to show that this is a great idea,” notes Biswas. “But there’s only so far that math can go to convince people.” The next step is to demonstrate their concept in the lab. To that end, the researchers are now assembling all the core components of the onboard hydrogen-release reactor as well as the heat-integration unit that’s key to transferring heat from the engine exhaust to the hydrogen-release reactor. They estimate that this spring they’ll be ready to demonstrate their ability to release hydrogen and confirm the rate at which it’s formed. And — guided by their modeling work — they’ll be able to fine-tune critical components for maximum efficiency and best performance.

    The next step will be to add an appropriate engine, specially equipped with sensors to provide the critical readings they need to optimize the performance of all their core components together.
    By the end of 2024, the researchers hope to achieve their goal: the first experimental demonstration of a power-dense, robust onboard hydrogen-release system with highly efficient heat integration. In the meantime, they believe that results from their work to date should help spread the word, bringing their novel approach to the attention of other researchers and experts in the trucking industry who are now searching for ways to decarbonize long-haul trucking.

    Financial support for development of the representative drive cycle and the diesel benchmarks, as well as the analysis of the battery electric option, was provided by the MIT Mobility Systems Center of the MIT Energy Initiative. Analysis of LOHC-powered trucks with onboard dehydrogenation was supported by the MIT Climate and Sustainability Consortium. Sayandeep Biswas is supported by a fellowship from the Martin Family Society of Fellows for Sustainability, and Kariana Moreno Sader received fellowship funding from MathWorks through the MIT School of Science.

  • Microscopic defects in ice influence how massive glaciers flow, study shows

    As they creep and calve into the sea, melting glaciers and ice sheets are raising global water levels at unprecedented rates. To predict and prepare for future sea-level rise, scientists need a better understanding of how fast glaciers melt and what influences their flow.

    Now, a study by MIT scientists offers a new picture of glacier flow, based on microscopic deformation in the ice. The results show that a glacier’s flow depends strongly on how microscopic defects move through the ice. The researchers found they could estimate a glacier’s flow based on whether the ice is prone to microscopic defects of one kind versus another. They used this relationship between micro- and macro-scale deformation to develop a new model for how glaciers flow. With the new model, they mapped the flow of ice in locations across the Antarctic Ice Sheet.

    Contrary to conventional wisdom, they found, the ice sheet is not a monolith but instead is more varied in where and how it flows in response to warming-driven stresses. The study “dramatically alters the climate conditions under which marine ice sheets may become unstable and drive rapid rates of sea-level rise,” the researchers write in their paper.

    “This study really shows the effect of microscale processes on macroscale behavior,” says Meghana Ranganathan PhD ’22, who led the study as a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS) and is now a postdoc at Georgia Tech. “These mechanisms happen at the scale of water molecules and ultimately can affect the stability of the West Antarctic Ice Sheet.”

    “Broadly speaking, glaciers are accelerating, and there are a lot of variants around that,” adds co-author and EAPS Associate Professor Brent Minchew. “This is the first study that takes a step from the laboratory to the ice sheets and starts evaluating what the stability of ice is in the natural environment. That will ultimately feed into our understanding of the probability of catastrophic sea-level rise.”

    Ranganathan and Minchew’s study appears this week in the Proceedings of the National Academy of Sciences.

    Micro flow

    Glacier flow describes the movement of ice from the peak of a glacier, or the center of an ice sheet, down to the edges, where the ice then breaks off and melts into the ocean — a normally slow process that contributes over time to raising the world’s average sea level. In recent years, the oceans have risen at unprecedented rates, driven by global warming and the accelerated melting of glaciers and ice sheets. While the loss of polar ice is known to be a major contributor to sea-level rise, it is also the biggest uncertainty when it comes to making predictions.

    “Part of it’s a scaling problem,” Ranganathan explains. “A lot of the fundamental mechanisms that cause ice to flow happen at a really small scale that we can’t see. We wanted to pin down exactly what these microphysical processes are that govern ice flow, which hasn’t been represented in models of sea-level change.”

    The team’s new study builds on previous experiments from the early 2000s by geologists at the University of Minnesota, who studied how small chips of ice deform when physically stressed and compressed. Their work revealed two microscopic mechanisms by which ice can flow: “dislocation creep,” where molecule-sized cracks migrate through the ice, and “grain boundary sliding,” where individual ice crystals slide against each other, causing the boundary between them to move through the ice.

    The geologists found that ice’s sensitivity to stress, or how likely it is to flow, depends on which of the two mechanisms is dominant. Specifically, ice is more sensitive to stress when microscopic defects occur via dislocation creep rather than grain boundary sliding. Ranganathan and Minchew realized that those findings at the microscopic level could redefine how ice flows at much larger, glacial scales.

    “Current models for sea-level rise assume a single value for the sensitivity of ice to stress and hold this value constant across an entire ice sheet,” Ranganathan explains. “What these experiments showed was that actually, there’s quite a bit of variability in ice sensitivity, due to which of these mechanisms is at play.”

    A mapping match

    For their new study, the MIT team took insights from the previous experiments and developed a model to estimate an icy region’s sensitivity to stress, which directly relates to how likely that ice is to flow. The model takes in information such as the ambient temperature, the average size of ice crystals, and the estimated mass of ice in the region, and calculates how much the ice is deforming by dislocation creep versus grain boundary sliding. Depending on which of the two mechanisms is dominant, the model then estimates the region’s sensitivity to stress.

    The scientists fed the model actual observations from various locations across the Antarctic Ice Sheet, where others had previously recorded data such as the local height of ice, the size of ice crystals, and the ambient temperature. Based on the model’s estimates, the team generated a map of ice sensitivity to stress across the Antarctic Ice Sheet. When they compared this map to satellite and field measurements taken of the ice sheet over time, they observed a close match, suggesting that the model could be used to accurately predict how glaciers and ice sheets will flow in the future.

    “As climate change starts to thin glaciers, that could affect the sensitivity of ice to stress,” Ranganathan says. “The instabilities that we expect in Antarctica could be very different, and we can now capture those differences, using this model.”
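    To make the micro-to-macro link concrete, here is an illustrative composite flow law of the general kind such a model builds on. This is a sketch, not the paper’s formulation: the stress exponents (roughly 4 for dislocation creep and roughly 1.8 for grain boundary sliding) are values commonly reported in the laboratory literature, and the prefactors are arbitrary, chosen only to show how the dominant mechanism, and with it the ice’s effective stress sensitivity, can flip from one regime to the other.

      # Total strain rate as the sum of two deformation mechanisms; whichever
      # term dominates sets how sensitive the ice is to stress at that location.
      A_DISL, N_DISL = 1e-12, 4.0   # dislocation creep: highly stress-sensitive
      A_GBS,  N_GBS  = 1e-7,  1.8   # grain boundary sliding: less sensitive

      def strain_rate(stress_kpa):
          disl = A_DISL * stress_kpa ** N_DISL
          gbs = A_GBS * stress_kpa ** N_GBS
          dominant = "dislocation creep" if disl > gbs else "grain boundary sliding"
          return disl + gbs, dominant

      for tau in (10, 100, 300):    # driving stress in kilopascals
          rate, mechanism = strain_rate(tau)
          print(f"{tau:>3} kPa: strain rate {rate:.2e} (arbitrary units), dominated by {mechanism}")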


    Getting to systemic sustainability

Add up the commitments from the Paris Agreement, the Glasgow Climate Pact, and the various pledges made by cities, countries, and businesses, and the world would be able to hold the global average temperature increase to 1.9 degrees Celsius above preindustrial levels, says Ani Dasgupta, the president and chief executive officer of the World Resources Institute (WRI).

While that is well above the 1.5 C threshold that many scientists agree would limit the most severe impacts of climate change, it is below the 2.0 C threshold that could lead to even more catastrophic impacts, such as the collapse of ice sheets and a 30-foot rise in sea levels.

However, Dasgupta notes, actions have so far not matched commitments.

“There’s a huge gap between commitment and outcomes,” Dasgupta said during his talk, “Energizing the global transition,” at the 2024 Earth Day Colloquium, co-hosted by the MIT Energy Initiative and the MIT Department of Earth, Atmospheric and Planetary Sciences and sponsored by the Climate Nucleus.

Dasgupta noted that oil companies did $6 trillion worth of business across the world last year — $1 trillion more than they were planning. About 7 percent of the world’s remaining tropical forests were destroyed during that same time, he added, and global inequality grew even worse than before.

“None of these things were illegal, because the system we have today produces these outcomes,” he said. “My point is that it’s not one thing that needs to change. The whole system needs to change.”

People, climate, and nature

Dasgupta, who previously held positions in nonprofits in India and at the World Bank, is a recognized leader in sustainable cities, poverty alleviation, and building cultures of inclusion. Under his leadership, WRI, a global research nonprofit that studies sustainable practices with the goal of fundamentally transforming the world’s food, land and water, energy, and cities, adopted a new five-year strategy called “Getting the Transition Right for People, Nature, and Climate 2023-2027.” The strategy focuses on creating new economic opportunities to meet people’s essential needs, restore nature, and rapidly lower emissions, while building resilient communities. During his talk, Dasgupta said that his organization has moved away from framing initiatives in terms of their impact on greenhouse gas emissions — instead taking a more holistic view of sustainability.

“There is no net zero without nature,” Dasgupta said. He showed a slide with a graphic illustrating potential progress toward net-zero goals. “If nature gets diminished, that chart becomes even steeper. It’s very steep right now, but natural systems absorb carbon dioxide. So, if the natural systems keep getting destroyed, that curve becomes harder and harder.”

A focus on people is necessary, Dasgupta said, in part because of the unequal climate impacts that the rich and the poor are likely to face in the coming years. “If you made it to this room, you will not be impacted by climate change,” he said. “You have resources to figure out what to do about it. The people who get impacted are people who don’t have resources. It is immensely unfair. Our belief is, if we don’t do climate policy that helps people directly, we won’t be able to make progress.”

Where to start?

Although Dasgupta stressed that systemic change is needed to bring carbon emissions in line with long-term climate goals, he made the case that it is unrealistic to implement this change around the globe all at once.
“This transition will not happen in 196 countries at the same time,” he said. “The question is, how do we get to the tipping point so that it happens at scale? We’ve worked the past few years to ask the question, what is it you need to do to create this tipping point for change?”

Analysts at WRI looked for countries that are large producers of carbon, those with substantial tropical forest cover, and those with large numbers of people living in poverty. “We basically tried to draw a map of, where are the biggest challenges for climate change?” Dasgupta said.

That map features a relative handful of countries, including the United States, Mexico, China, Brazil, South Africa, India, and Indonesia. Dasgupta said, “Our argument is that, if we could figure out and focus all our efforts to help these countries transition, that will create a ripple effect — of understanding technology, understanding the market, understanding capacity, and understanding the politics of change that will unleash how the rest of these regions will bring change.”

Spotlight on the subcontinent

Dasgupta used one of these countries, his native India, to illustrate the nuanced challenges and opportunities presented by various markets around the globe. In India, he noted, around 3 million projected jobs are tied to the country’s transition to renewable energy. However, that number is dwarfed by the 10 million to 12 million jobs per year the Indian economy needs to create simply to keep up with population growth.

“Every developing country faces this question — how to keep growing in a way that reduces their carbon footprint,” Dasgupta said.

Five states in India worked with WRI to pool their buying power and procure 5,000 electric buses, saving 60 percent of the cost as a result. Over the next two decades, Dasgupta said, the fleet of electric buses in those five states is expected to increase to 800,000.

In the Indian state of Rajasthan, Dasgupta said, 59 percent of power already comes from solar energy. At times, Rajasthan produces more solar power than it can use, and officials are exploring ways to either store the excess energy or sell it to other states. But in another state, Jharkhand, where much of the country’s coal is sourced, only 5 percent of power comes from solar. Officials in Jharkhand have reached out to WRI to discuss how to transition their energy economy, recognizing that coal will fall out of favor in the future, Dasgupta said.

“The complexities of the transition are enormous in a country this big,” Dasgupta said. “This is true in most large countries.”

The road ahead

Despite the challenges ahead, the colloquium was also marked by notes of optimism. In his opening remarks, Robert Stoner, the founding director of the MIT Tata Center for Technology and Design, pointed out how much progress has been made on environmental cleanup since the first Earth Day in 1970. “The world was a very different, much dirtier, place in many ways,” Stoner said. “Our air was a mess, our waterways were a mess, and it was beginning to be noticeable.
Since then, Earth Day has become an important part of the fabric of American and global society.”

While Dasgupta said that the world presently lacks the “orchestration” among various stakeholders needed to bring climate change under control, he expressed hope that collaboration in key countries could accelerate progress.

“I strongly believe that what we need is a very different way of collaborating radically — across organizations like yours, organizations like ours, businesses, and governments,” Dasgupta said. “Otherwise, this transition will not happen at the scale and speed we need.”