More stories

  • Reducing carbon emissions from long-haul trucks

    People around the world rely on trucks to deliver the goods they need, and so-called long-haul trucks play a critical role in those supply chains. In the United States, long-haul trucks moved 71 percent of all freight in 2022. But those long-haul trucks are heavy polluters, especially of the carbon emissions that threaten the global climate. According to U.S. Environmental Protection Agency estimates, in 2022 more than 3 percent of all carbon dioxide (CO2) emissions came from long-haul trucks.

    The problem is that long-haul trucks run almost exclusively on diesel fuel, and burning diesel releases high levels of CO2 and other carbon emissions. Global demand for freight transport is projected to as much as double by 2050, so it’s critical to find another source of energy that will meet the needs of long-haul trucks while also reducing their carbon emissions. And conversion to the new fuel must not be costly. “Trucks are an indispensable part of the modern supply chain, and any increase in the cost of trucking will be felt universally,” notes William H. Green, the Hoyt Hottel Professor in Chemical Engineering and director of the MIT Energy Initiative.

    For the past year, Green and his research team have been seeking a low-cost, cleaner alternative to diesel. Finding a replacement is difficult because diesel meets the needs of the trucking industry so well. For one thing, diesel has a high energy density — that is, energy content per pound of fuel. There’s a legal limit on the total weight of a truck and its contents, so using an energy source with a lower weight allows the truck to carry more payload — an important consideration, given the low profit margin of the freight industry. In addition, diesel fuel is readily available at retail refueling stations across the country — a critical resource for drivers, who may travel 600 miles in a day and sleep in their truck rather than returning to their home depot. Finally, diesel fuel is a liquid, so it’s easy to distribute to refueling stations and then pump into trucks.

    Past studies have examined numerous alternative technology options for powering long-haul trucks, but no clear winner has emerged. Now, Green and his team have evaluated the available options based on consistent and realistic assumptions about the technologies involved and the typical operation of a long-haul truck, and assuming no subsidies to tip the cost balance. Their in-depth analysis of converting long-haul trucks to battery electric — summarized below — found a high cost and negligible emissions gains in the near term. Studies of methanol and other liquid fuels from biomass are ongoing, but already a major concern is whether the world can plant and harvest enough biomass for biofuels without destroying the ecosystem. An analysis of hydrogen — also summarized below — highlights specific challenges with using that clean-burning fuel, which is a gas at normal temperatures.

    Finally, the team identified an approach that could make hydrogen a promising, low-cost option for long-haul trucks. And, says Green, “it’s an option that most people are probably unaware of.” It involves a novel way of using materials that can pick up hydrogen, store it, and then release it when and where it’s needed to serve as a clean-burning fuel.

    Defining the challenge: A realistic drive cycle, plus diesel values to beat

    The MIT researchers believe that the lack of consensus on the best way to clean up long-haul trucking may have a simple explanation: Different analyses are based on different assumptions about the driving behavior of long-haul trucks. Indeed, some of them don’t accurately represent actual long-haul operations. So the first task for the MIT team was to define a representative — and realistic — “drive cycle” for actual long-haul truck operations in the United States. Then the MIT researchers — and researchers elsewhere — can assess potential replacement fuels and engines based on a consistent set of assumptions in modeling and simulation analyses.

    To define the drive cycle for long-haul operations, the MIT team used a systematic approach to analyze many hours of real-world driving data covering 58,000 miles. They examined 10 features and identified three — daily range, vehicle speed, and road grade — that have the greatest impact on energy demand and thus on fuel consumption and carbon emissions. The representative drive cycle that emerged covers a distance of 600 miles, an average vehicle speed of 55 miles per hour, and a road grade ranging from negative 6 percent to positive 6 percent.

    The next step was to generate key values for the performance of the conventional diesel “powertrain,” that is, all the components involved in creating power in the engine and delivering it to the wheels on the ground. Based on their defined drive cycle, the researchers simulated the performance of a conventional diesel truck, generating “benchmarks” for fuel consumption, CO2 emissions, cost, and other performance parameters.

    Now they could perform parallel simulations — based on the same drive-cycle assumptions — of possible replacement fuels and powertrains to see how the cost, carbon emissions, and other performance parameters would compare to the diesel benchmarks.

    The battery electric option

    When considering how to decarbonize long-haul trucks, a natural first thought is battery power. After all, battery electric cars and pickup trucks are proving highly successful. Why not switch to battery electric long-haul trucks? “Again, the literature is very divided, with some studies saying that this is the best idea ever, and other studies saying that this makes no sense,” says Sayandeep Biswas, a graduate student in chemical engineering.

    To assess the battery electric option, the MIT researchers used a physics-based vehicle model plus well-documented estimates for the efficiencies of key components such as the battery pack, generators, motor, and so on. Assuming the previously described drive cycle, they determined operating parameters, including how much power the battery-electric system needs. From there they could calculate the size and weight of the battery required to satisfy the power needs of the battery electric truck.

    The outcome was disheartening. Providing enough energy to travel 600 miles without recharging would require a 2 megawatt-hour battery. “That’s a lot,” notes Kariana Moreno Sader, a graduate student in chemical engineering. “It’s the same as what two U.S. households consume per month on average.” And the weight of such a battery would significantly reduce the amount of payload that could be carried. An empty diesel truck typically weighs 20,000 pounds. With a legal limit of 80,000 pounds, there’s room for 60,000 pounds of payload. The 2 MWh battery would weigh roughly 27,000 pounds — significantly reducing the allowable capacity for carrying payload.

    Accounting for that “payload penalty,” the researchers calculated that roughly four electric trucks would be required to replace every three of today’s diesel-powered trucks. Furthermore, each added truck would require an additional driver. The impact on operating expenses would be significant.
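
    The arithmetic behind that payload penalty is easy to sketch. The back-of-envelope check below is illustrative only: the per-mile energy use and the pack-level specific energy are assumed values, not figures from the MIT study, which relied on a detailed physics-based vehicle model.

        # Rough sanity check on the battery-electric numbers above.
        # The energy intensity and pack specific energy are assumptions,
        # not values from the MIT analysis.
        RANGE_MILES = 600            # representative drive-cycle distance
        ENERGY_PER_MILE_KWH = 3.3    # assumed consumption for a loaded long-haul truck
        PACK_WH_PER_KG = 170         # assumed pack-level specific energy
        LB_PER_KG = 2.205

        battery_kwh = RANGE_MILES * ENERGY_PER_MILE_KWH               # ~1,980 kWh, i.e., about 2 MWh
        battery_lb = battery_kwh * 1000 / PACK_WH_PER_KG * LB_PER_KG  # ~26,000 lb, near the cited 27,000 lb

        GROSS_LIMIT_LB = 80_000      # legal limit on truck plus contents
        EMPTY_DIESEL_LB = 20_000     # typical empty diesel truck
        diesel_payload_lb = GROSS_LIMIT_LB - EMPTY_DIESEL_LB          # 60,000 lb of payload

        # Crude estimate: treat the pack as pure added weight.
        electric_payload_lb = diesel_payload_lb - battery_lb
        print(f"battery: {battery_kwh:,.0f} kWh, roughly {battery_lb:,.0f} lb")
        print(f"payload: {diesel_payload_lb:,} lb diesel vs. roughly {electric_payload_lb:,.0f} lb electric")

    This crude version overstates the penalty somewhat, since it ignores the diesel engine, fuel tanks, and exhaust after-treatment that the battery and motors would replace; the study’s full vehicle model, which credits those offsets, is what yields the roughly four-for-three replacement ratio quoted above.
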
    Analyzing the emissions reductions that might result from shifting to battery electric long-haul trucks also brought disappointing results. One might assume that using electricity would eliminate CO2 emissions. But when the researchers included emissions associated with making that electricity, that wasn’t true.

    “Battery electric trucks are only as clean as the electricity used to charge them,” notes Moreno Sader. Most of the time, drivers of long-haul trucks will be charging from national grids rather than dedicated renewable energy plants. According to U.S. Energy Information Administration statistics, fossil fuels account for more than 60 percent of the current U.S. power grid’s generation, so electric trucks would still be responsible for significant levels of carbon emissions. Manufacturing batteries for the trucks would generate additional CO2 emissions.

    Building the charging infrastructure would require massive upfront capital investment, as would upgrading the existing grid to reliably meet additional energy demand from the long-haul sector. Accomplishing those changes would be costly and time-consuming, which raises further concern about electrification as a means of decarbonizing long-haul freight.

    In short, switching today’s long-haul diesel trucks to battery electric power would bring major increases in costs for the freight industry and negligible carbon emissions benefits in the near term. Analyses assuming various types of batteries as well as other drive cycles produced comparable results.

    However, the researchers are optimistic about where the grid is going in the future. “In the long term, say by around 2050, emissions from the grid are projected to be less than half what they are now,” says Moreno Sader. “When we do our calculations based on that prediction, we find that emissions from battery electric trucks would be around 40 percent lower than our calculated emissions based on today’s grid.”

    For Moreno Sader, the goal of the MIT research is to help “guide the sector on what would be the best option.” With that goal in mind, she and her colleagues are now examining the battery electric option under different scenarios — for example, assuming battery swapping (a depleted battery isn’t recharged but replaced by a fully charged one), short-haul trucking, and other applications that might produce a more cost-competitive outcome, even for the near term.

    A promising option: hydrogen

    As the world looks to move away from fossil fuels for all uses, much attention is focusing on hydrogen. Could hydrogen be a good alternative for today’s diesel-burning long-haul trucks?

    To find out, the MIT team performed a detailed analysis of the hydrogen option. “We thought that hydrogen would solve a lot of the problems we had with battery electric,” says Biswas. It doesn’t have associated CO2 emissions. Its energy density is far higher, so it doesn’t create the weight problem posed by heavy batteries. In addition, existing compression technology can get enough hydrogen fuel into a regular-sized tank to cover the needed distance and range. “You can actually give drivers the range they want,” he says. “There’s no issue with ‘range anxiety.’”

    But while using hydrogen for long-haul trucking would reduce carbon emissions, it would cost far more than diesel. Based on their detailed analysis of hydrogen, the researchers concluded that the main source of added cost is transporting it. Hydrogen can be made in a chemical facility, but then it needs to be distributed to refueling stations across the country. Conventionally, there have been two main ways of transporting hydrogen: as a compressed gas and as a cryogenic liquid. As Biswas notes, the former is “super high pressure,” and the latter is “super cold.” The researchers’ calculations show that as much as 80 percent of the cost of delivered hydrogen is due to transportation and refueling, plus there’s the need to build dedicated refueling stations that can meet new environmental and safety standards for handling hydrogen as a compressed gas or a cryogenic liquid.

    Having dismissed the conventional options for shipping hydrogen, they turned to a less-common approach: transporting hydrogen using “liquid organic hydrogen carriers” (LOHCs), special organic (carbon-containing) chemical compounds that can under certain conditions absorb hydrogen atoms and under other conditions release them.

    LOHCs are in use today to deliver small amounts of hydrogen for commercial use. Here’s how the process works: In a chemical plant, the carrier compound is brought into contact with hydrogen in the presence of a catalyst under elevated temperature and pressure, and the compound picks up the hydrogen. The “hydrogen-loaded” compound — still a liquid — is then transported under atmospheric conditions. When the hydrogen is needed, the compound is again exposed to a temperature increase and a different catalyst, and the hydrogen is released.

    LOHCs thus appear to be ideal hydrogen carriers for long-haul trucking. They’re liquid, so they can easily be delivered to existing refueling stations, where the hydrogen would be released; and they contain at least as much energy per gallon as hydrogen in a cryogenic liquid or compressed gas form. However, a detailed analysis of using hydrogen carriers showed that the approach would decrease emissions but at a considerable cost.

    The problem begins with the “dehydrogenation” step at the retail station. Releasing the hydrogen from the chemical carrier requires heat, which is generated by burning some of the hydrogen being carried by the LOHC. The researchers calculate that getting the needed heat takes 36 percent of that hydrogen. (In theory, the process would take only 27 percent — but in reality, that efficiency won’t be achieved.) So out of every 100 units of starting hydrogen, 36 units are now gone.

    But that’s not all. The hydrogen that comes out is at near-ambient pressure. So the facility dispensing the hydrogen will need to compress it — a process that the team calculates will use up 20-30 percent of the starting hydrogen.

    Because of the needed heat and compression, there’s now less than half of the starting hydrogen left to be delivered to the truck — and as a result, the hydrogen fuel becomes twice as expensive. The bottom line is that the technology works, but “when it comes to really beating diesel, the economics don’t work. It’s quite a bit more expensive,” says Biswas. In addition, the refueling stations would require expensive compressors and auxiliary units such as cooling systems. The capital investment and the operating and maintenance costs together imply that the market penetration of hydrogen refueling stations will be slow.
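
    The compounding losses described above can be tallied in a few lines. The sketch below simply chains the percentages quoted in this article; it is bookkeeping for illustration, not the team’s techno-economic model.

        # Station-side hydrogen losses for conventional LOHC delivery,
        # using the percentages quoted above.
        start_h2 = 100.0                 # units of hydrogen loaded onto the carrier

        heat_loss = 0.36 * start_h2      # burned at the station to supply dehydrogenation heat
        after_heat = start_h2 - heat_loss

        for comp_frac in (0.20, 0.30):   # compression takes 20-30% of the starting hydrogen
            delivered = after_heat - comp_frac * start_h2
            print(f"compression at {comp_frac:.0%}: {delivered:.0f} of {start_h2:.0f} units delivered")

        # Prints 44 and 34 units, i.e., less than half of the starting hydrogen,
        # which is why the delivered fuel ends up roughly twice as expensive.
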
    A better strategy: onboard release of hydrogen from LOHCs

    Given the potential benefits of using LOHCs, the researchers focused on how to deal with both the heat needed to release the hydrogen and the energy needed to compress it. “That’s when we had the idea,” says Biswas. “Instead of doing the dehydrogenation [hydrogen release] at the refueling station and then loading the truck with hydrogen, why don’t we just take the LOHC and load that onto the truck?” Like diesel, LOHC is a liquid, so it’s easily transported and pumped into trucks at existing refueling stations. “We’ll then make hydrogen as it’s needed based on the power demands of the truck — and we can capture waste heat from the engine exhaust and use it to power the dehydrogenation process,” says Biswas.

    In their proposed plan, hydrogen-loaded LOHC is created at a chemical “hydrogenation” plant and then delivered to a retail refueling station, where it’s pumped into a long-haul truck. Onboard the truck, the loaded LOHC pours into the fuel-storage tank. From there it moves to the “dehydrogenation unit” — the reactor where heat and a catalyst together promote chemical reactions that separate the hydrogen from the LOHC. The hydrogen is sent to the powertrain, where it burns, producing energy that propels the truck forward.

    Hot exhaust from the powertrain goes to a “heat-integration unit,” where its waste heat energy is captured and returned to the reactor to help encourage the reaction that releases hydrogen from the loaded LOHC. The unloaded LOHC is pumped back into the fuel-storage tank, where it’s kept in a separate compartment to keep it from mixing with the loaded LOHC. From there, it’s pumped back into the retail refueling station and then transported back to the hydrogenation plant to be loaded with more hydrogen.
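
    That loop of material and energy streams can be summarized schematically. The toy step function below is only a sketch of the bookkeeping implied by the description above; the efficiency, heat-recovery, and dehydrogenation-energy figures are illustrative assumptions, not the team’s design values.

        # Schematic of the onboard LOHC loop: loaded carrier in, hydrogen to the
        # powertrain, exhaust heat back to the reactor, spent carrier to a separate
        # compartment. All numbers are illustrative assumptions.
        H2_LHV_KWH_PER_KG = 33.3     # lower heating value of hydrogen
        DEHYDRO_KWH_PER_KG = 9.0     # assumed heat to release 1 kg of H2 (~27% of its energy)

        def drive_step(tank, power_demand_kw, hours,
                       engine_efficiency=0.45, exhaust_heat_fraction=0.30):
            """One interval of the loop: release hydrogen, burn it, recycle exhaust heat."""
            # Hydrogen the powertrain must burn to meet the power demand.
            h2_kg = power_demand_kw * hours / (engine_efficiency * H2_LHV_KWH_PER_KG)

            # Reactor heat demand versus waste heat recovered from the exhaust.
            heat_needed_kwh = h2_kg * DEHYDRO_KWH_PER_KG
            exhaust_heat_kwh = (power_demand_kw * hours / engine_efficiency) * exhaust_heat_fraction

            # Carrier moves from the loaded compartment to the spent compartment.
            tank["loaded_kg_h2"] -= h2_kg
            tank["spent_kg_h2"] += h2_kg

            return {"h2_released_kg": round(h2_kg, 1),
                    "exhaust_heat_covers_reactor": exhaust_heat_kwh >= heat_needed_kwh}

        tank = {"loaded_kg_h2": 60.0, "spent_kg_h2": 0.0}   # hydrogen held as loaded LOHC
        print(drive_step(tank, power_demand_kw=150, hours=1.0), tank)
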
    Switching to onboard dehydrogenation brings down costs by eliminating the need for extra hydrogen compression and by using waste heat in the engine exhaust to drive the hydrogen-release process. So how does their proposed strategy look compared to diesel? Based on a detailed analysis, the researchers determined that using their strategy would be 18 percent more expensive than using diesel, and emissions would drop by 71 percent.

    But those results need some clarification. The 18 percent cost premium of using LOHC with onboard hydrogen release is based on the price of diesel fuel in 2020. In spring of 2023 the price was about 30 percent higher. Assuming the 2023 diesel price, the LOHC option is actually cheaper than using diesel.

    Both the cost and emissions outcomes are affected by another assumption: the use of “blue hydrogen,” which is hydrogen produced from natural gas with carbon capture and storage. Another option is to assume the use of “green hydrogen,” which is hydrogen produced using electricity generated from renewable sources, such as wind and solar. Green hydrogen is much more expensive than blue hydrogen, so the costs would increase dramatically.

    If in the future the price of green hydrogen drops, the researchers’ proposed plan would shift to green hydrogen — and then the decline in emissions would no longer be 71 percent but rather close to 100 percent. There would be almost no emissions associated with the researchers’ proposed plan for using LOHCs with onboard hydrogen release.

    Comparing the options on cost and emissions

    To compare the options, Moreno Sader prepared bar charts showing the per-mile cost of shipping by truck in the United States and the CO2 emissions that result using each of the fuels and approaches discussed above: diesel fuel, battery electric, hydrogen as a cryogenic liquid or compressed gas, and LOHC with onboard hydrogen release. The LOHC strategy with onboard dehydrogenation looked promising on both the cost and the emissions charts. In addition to such quantitative measures, the researchers believe that their strategy addresses two other, less-obvious challenges in finding a less-polluting fuel for long-haul trucks.

    First, the introduction of the new fuel and trucks to use it must not disrupt the current freight-delivery setup. “You have to keep the old trucks running while you’re introducing the new ones,” notes Green. “You cannot have even a day when the trucks aren’t running because it’d be like the end of the economy. Your supermarket shelves would all be empty; your factories wouldn’t be able to run.” The researchers’ plan would be completely compatible with the existing diesel supply infrastructure and would require relatively minor retrofits to today’s long-haul trucks, so the current supply chains would continue to operate while the new fuel and retrofitted trucks are introduced.

    Second, the strategy has the potential to be adopted globally. Long-haul trucking is important in other parts of the world, and Moreno Sader thinks that “making this approach a reality is going to have a lot of impact, not only in the United States but also in other countries,” including her own country of origin, Colombia. “This is something I think about all the time.” The approach is compatible with the current diesel infrastructure, so the only requirement for adoption is to build the chemical hydrogenation plant. “And I think the capital expenditure related to that will be less than the cost of building a new fuel-supply infrastructure throughout the country,” says Moreno Sader.

    Testing in the lab

    “We’ve done a lot of simulations and calculations to show that this is a great idea,” notes Biswas. “But there’s only so far that math can go to convince people.” The next step is to demonstrate their concept in the lab.

    To that end, the researchers are now assembling all the core components of the onboard hydrogen-release reactor as well as the heat-integration unit that’s key to transferring heat from the engine exhaust to the hydrogen-release reactor. They estimate that this spring they’ll be ready to demonstrate their ability to release hydrogen and confirm the rate at which it’s formed. And — guided by their modeling work — they’ll be able to fine-tune critical components for maximum efficiency and best performance.

    The next step will be to add an appropriate engine, specially equipped with sensors to provide the critical readings they need to optimize the performance of all their core components together. By the end of 2024, the researchers hope to achieve their goal: the first experimental demonstration of a power-dense, robust onboard hydrogen-release system with highly efficient heat integration.

    In the meantime, they believe that results from their work to date should help spread the word, bringing their novel approach to the attention of other researchers and experts in the trucking industry who are now searching for ways to decarbonize long-haul trucking.

    Financial support for development of the representative drive cycle and the diesel benchmarks as well as the analysis of the battery electric option was provided by the MIT Mobility Systems Center of the MIT Energy Initiative. Analysis of LOHC-powered trucks with onboard dehydrogenation was supported by the MIT Climate and Sustainability Consortium. Sayandeep Biswas is supported by a fellowship from the Martin Family Society of Fellows for Sustainability, and Kariana Moreno Sader received fellowship funding from MathWorks through the MIT School of Science.

  • Getting to systemic sustainability

    Add up the commitments from the Paris Agreement, the Glasgow Climate Pact, and various commitments made by cities, countries, and businesses, and the world would be able to hold the global average temperature increase to 1.9 degrees Celsius above preindustrial levels, says Ani Dasgupta, the president and chief executive officer of the World Resources Institute (WRI).

    While that is well above the 1.5-degree threshold that many scientists agree would limit the most severe impacts of climate change, it is below the 2.0-degree threshold beyond which even more catastrophic impacts, such as the collapse of ice sheets and a 30-foot rise in sea levels, could follow.

    However, Dasgupta notes, actions have so far not matched up with commitments.

    “There’s a huge gap between commitment and outcomes,” Dasgupta said during his talk, “Energizing the global transition,” at the 2024 Earth Day Colloquium co-hosted by the MIT Energy Initiative and the MIT Department of Earth, Atmospheric and Planetary Sciences, and sponsored by the Climate Nucleus.

    Dasgupta noted that oil companies did $6 trillion worth of business across the world last year — $1 trillion more than they were planning. About 7 percent of the world’s remaining tropical forests were destroyed during that same time, he added, and global inequality grew even worse than before.

    “None of these things were illegal, because the system we have today produces these outcomes,” he said. “My point is that it’s not one thing that needs to change. The whole system needs to change.”

    People, climate, and nature

    Dasgupta, who previously held positions in nonprofits in India and at the World Bank, is a recognized leader in sustainable cities, poverty alleviation, and building cultures of inclusion. Under his leadership, WRI, a global research nonprofit that studies sustainable practices with the goal of fundamentally transforming the world’s food, land and water, energy, and cities, adopted a new five-year strategy called “Getting the Transition Right for People, Nature, and Climate 2023-2027.” It focuses on creating new economic opportunities to meet people’s essential needs, restore nature, and rapidly lower emissions, while building resilient communities. In fact, during his talk, Dasgupta said that his organization has moved away from talking about initiatives in terms of their impact on greenhouse gas emissions — instead taking a more holistic view of sustainability.

    “There is no net zero without nature,” Dasgupta said. He showed a slide with a graphic illustrating potential progress toward net-zero goals. “If nature gets diminished, that chart becomes even steeper. It’s very steep right now, but natural systems absorb carbon dioxide. So, if the natural systems keep getting destroyed, that curve becomes harder and harder.”

    A focus on people is necessary, Dasgupta said, in part because of the unequal climate impacts that the rich and the poor are likely to face in the coming years. “If you made it to this room, you will not be impacted by climate change,” he said. “You have resources to figure out what to do about it. The people who get impacted are people who don’t have resources. It is immensely unfair. Our belief is, if we don’t do climate policy that helps people directly, we won’t be able to make progress.”

    Where to start?

    Although Dasgupta stressed that systemic change is needed to bring carbon emissions in line with long-term climate goals, he made the case that it is unrealistic to implement this change around the globe all at once. “This transition will not happen in 196 countries at the same time,” he said. “The question is, how do we get to the tipping point so that it happens at scale? We’ve worked the past few years to ask the question, what is it you need to do to create this tipping point for change?”

    Analysts at WRI looked for countries that are large producers of carbon, those with substantial tropical forest cover, and those with large numbers of people living in poverty. “We basically tried to draw a map of, where are the biggest challenges for climate change?” Dasgupta said.

    That map features a relative handful of countries, including the United States, Mexico, China, Brazil, South Africa, India, and Indonesia. Dasgupta said, “Our argument is that, if we could figure out and focus all our efforts to help these countries transition, that will create a ripple effect — of understanding technology, understanding the market, understanding capacity, and understanding the politics of change that will unleash how the rest of these regions will bring change.”

    Spotlight on the subcontinent

    Dasgupta used one of these countries, his native India, to illustrate the nuanced challenges and opportunities presented by various markets around the globe. In India, he noted, there are around 3 million projected jobs tied to the country’s transition to renewable energy. However, that number is dwarfed by the 10 to 12 million jobs per year the Indian economy needs to create simply to keep up with population growth.

    “Every developing country faces this question — how to keep growing in a way that reduces their carbon footprint,” Dasgupta said.

    Five states in India worked with WRI to pool their buying power and procure 5,000 electric buses, saving 60 percent of the cost as a result. Over the next two decades, Dasgupta said, the fleet of electric buses in those five states is expected to increase to 800,000.

    In the Indian state of Rajasthan, Dasgupta said, 59 percent of power already comes from solar energy. At times, Rajasthan produces more solar than it can use, and officials are exploring ways to either store the excess energy or sell it to other states. But in another state, Jharkhand, where much of the country’s coal is sourced, only 5 percent of power comes from solar. Officials in Jharkhand have reached out to WRI to discuss how to transition their energy economy, as they recognize that coal will fall out of favor in the future, Dasgupta said.

    “The complexities of the transition are enormous in a country this big,” Dasgupta said. “This is true in most large countries.”

    The road ahead

    Despite the challenges ahead, the colloquium was also marked by notes of optimism. In his opening remarks, Robert Stoner, the founding director of the MIT Tata Center for Technology and Design, pointed out how much progress has been made on environmental cleanup since the first Earth Day in 1970. “The world was a very different, much dirtier, place in many ways,” Stoner said. “Our air was a mess, our waterways were a mess, and it was beginning to be noticeable. Since then, Earth Day has become an important part of the fabric of American and global society.”

    While Dasgupta said that the world presently lacks the “orchestration” among various stakeholders needed to bring climate change under control, he expressed hope that collaboration in key countries could accelerate progress.

    “I strongly believe that what we need is a very different way of collaborating radically — across organizations like yours, organizations like ours, businesses, and governments,” Dasgupta said. “Otherwise, this transition will not happen at the scale and speed we need.”

  • MIT scholars will take commercial break with entrepreneurial scholarship

    Two MIT scholars, each with a strong entrepreneurial drive, have received 2024 Kavanaugh Fellowship awards, advancing their quest to turn pioneering research into profitable commercial enterprises.

    The Kavanaugh Translational Fellows Program gives scholars training to lead organizations that will bring their research to market. PhD candidates Grant Knappe and Arjav Shah are this year’s recipients. Knappe is developing a drug delivery platform for an emerging class of medicines called nucleic acid therapeutics. Shah is using hydrogel microparticles to clean up water polluted by heavy metals and other contaminants.

    Knappe and Shah will begin their fellowship with years of entrepreneurial expertise under their belts. They’ve developed and refined their business plans through MIT’s innovation ecosystem, including the Sandbox, the Legatum Center, the Venture Mentoring Service, the National Science Foundation’s I-Corps Program, and Blueprint by The Engine. Now, the yearlong Kavanaugh Fellowship will give the scholars time to focus exclusively on testing their business plans and exercising decision-making skills — critical to startup success — with the guidance of MIT mentors.

    “It’s a testament to the support and direction they’ve received from the MIT community that their entrepreneurial aspirations have evolved and matured over time,” says Michael J. Cima, program director for the Kavanaugh program and the David H. Koch Professor of Engineering in the Department of Materials Science and Engineering.

    Founded in 2016, the Kavanaugh program was instrumental in helping past fellows launch several robust startups, including low-carbon cement manufacturer Sublime Systems and SiTration, which is using a new type of filtration membrane to extract critical materials such as lithium.

    A safer way to deliver breakthrough medicines

    Nucleic acid therapeutics, including mRNA and CRISPR, are disrupting today’s clinical landscape thanks to their promise of targeting disease treatment according to genetic blueprints. But the first methods of delivering these molecules to the body used viruses as their transport, raising patient safety concerns.

    “Humans have figured out how to engineer certain viruses found in nature to deliver specific cargoes [for disease treatment],” says Knappe. “But because they look like viruses, the human immune system sees them as a danger signal and creates an immune reaction that can be harmful to patients.”

    Given the safety issues of viral delivery, researchers turned to non-viral alternatives based on lipid nanoparticles: mixtures of different lipid-like materials assembled into particles that protect the mRNA therapeutic from being degraded before it reaches a cell of interest. “Because they don’t look like viruses there, the immune system generally tolerates them,” adds Knappe.

    Recent data show lipid nanoparticles can now target the lung, opening the potential for novel treatments of deadly cancers and other diseases.

    Knappe’s work in MIT’s Bathe BioNanoLab focused on building such a non-viral delivery platform based on a different technology: nucleic acid nanoparticles, which combine the attractive components of both viral and non-viral systems. Knappe will spend his Kavanaugh Fellowship year developing proof-of-concept data for his drug delivery method and building the team and funding needed to commercialize the technology.

    A PhD candidate in the Department of Chemical Engineering (ChemE), Knappe was initially attracted to MIT because of its intellectual openness. “You can work with any faculty member in other departments. I wasn’t restricted to the chemical engineering faculty,” says Knappe, whose supervisor, Professor Mark Bathe, is in the Department of Biological Engineering.

    Knappe, who is from New Jersey, welcomes the challenges that will come in his Kavanaugh year, including the need to pinpoint the right story that will convince venture capitalists and other funders to bet on his technology. Attracting talent is also top of mind. “How do you convince really talented people that have a lot of opportunities to work on what you work on? Building the first team is going to be critical,” he says. The network Knappe has been building in his years at MIT is paying dividends now.

    Targeting “forever chemicals” in water

    That network includes Shah. The two fellows met when they worked on the MIT Science Policy Review, a student-run journal concerned with the intersection of science, technology, and policy. Knappe and Shah did not compete directly academically but used their biweekly coffee walks as a welcome sounding board. Naturally, they were pleased when they found out they had both been chosen for the Kavanaugh Fellowship. So far, they have been too busy to celebrate over a beer.

    “We are good collaborators with research, as well,” says Shah. “Now we’re going on this entrepreneurial journey together. It’s been exciting.”

    Shah is a PhD candidate in ChemE’s Chemical Engineering Practice program. He got interested in the global imperative for cleaner water at a young age. His hometown of Surat is the heart of India’s textile industry. “Growing up, it wasn’t hard to see the dye-colored water flowing into your rivers and streams,” Shah says. “Playing a role in fostering positive change in water treatment fills me with a profound sense of purpose.”

    Shah’s work, broadly, is to clean toxic chemicals called micropollutants from water in an efficient and sustainable manner. “It’s humanly impossible to turn a blind eye to our water problems,” he says. Those problems can be categorized as accessibility, availability, and quality, and they are global and complex, not just because of the technological challenges but also the sociopolitical ones, he adds.

    Manufactured chemicals called per- and polyfluoroalkyl substances (PFAS), or “forever chemicals,” are in the news these days. PFAS, which go into making nonstick cookware and waterproof clothing, are just one of more than 10,000 such emerging contaminants that have leached into water streams. “These are extremely difficult to remove using existing systems because of their chemical diversity and low concentrations,” Shah says. “The concentrations are akin to dropping an aspirin tablet in an Olympic-sized swimming pool.” But they are no less toxic for that.
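
    Shah’s swimming-pool analogy can be checked with a quick calculation. The tablet mass and pool volume below are generic assumptions, not numbers from Shah or from the article.

        # Rough check of the "aspirin tablet in an Olympic pool" analogy.
        tablet_mg = 325.0                  # a standard aspirin tablet
        pool_liters = 50 * 25 * 2 * 1000   # 50 m x 25 m x 2 m pool, in liters

        concentration_ug_per_l = tablet_mg * 1000 / pool_liters
        print(f"{concentration_ug_per_l:.2f} micrograms per liter")   # ~0.13 ug/L

        # About 0.1 parts per billion, i.e., on the order of a hundred nanograms per
        # liter -- the same trace range at which PFAS turn up in contaminated water.
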
    In the lab at MIT, Shah is working with Devashish Gokhale, a fellow PhD student, and Patrick S. Doyle, the Robert T. Haslam (1911) Professor of Chemical Engineering, to commercialize an innovative microparticle technology, hydroGel, that removes these micropollutants in an effective, facile, and sustainable manner. Hydrogels are a broad class of polymer materials that can hold large quantities of water.

    “Our materials are like Boba beads. We are trying to save the world with our Boba beads,” says Shah with a laugh. “And we have functionalized these particles with tunable chemistries to target different micropollutants in a single unit operation.”

    Due to its outsized environmental impact, industrial water is the first application Shah is targeting. Today, wastewater treatment emits more than 3 percent of global carbon dioxide emissions, which is more than the shipping industry’s emissions, for example. The current state of the art for removing micropollutants in the industry is to use activated carbon filters. “[This technology] comes from coal, so it’s unsustainable,” Shah says. And the activated carbon filters are hard to reuse. “Our particles are reusable, theoretically infinitely.”

    “I’m very excited to be able to take advantage of the mentorship we have from the Kavanaugh team to take this technology to its next inflection point, so that we are ready to go out in the market and start making a huge impact,” he says.

    A dream community

    Shah and Knappe have become adept at navigating the array of support and mentorship opportunities MIT has to offer. Shah worked with a small team of seasoned professionals in the water space from the MIT Venture Mentoring Service. “They’ve helped us every step of the way as we think about commercializing the technology,” he says.

    Shah worked with MIT Sandbox, which provides a seed grant to help find the right product-market fit. He is also a fellow with the Legatum Center for Development and Entrepreneurship, which focuses on entrepreneurship in emerging countries and growth markets.

    “We’re exploring the potential for this technology and its application in a lot of different markets, including India. Because that’s close to my heart,” Shah says. “The Legatum community has been unique, where you can have those extremely hard conversations, confront yourself with those fears, and then talk it out with the group of fellows.”

    The Abdul Latif Jameel Water and Food Systems Lab, or J-WAFS, has been an integral part of Shah’s journey with research and commercialization support through its Solutions Grant and a travel award to the Stockholm World Water Week in August 2023.

    Knappe has also taken advantage of many innovation programs, including MIT’s Blueprint by The Engine, which helps researchers explore commercial opportunities of their work, plus programs outside of MIT but with strong on-campus ties such as Nucleate Activator and Frequency Bio.

    It was during one of these programs that he was inspired by two postdocs working in Bathe’s lab and spinning out biotech startups from their research, Floris Engelhardt and James Banal. Engelhardt helped spearhead Kano Therapeutics, and Banal launched Cache DNA.

    “I was passively absorbing and watching everything that they were going through and what they were excited about and challenged with. I still talk to them pretty regularly to this day,” Knappe says. “It’s been really great to have them as continual mentors, throughout my PhD and as I transition out of the lab.”

    Shah says he is grateful not only for being selected for the Kavanaugh Fellowship but to MIT as a community. “MIT has been more than a dream come true,” he says. He will have the opportunity to explore a different side of the institution as he enters the MBA program at the MIT Sloan School of Management this fall. Shah expects this program, along with his Kavanaugh training, will supply the skills he needs to scale the business so it can make a difference in the world.

    “I always keep coming back to the question ‘How does what I do matter to the person on the street?’ This guides me to look at the bigger picture, to contextualize my research to solving important problems,” Shah says. “So many great technologies are being worked on each day, but only a minuscule fraction make it to the market.”

    Knappe is equally dedicated to serving a larger purpose. “With the right infrastructure, between basic fundamental science, conducted in academia, funded by government, and then translated by companies, we can make products that could improve everyone’s life across the world,” he says.

    Past Kavanaugh Fellows are credited with spearheading commercial outfits that have indeed made a difference. This year’s fellows are poised to follow their lead. But first they will have that beer together to celebrate.

  • Repurposed beer yeast may offer a cost-effective way to remove lead from water

    Every year, beer breweries generate and discard thousands of tons of surplus yeast. Researchers from MIT and Georgia Tech have now come up with a way to repurpose that yeast to absorb lead from contaminated water.

    Through a process called biosorption, yeast can quickly absorb even trace amounts of lead and other heavy metals from water. The researchers showed that they could package the yeast inside hydrogel capsules to create a filter that removes lead from water. Because the yeast cells are encapsulated, they can be easily removed from the water once it’s ready to drink.

    “We have the hydrogel surrounding the free yeast that exists in the center, and this is porous enough to let water come in, interact with yeast as if they were freely moving in water, and then come out clean,” says Patricia Stathatou, a former postdoc at the MIT Center for Bits and Atoms, who is now a research scientist at Georgia Tech and an incoming assistant professor at Georgia Tech’s School of Chemical and Biomolecular Engineering. “The fact that the yeast themselves are bio-based, benign, and biodegradable is a significant advantage over traditional technologies.”

    The researchers envision that this process could be used to filter drinking water coming out of a faucet in homes, or scaled up to treat large quantities of water at treatment plants.

    MIT graduate student Devashish Gokhale and Stathatou are the lead authors of the study, which appears today in the journal RSC Sustainability. Patrick Doyle, the Robert T. Haslam Professor of Chemical Engineering at MIT, is the senior author of the paper, and Christos Athanasiou, an assistant professor of aerospace engineering at Georgia Tech and a former visiting scholar at MIT, is also an author.

    Absorbing lead

    The new study builds on work that Stathatou and Athanasiou began in 2021, when Athanasiou was a visiting scholar at MIT’s Center for Bits and Atoms. That year, they calculated that waste yeast discarded from a single brewery in Boston would be enough to treat the city’s entire water supply.

    Through biosorption, a process that is not fully understood, yeast cells can bind to and absorb heavy metal ions, even at challenging initial concentrations below 1 part per million. The MIT team found that this process could effectively decontaminate water with low concentrations of lead. However, one key obstacle remained, which was how to remove yeast from the water after they absorb the lead.

    In a serendipitous coincidence, Stathatou and Athanasiou happened to present their research at the AIChE Annual Meeting in Boston in 2021, where Gokhale, a student in Doyle’s lab, was presenting his own research on using hydrogels to capture micropollutants in water. The two sets of researchers decided to join forces and explore whether the yeast-based strategy could be easier to scale up if the yeast were encapsulated in hydrogels developed by Gokhale and Doyle.

    “What we decided to do was make these hollow capsules — something like a multivitamin pill, but instead of filling them up with vitamins, we fill them up with yeast cells,” Gokhale says. “These capsules are porous, so the water can go into the capsules and the yeast are able to bind all of that lead, but the yeast themselves can’t escape into the water.”

    The capsules are made from a polymer called polyethylene glycol (PEG), which is widely used in medical applications. To form the capsules, the researchers suspend freeze-dried yeast in water, then mix them with the polymer subunits. When UV light is shone on the mixture, the polymers link together to form capsules with yeast trapped inside.

    Each capsule is about half a millimeter in diameter. Because the hydrogels are very thin and porous, water can easily pass through and encounter the yeast inside, while the yeast remain trapped.

    In this study, the researchers showed that the encapsulated yeast could remove trace lead from water just as rapidly as the unencapsulated yeast from Stathatou and Athanasiou’s original 2021 study.

    Scaling up

    Led by Athanasiou, the researchers tested the mechanical stability of the hydrogel capsules and found that the capsules and the yeast inside can withstand forces similar to those generated by water running from a faucet. They also calculated that the yeast-laden capsules should be able to withstand forces generated by flows in water treatment plants serving several hundred residences.

    “Lack of mechanical robustness is a common cause of failure of previous attempts to scale-up biosorption using immobilized cells; in our work we wanted to make sure that this aspect is thoroughly addressed from the very beginning to ensure scalability,” Athanasiou says.

    After assessing the mechanical robustness of the yeast-laden capsules, the researchers constructed a proof-of-concept packed-bed biofilter, capable of treating trace lead-contaminated water and meeting U.S. Environmental Protection Agency drinking water guidelines while operating continuously for 12 days.

    This process would likely consume less energy than existing physicochemical processes for removing trace inorganic compounds from water, such as precipitation and membrane filtration, the researchers say.

    This approach, rooted in circular economy principles, could minimize waste and environmental impact while also fostering economic opportunities within local communities. Although numerous lead contamination incidents have been reported in various locations in the United States, this approach could have an especially significant impact in low-income areas that have historically faced environmental pollution and limited access to clean water, and may not be able to afford other ways to remediate it, the researchers say.

    “We think that there’s an interesting environmental justice aspect to this, especially when you start with something as low-cost and sustainable as yeast, which is essentially available anywhere,” Gokhale says.

    The researchers are now exploring strategies for recycling and replacing the yeast once they’re used up, and trying to calculate how often that will need to occur. They also hope to investigate whether they could use feedstocks derived from biomass to make the hydrogels, instead of fossil-fuel-based polymers, and whether the yeast can be used to capture other types of contaminants.

    “Moving forward, this is a technology that can be evolved to target other trace contaminants of emerging concern, such as PFAS or even microplastics,” Stathatou says. “We really view this as an example with a lot of potential applications in the future.”

    The research was funded by the Rasikbhai L. Meswani Fellowship for Water Solutions, the MIT Abdul Latif Jameel Water and Food Systems Lab (J-WAFS), and the Renewable Bioproducts Institute at Georgia Tech.

  • Q&A: Claire Walsh on how J-PAL’s King Climate Action Initiative tackles the twin climate and poverty crises

    The King Climate Action Initiative (K-CAI) is the flagship climate change program of the Abdul Latif Jameel Poverty Action Lab (J-PAL), which innovates, tests, and scales solutions at the nexus of climate change and poverty alleviation, together with policy partners worldwide.

    Claire Walsh is the associate director of policy at J-PAL Global at MIT. She is also the project director of K-CAI. Here, Walsh talks about the work of K-CAI since its launch in 2020, and describes the ways its projects are making a difference. This is part of an ongoing series exploring how the MIT School of Humanities, Arts, and Social Sciences is addressing the climate crisis.

    Q: According to the King Climate Action Initiative (K-CAI), any attempt to address poverty effectively must also simultaneously address climate change. Why is that?

    A: Climate change will disproportionately harm people in poverty, particularly in low- and middle-income countries, because they tend to live in places that are more exposed to climate risk. These are nations in sub-Saharan Africa and South and Southeast Asia where low-income communities rely heavily on agriculture for their livelihoods, so extreme weather — heat, droughts, and flooding — can be devastating for people’s jobs and food security. In fact, the World Bank estimates that up to 130 million more people may be pushed into poverty by climate change by 2030.

    This is unjust because these countries have historically emitted the least; their people didn’t cause the climate crisis. At the same time, they are trying to improve their economies and improve people’s welfare, so their energy demands are increasing, and they are emitting more. But they don’t have the same resources as wealthy nations for mitigation or adaptation, and many developing countries understandably don’t feel eager to put solving a problem they didn’t create at the top of their priority list. This makes finding paths forward to cutting emissions on a global scale politically challenging.

    For these reasons, the problems of enhancing the well-being of people experiencing poverty, addressing inequality, and reducing pollution and greenhouse gases are inextricably linked.

    Q: So how does K-CAI tackle this hybrid challenge?

    A: Our initiative is pretty unique. We are a competitive, policy-based research and development fund that focuses on innovating, testing, and scaling solutions. We support researchers from MIT and other universities, and their collaborators, who are actually implementing programs, whether NGOs [nongovernmental organizations], government, or the private sector. We fund pilots of small-scale ideas in a real-world setting to determine if they hold promise, followed by larger randomized, controlled trials of promising solutions in climate change mitigation, adaptation, pollution reduction, and energy access. Our goal is to determine, through rigorous research, if these solutions are actually working — for example, in cutting emissions or protecting forests or helping vulnerable communities adapt to climate change. And finally, we offer path-to-scale grants which enable governments and NGOs to expand access to programs that have been tested and have strong evidence of impact.

    We think this model is really powerful. Since we launched in 2020, we have built a portfolio of over 30 randomized evaluations and 13 scaling projects in more than 35 countries. And to date, these projects have informed the scale-ups of evidence-based climate policies that have reached over 15 million people.

    Q: It seems like K-CAI is advancing a kind of policy science, demanding proof of a program’s capacity to deliver results at each stage. 

    A: This is one of the factors that drew me to J-PAL back in 2012. I majored in anthropology and studied abroad in Uganda. From those experiences I became very passionate about pursuing a career focused on poverty reduction. To me, it is unfair that in a world full of so much wealth and so much opportunity there exists so much extreme poverty. I wanted to dedicate my career to that, but I’m also a very detail-oriented nerd who really cares about whether a program that claims to be doing something for people is accomplishing what it claims.

    It’s been really rewarding to see demand from governments and NGOs for evidence-informed policymaking grow over my 12 years at J-PAL. This policy science approach holds exciting promise to help transform public policy and climate policy in the coming decades.  

    Q: Can you point to K-CAI-funded projects that meet this high bar and are now making a significant impact?

    A: Several examples jump to mind. In the state of Gujarat, India, pollution regulators are trying to cut particulate matter air pollution, which is devastating to human health. The region is home to many major industries whose emissions negatively affect most of the state’s 70 million residents.

    We partnered with state pollution regulators — kind of a regional EPA [Environmental Protection Agency] — to test an emissions trading scheme that is used widely in the U.S. and Europe but not in low- and middle-income countries. The government monitors pollution levels using technology installed at factories that sends data in real time, so the regulator knows exactly what their emissions look like. The regulator sets a cap on the overall level of pollution, allocates permits to pollute, and industries can trade emissions permits.

    In 2019, researchers in the J-PAL network conducted the world’s first randomized, controlled trial of this emissions trading scheme and found that it cut pollution by 20 to 30 percent — a surprising reduction. It also reduced firms’ costs, on average, because the costs of compliance went down. The state government was eager to scale up the pilot, and in the past two years, two other cities, including Ahmedabad, the biggest city in the state, have adopted the concept.

    We are also supporting a project in Niger, whose economy is hugely dependent on rain-fed agriculture but which, with climate change, is experiencing rapid desertification. Researchers in the J-PAL network have been testing the effects of training farmers in a simple, inexpensive rainwater harvesting technique, in which farmers dig a half-moon-shaped hole called a demi-lune right before the rainy season. This demi-lune feeds crops that are grown directly on top of it, and helps return land that resembled flat desert to arable production.

    Researchers found that training farmers in this simple technology increased adoption from 4 percent to 94 percent and that demi-lunes increased agricultural output and revenue for farmers from the first year. K-CAI is funding a path-to-scale grant so local implementers can teach this technique to over 8,000 farmers and build a more cost-effective program model. If this takes hold, the team will work with local partners to scale the training to other relevant regions of the country and potentially other countries in the Sahel.

    One final example that we are really proud of, because we first funded it as a pilot and now it’s in the path to scale phase: We supported a team of researchers working with partners in Bangladesh trying to reduce carbon emissions and other pollution from brick manufacturing, an industry that generates 17 percent of the country’s carbon emissions. The scale of manufacturing is so great that at some times of year, Dhaka (the capital of Bangladesh) looks like Mordor.

    Workers form these bricks and stack hundreds of thousands of them, which they then fire by burning coal. A team of local researchers and collaborators from our J-PAL network found that you can reduce the amount of coal needed for the kilns by making some low-cost changes to the manufacturing process, including stacking the bricks in a way that increases airflow in the kiln and feeding the coal fires more frequently in smaller rather than larger batches.

    In the randomized, controlled trial K-CAI supported, researchers found that this cut carbon and pollution emissions significantly, and now the government has invited the team to train 1,000 brick manufacturers in Dhaka in these techniques.

    Q: These are all fascinating and powerful instances of implementing ideas that address a range of problems in different parts of the world. But can K-CAI go big enough and fast enough to take a real bite out of the twin poverty and climate crisis?

    A: We’re not trying to find silver bullets. We are trying to build a large playbook of real solutions that work to solve specific problems in specific contexts. As you build those up in the hundreds, you have a deep bench of effective approaches to solve problems that can add up in a meaningful way. And because J-PAL works with governments and NGOs that have the capacity to take the research into action, since 2003, over 600 million people around the world have been reached by policies and programs that are informed by evidence that J-PAL-affiliated researchers produced. While global challenges seem daunting, J-PAL has shown that in 20 years we can achieve a great deal, and there is huge potential for future impact.

    But unfortunately, globally, there is an underinvestment in policy innovation to combat climate change that may generate quicker, lower-cost returns at a large scale — especially in policies that determine which technologies get adopted or commercialized. For example, a lot of the huge fall in prices of renewable energy was enabled by early European government investments in solar and wind, and then continuing support for innovation in renewable energy.

    That’s why I think social sciences have so much to offer in the fight against climate change and poverty; we are working where technology meets policy and where technology meets real people, which often determines their success or failure. The world should be investing in policy, economic, and social innovation just as much as it is investing in technological innovation.

    Q: Do you need to be an optimist in your job?

    A: I am half-optimist, half-pragmatist. I have no control over the climate change outcome for the world. And regardless of whether we can successfully avoid most of the potential damages of climate change, when I look back, I’m going to ask myself, “Did I fight or not?” The only choice I have is whether or not I fought, and I want to be a fighter.

  • in

    Shining a light on oil fields to make them more sustainable

    Operating an oil field is complex and there is a staggeringly long list of things that can go wrong.

    One of the most common problems is spills of the salty brine that’s a toxic byproduct of pumping oil. Another is over- or under-pumping, which can lead to machine failure and methane leaks. (The oil and gas industry is the largest industrial emitter of methane in the U.S.) Then there are extreme weather events, ranging from winter frosts to blazing heat, that can put equipment out of commission for months. One of the wildest problems Sebastien Mannai SM ’14, PhD ’18 has encountered is hogs that pop open oil tanks with their snouts to enjoy on-demand oil baths.

    Mannai helps oil field owners detect and respond to these problems while optimizing the operation of their machinery to prevent the issues from occurring in the first place. He is the founder and CEO of Amplified Industries, a company selling oil field monitoring and control tools that help make the industry more efficient and sustainable.

    Amplified Industries’ sensors and analytics give oil well operators real-time alerts when things go wrong, allowing them to respond to issues before they become disasters.

    “We’re able to find 99 percent of the issues affecting these machines, from mechanical failures to human errors, including issues happening thousands of feet underground,” Mannai explains. “With our AI solution, operators can put the wells on autopilot, and the system automatically adjusts or shuts the well down as soon as there’s an issue.”

    Amplified currently works with private companies in states from Texas to Wyoming that own and operate as many as 3,000 wells. Such companies make up the majority of oil well operators in the U.S. and operate both new equipment and older, more failure-prone equipment that has been in the field for decades.

    Such operators also have a harder time responding to environmental regulations like the Environmental Protection Agency’s new methane guidelines, which seek to dramatically reduce emissions of the potent greenhouse gas in the industry over the next few years.

    “These operators don’t want to be releasing methane,” Mannai explains. “Additionally, when gas gets into the pumping equipment, it leads to premature failures. We can detect gas and slow the pump down to prevent it. It’s the best of both worlds: The operators benefit because their machines are working better, saving them money while also giving them a smaller environmental footprint with fewer spills and methane leaks.”

    Leveraging “every MIT resource I possibly could”

    Mannai learned about the cutting-edge technology used in the space and aviation industries as he pursued his master’s degree at the Gas Turbine Laboratory in MIT’s Department of Aeronautics and Astronautics. Then, during his PhD at MIT, he worked with an oil services company and discovered the oil and gas industry was still relying on decades-old technologies and equipment.

    “When I first traveled to the field, I could not believe how old-school the actual operations were,” says Mannai, who has previously worked in rocket engine and turbine factories. “A lot of oil wells have to be adjusted by feel and rules of thumb. The operators have been let down by industrial automation and data companies.”

    Monitoring oil wells for problems typically requires someone in a pickup truck to drive hundreds of miles between wells looking for obvious issues, Mannai says. The sensors that are deployed are expensive and difficult to replace. Over time, they’re also often damaged in the field to the point of being unusable, forcing technicians to make educated guesses about the status of each well.

    “We often see that equipment unplugged or programmed incorrectly because it is incredibly over-complicated and ill-designed for the reality of the field,” Mannai says. “Workers on the ground often have to rip it out and bypass the control system to pump by hand. That’s how you end up with so many spills and wells pumping at suboptimal levels.”

    To build a better oil field monitoring system, Mannai received support from the MIT Sandbox Innovation Fund and the Venture Mentoring Service (VMS). He also participated in the delta V summer accelerator at the Martin Trust Center for MIT Entrepreneurship, the fuse program during IAP, and the MIT I-Corps program, and took a number of classes at the MIT Sloan School of Management. In 2019, Amplified Industries — which operated under the name Acoustic Wells until recently — won the MIT $100K Entrepreneurship competition.

    “My approach was to sign up for every possible entrepreneurship-related program and to leverage every MIT resource I possibly could,” Mannai says. “MIT was amazing for us.”

    Mannai officially launched the company after his postdoc at MIT, and Amplified raised its first round of funding in early 2020. That year, Amplified’s small team moved into the Greentown Labs startup incubator in Somerville.

    Mannai says building the company’s battery-powered, low-cost sensors was a huge challenge. The sensors run machine-learning inference models and their batteries last for 10 years. They also had to be able to handle extreme conditions, from the scorching hot New Mexico desert to the swamps of Louisiana and the freezing cold winters in North Dakota.

    “We build very rugged, resilient hardware; it’s a must in those environments,” Mannai says. “But it’s also very simple to deploy, so if a device does break, it’s like changing a lightbulb: We ship them a new one and it takes them a couple of minutes to swap it out.”

    Customers equip each well with four or five of Amplified’s sensors, which attach to the well’s cables and pipes to measure variables like tension, pressure, and amps. Vast amounts of data are then sent to Amplified’s cloud and processed by its analytics engine. Signal processing methods and AI models diagnose problems and control the equipment in real time, while generating notifications for the operators when something goes wrong. Operators can then remotely adjust the well or shut it down.
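
    To make that pipeline concrete, here is a minimal sketch of the kind of rule-based decision logic such a system might apply to incoming readings. The field names, thresholds, and actions are illustrative assumptions only; Amplified’s actual analytics engine and AI models are proprietary and are not shown here.

    ```python
    # Hypothetical, simplified decision logic for a single well. Field names, thresholds,
    # and actions are illustrative assumptions, not Amplified's actual models.
    from dataclasses import dataclass
    from statistics import mean, stdev
    from typing import List

    @dataclass
    class Reading:
        tension_lbs: float    # rod-string tension
        pressure_psi: float   # line pressure
        current_amps: float   # motor current draw

    def rolling_zscore(history: List[float], value: float) -> float:
        """How many standard deviations `value` sits from the recent mean."""
        if len(history) < 10 or stdev(history) == 0:
            return 0.0
        return (value - mean(history)) / stdev(history)

    def evaluate(history: List[Reading], latest: Reading) -> str:
        """Decide whether to keep pumping, slow the pump, or shut the well in."""
        amps = [r.current_amps for r in history]
        pressure = [r.pressure_psi for r in history]

        # A sudden drop in motor current can indicate gas interference in the pump.
        if rolling_zscore(amps, latest.current_amps) < -3:
            return "SLOW_DOWN"   # reduce pump speed so the gas can clear

        # A pressure spike can precede a leak or mechanical failure.
        if rolling_zscore(pressure, latest.pressure_psi) > 3:
            return "SHUT_IN"     # stop the well and notify the operator

        return "RUN"

    if __name__ == "__main__":
        history = [Reading(5000, 150 + i % 3, 40 + (i % 5) * 0.2) for i in range(20)]
        print(evaluate(history, Reading(5000, 151, 12)))  # -> SLOW_DOWN (current dropped sharply)
    ```

    In a real deployment, learned models tuned to each well would presumably replace these fixed z-score thresholds, but the alert-then-act structure is the same idea described above.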

    “That’s where AI is important, because if you just record everything and put it in a giant dashboard, you create way more work for people,” Mannai says. “The critical part is the ability to process and understand this newly recorded data and make it readily usable in the real world.”

    Amplified’s dashboard is customized for different people in the company, so field technicians can quickly respond to problems and managers or owners can get a high-level view of how everything is running.

    Mannai says that often when Amplified’s sensors are installed, they’ll immediately start detecting problems that were previously unknown to engineers and technicians in the field. To date, Amplified has prevented hundreds of thousands of gallons’ worth of brine water spills, which are particularly damaging to surrounding vegetation because of their high salt and sulfur content.

    Preventing those spills is only part of Amplified’s positive environmental impact; the company is now turning its attention toward the detection of methane leaks.

    Helping a changing industry

    The EPA’s proposed new Waste Emissions Charge for oil and gas companies would start at $900 per metric ton of reported methane emissions in 2024 and increase to $1,500 per metric ton in 2026 and beyond.
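
    As a rough illustration of what those rates imply, the sketch below computes the charge for a given quantity of reported methane. The 2025 rate is an assumed intermediate value (the article quotes only the 2024 and 2026-and-beyond figures), and the real rule charges only methane above facility-level waste thresholds, with exemptions not modeled here.

    ```python
    # Back-of-the-envelope Waste Emissions Charge, using the per-ton rates quoted above.
    def waste_emissions_charge(methane_metric_tons: float, year: int) -> float:
        if year <= 2024:
            rate = 900.0
        elif year == 2025:
            rate = 1200.0   # assumption, not stated in the article
        else:
            rate = 1500.0   # 2026 and beyond
        return methane_metric_tons * rate

    # Example: 100 metric tons of reported methane in 2026 would cost $150,000.
    print(f"${waste_emissions_charge(100, 2026):,.0f}")
    ```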

    Mannai says Amplified is well-positioned to help companies comply with the new rules. Its equipment has already shown it can detect various kinds of leaks across the field, purely based on analytics of existing data.

    “Detecting methane leaks typically requires someone to walk around every valve and piece of piping with a thermal camera or sniffer, but these operators often have thousands of valves and hundreds of miles of pipes,” Mannai says. “What we see in the field is that a lot of times people don’t know where the pipes are because oil wells change owners so frequently, or they will miss an intermittent leak.”

    Ultimately, Mannai believes a strong data backend and modernized sensing equipment will become the backbone of the industry, and that they are necessary prerequisites to both improving efficiency and cleaning up operations.

    “We’re selling a service that ensures your equipment is working optimally all the time,” Mannai says. “That means a lot fewer fines from the EPA, but it also means better-performing equipment. There’s a mindset change happening across the industry, and we’re helping make that transition as easy and affordable as possible.”

  • in

    Understanding the impacts of mining on local environments and communities

    Hydrosocial displacement refers to the idea that resolving water conflict in one area can shift the conflict to a different area. The concept was coined by Scott Odell, a visiting researcher in MIT’s Environmental Solutions Initiative (ESI). As part of ESI’s Program on Mining and the Circular Economy, Odell researches the impacts of extractive industries on local environments and communities, especially in Latin America. He has found that hydrosocial displacement often occurs in regions where the mining industry is vying for precious water sources that are already stressed by climate change.

    Odell is working with John Fernández, ESI director and professor in the Department of Architecture, on a project that is examining the converging impacts of climate change, mining, and agriculture in Chile. The work is funded by a seed grant from MIT’s Abdul Latif Jameel Water and Food Systems Lab (J-WAFS). Specifically, the project seeks to answer how the expansion of seawater desalination by the mining industry is affecting local populations, and how climate change and mining affect Andean glaciers and the agricultural communities dependent upon them.

    By working with communities in mining areas, Odell and Fernández are gaining a sense of the burden that mining minerals needed for the clean energy transition is placing on local populations, and the types of conflicts that arise when water sources become polluted or scarce. This work is of particular importance considering over 100 countries pledged a commitment to the clean energy transition at the recent United Nations climate change conference, known as COP28.

    Video: J-WAFS Community Spotlight on Scott Odell

    Water, humanity’s lifeblood

    At the March 2023 United Nations (U.N.) Water Conference in New York, U.N. Secretary-General António Guterres warned “water is in deep trouble. We are draining humanity’s lifeblood through vampiric overconsumption and unsustainable use and evaporating it through global heating.” A quarter of the world’s population already faces “extremely high water stress,” according to the World Resources Institute. In an effort to raise awareness of major water-related issues and inspire action for innovative solutions, the U.N. created World Water Day, observed every year on March 22. This year’s theme is “Water for Peace,” underscoring the fact that even though water is a basic human right and intrinsic to every aspect of life, it is increasingly fought over as supplies dwindle due to problems including drought, overuse, and mismanagement.

    The “Water for Peace” theme is exemplified in Fernández and Odell’s J-WAFS project, whose findings are intended to inform policies that reduce the social and environmental harms inflicted on mining communities and their limited water sources.

    “Despite broad academic engagement with mining and climate change separately, there has been a lack of analysis of the societal implications of the interactions between mining and climate change,” says Odell. “This project is helping to fill the knowledge gap. Results will be summarized in Spanish and English and distributed to interested and relevant parties in Chile, ensuring that the results can be of benefit to those most impacted by these challenges,” he adds.

    The effects of mining for the clean energy transition

    Global climate change is understood to be the most pressing environmental issue facing humanity today. Mitigating climate change requires reducing carbon emissions by transitioning away from conventional energy derived from burning fossil fuels, to more sustainable energy sources like solar and wind power. Because copper is an excellent conductor of electricity, it will be a crucial element in the clean energy transition, in which more solar panels, wind turbines, and electric vehicles will be manufactured. “We are going to see a major increase in demand for copper due to the clean energy transition,” says Odell.

    In 2021, Chile produced 26 percent of the world’s copper, more than twice as much as any other country, Odell explains. Much of Chile’s mining is concentrated in and around the Atacama Desert — the world’s driest desert. Unfortunately, mining requires large amounts of water for a variety of processes, including controlling dust at the extraction site, cooling machinery, and processing and transporting ore.

    Chile is also one of the world’s largest exporters of agricultural products. Farmland is typically situated in the valleys downstream of several mines in the high Andes region, meaning mines get first access to water. This can lead to water conflict between mining operations and agricultural communities. Compounding the problem of mining for greener energy materials is climate change itself. According to the Chilean government, the country has suffered 13 years of the worst drought in its history. While this is detrimental to the mining industry, it is also concerning for those working in agriculture, including the Indigenous Atacameño communities that live closest to the Escondida mine, the largest copper mine in the world. “There was never a lot of water to go around, even before the mine,” Odell says. The addition of Escondida stresses an already strained water system, leaving Atacameño farmers and individuals vulnerable to severe water insecurity.

    What’s more, waste from mining, known as tailings, includes minerals and chemicals that can contaminate water in nearby communities if not properly handled and stored. Odell says the secure storage of tailings is a high priority in earthquake-prone Chile. “If an earthquake were to hit and damage a tailings dam, it could mean toxic materials flowing downstream and destroying farms and communities,” he says.

    Chile’s treasured glaciers are another piece of the mining, climate change, and agricultural puzzle. Caroline White-Nockleby, a PhD candidate in MIT’s Program in Science, Technology, and Society, is working with Odell and Fernández on the J-WAFS project and leading the research specifically on glaciers. “These may not be the picturesque bright blue glaciers that you might think of, but they are, nonetheless, an important source of water downstream,” says White-Nockleby. She goes on to explain that there are a few different ways that mines can impact glaciers.

    In some cases, mining companies have proposed to move or even destroy glaciers to get at the ore beneath. Other impacts include dust from mining that falls on glaciers. White-Nockleby says, “this makes the glaciers a darker color, so, instead of reflecting the sun’s rays away, [the glacier] may absorb the heat and melt faster.” This shows that even when not directly intervening with glaciers, mining activities can cause glacial decline, adding to the threat glaciers already face due to climate change. She also notes that “glaciers are an important water storage facility,” describing how, on an annual cycle, glaciers freeze and melt, allowing runoff that downstream agricultural communities can utilize. If glaciers suddenly melt too quickly, flooding of downstream communities can occur.

    Desalination offers a possible, but imperfect, solution

    Chile’s extensive coastline makes it uniquely positioned to utilize desalination — the removal of salts from seawater — to address water insecurity. Odell says that “over the last decade or so, there’s been billions of dollars of investments in desalination in Chile.”

    As part of his dissertation work at Clark University, Odell found broad optimism in Chile for solving water issues in the mining industry through desalination. Not only was the mining industry committed to building desalination plants, there was also political support, as well as support from some members of highland communities near the mines. Yet, despite the optimism and investment, desalinated water was not replacing the use of continental water. He concluded that “desalination can’t solve water conflict if it doesn’t reduce demand for continental water supplies.”

    However, after publishing those results, Odell learned that new estimates at the national level showed that desalination operations had begun to replace the use of continental water after 2018. In two case studies that he currently focuses on — the Escondida and Los Pelambres copper mines — the mining companies have expanded their desalination objectives in order to reduce extraction from key continental sources. This seems to be due to a variety of factors. For one thing, in 2022, Chile’s water code was reformed to prioritize human water consumption and environmental protection of water during scarcity and in the allocation of future rights. It also shortened the granting of water rights from “in perpetuity” to 30 years. Under this new code, it is possible that the mining industry may have expanded its desalination efforts because it viewed continental water resources as less secure, Odell surmises.

    As part of the J-WAFS project, Odell has found that recent reactions have been mixed when it comes to the rapid increase in the use of desalination. He spent over two months doing fieldwork in Chile by conducting interviews with members of government, industry, and civil society at the Escondida, Los Pelambres, and Andina mining sites, as well as in Chile’s capital city, Santiago. He has spoken to local and national government officials, leaders of fishing unions, representatives of mining and desalination companies, and farmers. He observed that in the communities where the new desalination plants are being built, there have been concerns from community members as to whether they will get access to the desalinated water, or if it will belong solely to the mines.

    Interviews at the Escondida and Los Pelambres sites, in which desalination operations are already in place or under construction, indicate acceptance of the presence of desalination plants combined with apprehension about unknown long-term environmental impacts. At a third mining site, Andina, there have been active protests against a desalination project that would supply water to a neighboring mine, Los Bronces. In that community, there has been a blockade of the desalination operation by the fishing federation. “They were blockading that operation for three months because of concerns over what the desalination plant would do to their fishing grounds,” Odell says. And this is where the idea of hydrosocial displacement comes into the picture, he explains. Even though desalination operations are easing tensions with highland agricultural communities, new issues are arising for the communities on the coast. “We can’t just look to desalination to solve our problems if it’s going to create problems somewhere else,” Odell advises.

    Within the process of hydrosocial displacement, interacting geographical, technical, economic, and political factors constrain the range of responses to address the water conflict. For example, communities that have more political and financial power tend to be better equipped to solve water conflict than less powerful communities. In addition, hydrosocial concerns usually follow the flow of water downstream, from the highlands to coastal regions. Odell says that this raises the need to look at water from a broader perspective.

    “We tend to address water concerns one by one and that can, in practice, end up being kind of like whack-a-mole,” says Odell. “When we think of the broader hydrological system, water is very much linked, and we need to look across the watershed. We can’t just be looking at the specific community affected now, but who else is affected downstream, and will be affected in the long term. If we do solve a water issue by moving it somewhere else, like moving a tailings dam somewhere else, or building a desalination plant, resources are needed in the receiving community to respond to that,” suggests Odell.

    The company building the desalination plant and the fishing federation ultimately reached an agreement and the desalination operation will be moving forward. But Odell notes, “the protest highlights concern about the impacts of the operation on local livelihoods and environments within the much larger context of industrial pollution in the area.”

    The power of communities

    The protest by the fishing federation is one example of communities coming together to have their voices heard. Recent proposals by mining companies that would affect glaciers and other water sources used by agricultural communities have led to other protests, which resulted in new agreements to protect local water supplies and the withdrawal of some of the mining proposals.

    Odell observes that communities have also gone to the courts to raise their concerns. The Atacameño communities, for example, have drawn attention to over-extraction of water resources by the Escondida mine. “Community members are also pursuing education in these topics so that there’s not such a power imbalance between mining companies and local communities,” Odell remarks. This demonstrates the power local communities can have to protect continental water resources.

    The political and social landscape of Chile may also be changing in favor of local communities. Beginning with what is now referred to as the Estallido Social (social outburst) over inequality in 2019, Chile has undergone social upheaval that resulted in voters calling for a new constitution. Gabriel Boric, a progressive candidate whose top priorities include social and environmental issues, was elected president during this period. These trends have brought major attention to issues of economic inequality, the environmental harms of mining, and environmental justice, which is putting pressure on the mining industry to make a case for its operations in the country and to justify the environmental costs of mining.

    What happens after the mine dries up?

    From his fieldwork interviews, Odell has learned that the development of mines within communities can offer benefits. Mining companies typically invest directly in communities through employment, road construction, and sometimes even by building or investing in schools, stadiums, or health clinics. Indirectly, mines can have spillover effects in the economy since miners might support local restaurants, hotels, or stores. But what happens when the mine closes? As one community member Odell interviewed stated: “When the mine is gone, what are we going to have left besides a big hole in the ground?”

    Odell suggests that a multi-pronged approach should be taken to address the future state of water and mining. First, he says we need to have broader conversations about the nature of our consumption and production at domestic and global scales. “Mining is driven indirectly by our consumption of energy and directly by our consumption of everything from our buildings to devices to cars,” Odell states. “We should be looking for ways to moderate our consumption and consume smarter through both policy and practice so that we don’t solve climate change while creating new environmental harms through mining.”

    One of the main ways we can do this is by advancing the circular economy: recycling metals already in the system, or even in landfills, to help build our new clean energy infrastructure. Even so, the clean energy transition will still require mining, but according to Odell, that mining can be done better. “Mining companies and government need to do a better job of consulting with communities. We need solid plans and financing for mine closures in place from the beginning of mining operations, so that when the mine dries up, there’s the money needed to secure tailings dams and protect the communities who will be there forever,” Odell concludes.

    Overall, it will take an engaged society — from the mining industry to government officials to individuals — to think critically about the role we each play in our quest for a more sustainable planet, and what that might mean for the most vulnerable populations among us.

  • in

    Study finds lands used for grazing can worsen or help climate change

    When it comes to global climate change, livestock grazing can be either a blessing or a curse, according to a new study, which offers clues on how to tell the difference.

    If managed properly, the study shows, grazing can actually increase the amount of carbon from the air that gets stored in the ground and sequestered for the long run. But if there is too much grazing, soil erosion can result, and the net effect is to cause more carbon losses, so that the land becomes a net carbon source, instead of a carbon sink. And the study found that the latter is far more common around the world today.

    The new work, published today in the journal Nature Climate Change, provides ways to determine the tipping point between the two for grazing lands in a given climate zone and soil type. It also provides an estimate of the total amount of carbon that has been lost over past decades due to livestock grazing, and how much could be removed from the atmosphere if optimized grazing management were implemented. The study was carried out by Cesar Terrer, an assistant professor of civil and environmental engineering at MIT; Shuai Ren, a PhD student at the Chinese Academy of Sciences whose thesis is co-supervised by Terrer; and four others.

    “This has been a matter of debate in the scientific literature for a long time,” Terrer says. “In general, in experiments, grazing decreases soil carbon stocks, but surprisingly, sometimes grazing increases soil carbon stocks, which is why it’s been puzzling.”

    What happens, he explains, is that “grazing could stimulate vegetation growth through easing resource constraints such as light and nutrients, thereby increasing root carbon inputs to soils, where carbon can stay for centuries or millennia.”

    But that only works up to a certain point, the team found after a careful analysis of 1,473 soil carbon observations from different grazing studies from many locations around the world. “When you cross a threshold in grazing intensity, or the amount of animals grazing there, that is when you start to see sort of a tipping point — a strong decrease in the amount of carbon in the soil,” Terrer explains.

    That loss is thought to be primarily from increased soil erosion on the denuded land. And with that erosion, Terrer says, “basically you lose a lot of the carbon that you have been locking in for centuries.”

    The various studies the team compiled, although they differed somewhat, essentially used a similar methodology: fence off a portion of land so that livestock can’t access it, then after some time take soil samples from within the enclosure and from comparable nearby areas that have been grazed, and compare their carbon content.

    “Along with the data on soil carbon for the control and grazed plots,” he says, “we also collected a bunch of other information, such as the mean annual temperature of the site, mean annual precipitation, plant biomass, and properties of the soil, like pH and nitrogen content. And then, of course, we estimate the grazing intensity — aboveground biomass consumed, because that turns out to be the key parameter.”  
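
    For a concrete picture of how one such paired comparison might be boiled down to a single number per site, here is a minimal sketch using the log response ratio, a common meta-analysis effect size. The site values are hypothetical, and this is not the study’s actual code.

    ```python
    # Minimal sketch (not the study's code) of reducing one paired exclosure-vs-grazed
    # comparison to a standard meta-analysis effect size: the log response ratio of
    # soil carbon stocks. The site values below are hypothetical.
    import math

    def log_response_ratio(soil_c_grazed: float, soil_c_exclosure: float) -> float:
        """ln(grazed / ungrazed); negative values mean grazing reduced soil carbon."""
        return math.log(soil_c_grazed / soil_c_exclosure)

    # Hypothetical site: 42 t C/ha inside the fenced exclosure, 36 t C/ha on the grazed side.
    print(round(log_response_ratio(36.0, 42.0), 3))   # -> -0.154, i.e., roughly 14 percent lower
    ```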

    With artificial intelligence models, the authors quantified the importance of each of these parameters — temperature, precipitation, and soil properties — in modulating the sign (positive or negative) and magnitude of the impact of grazing on soil carbon stocks. “Interestingly, we found soil carbon stocks increase and then decrease with grazing intensity, rather than the expected linear response,” says Ren.
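
    That hump-shaped response can be illustrated with a toy curve fit. The sketch below uses synthetic data and a simple quadratic, not the study’s machine-learning models or observations, to show how a peak (the “tipping point”) in soil carbon change versus grazing intensity might be located.

    ```python
    # Illustrative only: synthetic data and a quadratic fit showing a hump-shaped
    # response of soil carbon change to grazing intensity, and the location of its peak.
    import numpy as np

    rng = np.random.default_rng(0)
    intensity = rng.uniform(0, 1, 300)                 # fraction of aboveground biomass consumed
    signal = 5 * intensity - 9 * intensity**2          # synthetic: rises, peaks, then declines
    delta_soil_c = signal + rng.normal(0, 0.5, 300)    # change in soil carbon (t C/ha), with noise

    # Fit delta_soil_c ~ a*I^2 + b*I + c and find where the fitted curve peaks.
    a, b, c = np.polyfit(intensity, delta_soil_c, deg=2)
    optimal_intensity = -b / (2 * a)
    print(f"fitted optimum near intensity = {optimal_intensity:.2f}")  # ~0.28 for this synthetic example
    ```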

    Having developed the model through AI methods and validated it, including by comparing its predictions with those based on underlying physical principles, they can then apply the model to estimate both past and future effects. “In this case,” Terrer says, “we use the model to quantify the historical losses in soil carbon stocks from grazing. And we found that 46 petagrams [billion metric tons] of soil carbon, down to a depth of one meter, have been lost in the last few decades due to grazing.”

    By way of comparison, total carbon emissions from all fossil fuels are about 10 petagrams per year, so the loss from grazing equals more than four years’ worth of all the world’s fossil emissions combined.
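
    The comparison works out roughly as follows, treating both figures as carbon rather than CO2:

    $$\frac{46\ \text{Pg C lost to grazing}}{\sim 10\ \text{Pg C per year from fossil fuels}} \approx 4.6\ \text{years}$$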

    What they found was “an overall decline in soil carbon stocks, but with a lot of variability,” Terrer says. The analysis showed that the interplay between grazing intensity and environmental conditions such as temperature could explain the variability, with higher grazing intensity and hotter climates resulting in greater carbon loss. “This means that policy-makers should take into account local abiotic and biotic factors to manage rangelands efficiently,” Ren notes. “By ignoring such complex interactions, we found that using IPCC [Intergovernmental Panel on Climate Change] guidelines would underestimate grazing-induced soil carbon loss by a factor of three globally.”

    Using an approach that incorporates local environmental conditions, the team produced global, high-resolution maps of optimal grazing intensity and the threshold of intensity at which carbon starts to decrease very rapidly. These maps are expected to serve as important benchmarks for evaluating existing grazing practices and provide guidance to local farmers on how to effectively manage their grazing lands.

    Then, using that map, the team estimated how much carbon could be captured if all grazing lands were limited to their optimum grazing intensity. Currently, the authors found, about 20 percent of all pasturelands have crossed the thresholds, leading to severe carbon losses. However, they found that under the optimal levels, global grazing lands would sequester 63 petagrams of carbon. “It is amazing,” Ren says. “This value is roughly equivalent to a 30-year carbon accumulation from global natural forest regrowth.”

    That would be no simple task, of course. To achieve optimal levels, the team found that approximately 75 percent of all grazing areas need to reduce grazing intensity. Overall, if the world seriously reduces the amount of grazing, “you have to reduce the amount of meat that’s available for people,” Terrer says.

    “Another option is to move cattle around,” he says, “from areas that are more severely affected by grazing intensity, to areas that are less affected. Those rotations have been suggested as an opportunity to avoid the more drastic declines in carbon stocks without necessarily reducing the availability of meat.”

    This study didn’t delve into these social and economic implications, Terrer says. “Our role is to just point out what would be the opportunity here. It shows that shifts in diets can be a powerful way to mitigate climate change.”

    “This is a rigorous and careful analysis that provides our best look to date at soil carbon changes due to livestock grazing practiced worldwide,” says Ben Bond-Lamberty, a terrestrial ecosystem research scientist at Pacific Northwest National Laboratory, who was not associated with this work. “The authors’ analysis gives us a unique estimate of soil carbon losses due to grazing and, intriguingly, where and how the process might be reversed.”

    He adds: “One intriguing aspect to this work is the discrepancies between its results and the guidelines currently used by the IPCC — guidelines that affect countries’ commitments, carbon-market pricing, and policies.” However, he says, “As the authors note, the amount of carbon historically grazed soils might be able to take up is small relative to ongoing human emissions. But every little bit helps!”

    “Improved management of working lands can be a powerful tool to combat climate change,” says Jonathan Sanderman, carbon program director of the Woodwell Climate Research Center in Falmouth, Massachusetts, who was not associated with this work. He adds, “This work demonstrates that while, historically, grazing has been a large contributor to climate change, there is significant potential to decrease the climate impact of livestock by optimizing grazing intensity to rebuild lost soil carbon.”

    Terrer states that for now, “we have started a new study, to evaluate the consequences of shifts in diets for carbon stocks. I think that’s the million-dollar question: How much carbon could you sequester, compared to business as usual, if diets shift to more vegan or vegetarian?” The answers will not be simple, because a shift to more vegetable-based diets would require more cropland, which can also have different environmental impacts. Pastures take more land than crops, but produce different kinds of emissions. “What’s the overall impact for climate change? That is the question we’re interested in,” he says.

    The research team included Juan Li, Yingfao Cao, Sheshan Yang, and Dan Liu, all with the Chinese Academy of Sciences. The work was supported by the Second Tibetan Plateau Scientific Expedition and Research Program, and the Science and Technology Major Project of Tibetan Autonomous Region of China.