More stories

  • Ancient Amazonians intentionally created fertile “dark earth”

    The Amazon river basin is known for its immense and lush tropical forests, so one might assume that the Amazon’s land is equally rich. In fact, the soils underlying the forested vegetation, particularly in the hilly uplands, are surprisingly infertile. Much of the Amazon’s soil is acidic and low in nutrients, making it notoriously difficult to farm.

    But over the years, archaeologists have dug up mysteriously black and fertile patches of ancient soils in hundreds of sites across the Amazon. This “dark earth” has been found in and around human settlements dating back hundreds to thousands of years. And it has been a matter of some debate as to whether the super-rich soil was purposefully created or a coincidental byproduct of these ancient cultures.

    Now, a study led by researchers at MIT, the University of Florida, and institutions in Brazil aims to settle the debate over dark earth’s origins. The team has pieced together results from soil analyses, ethnographic observations, and interviews with modern Indigenous communities to show that dark earth was intentionally produced by ancient Amazonians as a way to improve the soil and sustain large and complex societies.

    “If you want to have large settlements, you need a nutritional base. But the soil in the Amazon is extensively leached of nutrients, and naturally poor for growing most crops,” says Taylor Perron, the Cecil and Ida Green Professor of Earth, Atmospheric and Planetary Sciences at MIT. “We argue here that people played a role in creating dark earth, and intentionally modified the ancient environment to make it a better place for human populations.”

    And as it turns out, dark earth contains huge amounts of stored carbon. As generations worked the soil, for instance by enriching it with scraps of food, charcoal, and waste, the earth accumulated the carbon-rich detritus and kept it locked up for hundreds to thousands of years. By purposely producing dark earth, then, early Amazonians may have also unintentionally created a powerful, carbon-sequestering soil.

    “The ancient Amazonians put a lot of carbon in the soil, and a lot of that is still there today,” says co-author Samuel Goldberg, who performed the data analysis as a graduate student at MIT and is now an assistant professor at the University of Miami. “That’s exactly what we want for climate change mitigation efforts. Maybe we could adapt some of their Indigenous strategies on a larger scale, to lock up carbon in soil, in ways that we now know would stay there for a long time.”

    The team’s study appears today in Science Advances. Other authors include former MIT postdoc and lead author Morgan Schmidt, anthropologist Michael Heckenberger of the University of Florida, and collaborators from multiple institutions across Brazil.

    Modern intent

    In their current study, the team synthesized observations and data that Schmidt, Heckenberger, and others had previously gathered while working with Indigenous communities in the Amazon since the early 2000s, along with new data collected in 2018-19. The scientists focused their fieldwork in the Kuikuro Indigenous Territory in the Upper Xingu River basin in the southeastern Amazon. This region is home to modern Kuikuro villages as well as archaeological sites where the ancestors of the Kuikuro are thought to have lived. Over multiple visits to the region, Schmidt, then a graduate student at the University of Florida, was struck by the darker soil around some archaeological sites.

    “When I saw this dark earth and how fertile it was, and started digging into what was known about it, I found it was a mysterious thing — no one really knew where it came from,” he says.

    Schmidt and his colleagues began making observations of the modern Kuikuro’s practices of managing the soil. These practices include generating “middens” — piles of waste and food scraps, similar to compost heaps, that are maintained in certain locations around the center of a village. After some time, these waste piles decompose and mix with the soil to form a dark and fertile earth that residents then use to plant crops. The researchers also observed that Kuikuro farmers spread organic waste and ash on outlying fields, which also generates dark earth where they can then grow more crops.

    “We saw activities they did to modify the soil and increase the elements, like spreading ash on the ground, or spreading charcoal around the base of the tree, which were obviously intentional actions,” Schmidt says.

    In addition to these observations, they also conducted interviews with villagers to document the Kuikuro’s beliefs and practices relating to dark earth. In some of these interviews, villagers referred to dark earth as “eegepe,” and described their daily practices in creating and cultivating the rich soil to improve its agricultural potential.

    Based on these observations and interviews with the Kuikuro, it was clear that Indigenous communities today intentionally produce dark earth, through their practices to improve the soil. But could the dark earth found in nearby archaeological sites have been made through similar intentional practices?

    A bridge in soil

    In search of a connection, Schmidt joined Perron’s group as a postdoc at MIT. Together, he, Perron, and Goldberg carried out a meticulous analysis of soils in both archaeological and modern sites in the Upper Xingu region. They discovered similarities in dark earth’s spatial structure: Deposits of dark earth were found in a radial pattern, concentrating mostly in the center of both modern and ancient settlements, and stretching, like spokes of a wheel, out to the edges. Modern and ancient dark earth was also similar in composition, and was enriched in the same elements, such as carbon, phosphorus, and other nutrients.

    “These are all the elements that are in humans, animals, and plants, and they’re the ones that reduce the aluminum toxicity in soil, which is a notorious problem in the Amazon,” Schmidt says. “All these elements make the soil better for plant growth.”

    “The key bridge between the modern and ancient times is the soil,” Goldberg adds. “Because we see this correspondence between the two time periods, we can infer that these practices that we can observe and ask people about today, were also happening in the past.”

    In other words, the team was able to show for the first time that ancient Amazonians intentionally worked the soil, likely through practices similar to today’s, in order to grow enough crops to sustain large communities.

    Going a step further, the team calculated the amount of carbon in ancient dark earth. They combined their measurements of soil samples with maps of where dark earth has been found throughout several ancient settlements. Their estimates revealed that each ancient village contains several thousand tons of carbon that has been sequestered in the soil for hundreds of years as a result of Indigenous human activities.
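
    To see how such an estimate is assembled, here is a back-of-the-envelope version of the calculation: the area of dark earth times its depth, bulk density, and carbon fraction. Every number in the sketch is an illustrative assumption, not a measurement from the study.

    ```python
    # Back-of-the-envelope carbon stock for a patch of dark earth.
    # All numbers below are illustrative assumptions, not values from the study.

    def carbon_stock_tons(area_m2, depth_m, bulk_density_kg_m3, carbon_fraction):
        """Carbon mass (metric tons) = soil volume x bulk density x carbon fraction."""
        soil_mass_kg = area_m2 * depth_m * bulk_density_kg_m3
        return soil_mass_kg * carbon_fraction / 1000.0  # kg -> metric tons

    # Hypothetical example: 20 hectares of dark earth, 0.5 m deep,
    # bulk density 1,200 kg/m^3, 2 percent carbon by mass.
    print(carbon_stock_tons(area_m2=20 * 10_000, depth_m=0.5,
                            bulk_density_kg_m3=1200, carbon_fraction=0.02))
    # -> 2400.0 metric tons of carbon, i.e., "several thousand tons" per settlement
    ```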

    As the team concludes in their paper, “modern sustainable agriculture and climate change mitigation efforts, inspired by the persistent fertility of ancient dark earth, can draw on traditional methods practiced to this day by Indigenous Amazonians.”

    This research at MIT was supported, in part, by the Abdul Latif Jameel Water and Food Systems Lab and the Department of the Air Force Artificial Intelligence Accelerator. Field research was supported by grants to the University of Florida from the National Science Foundation, the Wenner-Gren Foundation and the William Talbott Hillman Foundation, and was sponsored in Brazil by the Museu Goeldi and Museu Nacional.

  • How to tackle the global deforestation crisis

    Imagine if France, Germany, and Spain were completely blanketed in forests — and then all those trees were quickly chopped down. That’s nearly the amount of deforestation that occurred globally between 2001 and 2020, with profound consequences.

    Deforestation is a major contributor to climate change, producing between 6 and 17 percent of global greenhouse gas emissions, according to a 2009 study. Meanwhile, because trees also absorb carbon dioxide, removing it from the atmosphere, they help keep the Earth cooler. And climate change aside, forests protect biodiversity.

    “Climate change and biodiversity make this a global problem, not a local problem,” says MIT economist Ben Olken. “Deciding to cut down trees or not has huge implications for the world.”

    But deforestation is often financially profitable, so it continues at a rapid rate. Researchers can now measure this trend closely: In the last quarter-century, satellite-based technology has led to a paradigm change in charting deforestation. New deforestation datasets based on the Landsat satellites, for instance, track forest change since 2000 at 30-meter resolution, while many other products now offer frequent imaging at fine resolution.
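
    For a sense of scale, each 30-meter Landsat pixel covers 900 square meters, or 0.09 hectares, so counts of flagged pixels convert directly into deforested area. A minimal sketch (the pixel count is hypothetical):

    ```python
    # Converting Landsat-style forest-loss pixel counts into area.
    # A 30 m x 30 m pixel covers 900 m^2 = 0.09 hectares.

    PIXEL_AREA_HA = 30 * 30 / 10_000  # 0.09 ha per pixel

    def loss_area_ha(loss_pixel_count: int) -> float:
        return loss_pixel_count * PIXEL_AREA_HA

    # Hypothetical tile with 1.5 million flagged pixels:
    print(f"{loss_area_ha(1_500_000):,.0f} ha")  # 135,000 ha
    ```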

    “Part of this revolution in measurement is accuracy, and the other part is coverage,” says Clare Balboni, an assistant professor of economics at the London School of Economics (LSE). “On-site observation is very expensive and logistically challenging, and you’re talking about case studies. These satellite-based data sets just open up opportunities to see deforestation at scale, systematically, across the globe.”

    Balboni and Olken have now helped write a new paper providing a road map for thinking about this crisis. The open-access article, “The Economics of Tropical Deforestation,” appears this month in the Annual Review of Economics. The co-authors are Balboni, a former MIT faculty member; Aaron Berman, a PhD candidate in MIT’s Department of Economics; Robin Burgess, an LSE professor; and Olken, MIT’s Jane Berkowitz Carlton and Dennis William Carlton Professor of Microeconomics. Balboni and Olken have also conducted primary research in this area, along with Burgess.

    So, how can the world tackle deforestation? It starts with understanding the problem.

    Replacing forests with farms

    Several decades ago, some thinkers, including the famous MIT economist Paul Samuelson in the 1970s, built models to study forests as a renewable resource; Samuelson calculated the “maximum sustained yield” at which a forest could be cleared while being regrown. These frameworks were designed to think about tree farms or the U.S. national forest system, where a fraction of trees would be cut each year, and then new trees would be grown over time to take their place.
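
    The flavor of those frameworks can be captured with the textbook logistic-growth model (a standard illustration under assumed parameters, not Samuelson's own formulation): a stock that regrows at rate r toward carrying capacity K can sustain a harvest of at most rK/4 per year, achieved by holding the stock at K/2.

    ```python
    # Maximum sustained yield under textbook logistic growth.
    # Stock S grows as dS/dt = r*S*(1 - S/K); a constant harvest H is
    # sustainable when H equals growth, which peaks at S = K/2,
    # giving the maximum sustained yield H* = r*K/4.

    r = 0.05         # assumed intrinsic regrowth rate (per year)
    K = 1_000_000.0  # assumed carrying capacity (cubic meters of timber)

    msy = r * K / 4
    print(f"Max sustained yield: {msy:,.0f} m^3/year at a standing stock of {K/2:,.0f} m^3")
    ```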

    But deforestation today, particularly in tropical areas, often looks very different, and forest regeneration is not common.

    Indeed, as Balboni and Olken emphasize, deforestation is now rampant partly because the profits from chopping down trees come not just from timber, but from replacing forests with agriculture. In Brazil, deforestation has increased along with agricultural prices; in Indonesia, clearing trees accelerated as the global price of palm oil went up, leading companies to replace forests with palm tree orchards.

    All this tree-clearing creates a familiar situation: The globally shared costs of climate change from deforestation are “externalities,” as economists say, imposed on everyone else by the people removing forest land. It is akin to a company that pollutes a river, degrading the water quality for nearby residents.

    “Economics has changed the way it thinks about this over the last 50 years, and two things are central,” Olken says. “The relevance of global externalities is very important, and the conceptualization of alternate land uses is very important.” This also means traditional forest-management guidance about regrowth is not enough. With the economic dynamics in mind, which policies might work, and why?

    The search for solutions

    As Balboni and Olken note, economists often recommend “Pigouvian” taxes (named after the British economist Arthur Pigou) in these cases, levied against people imposing externalities on others. And yet, it can be hard to identify who is doing the deforesting.

    Instead of taxing people for clearing forests, governments can pay people to keep forests intact. The UN uses Payments for Environmental Services (PES) as part of its REDD+ (Reducing Emissions from Deforestation and forest Degradation) program. However, it is similarly tough to identify the optimal landowners to subsidize, and these payments may not match the quick cash-in of deforestation. A 2017 study in Uganda showed PES reduced deforestation somewhat; a 2022 study in Indonesia found no reduction; another 2022 study, in Brazil, showed again that some forest protection resulted.

    “There’s mixed evidence from many of these [studies],” Balboni says. These policies, she notes, must reach people who would otherwise clear forests, and a key question is, “How can we assess their success compared to what would have happened anyway?”

    Some places have tried cash transfer programs for larger populations. In Indonesia, a 2020 study found such subsidies reduced deforestation near villages by 30 percent. But in Mexico, a similar program meant more people could afford milk and meat, again creating demand for more agriculture and thus leading to more forest-clearing.

    At this point, it might seem that laws simply banning deforestation in key areas would work best — indeed, about 16 percent of the world’s land overall is protected in some way. Yet the dynamics of protection are tricky. Even with protected areas in place, there is still “leakage” of deforestation into other regions. 

    Still more approaches exist, including “nonstate agreements,” such as the Amazon Soy Moratorium in Brazil, in which grain traders pledged not to buy soy from deforested lands, and reduced deforestation without “leakage.”

    Also, intriguingly, a 2008 policy change in the Brazilian Amazon made agricultural credit harder to obtain by requiring recipients to comply with environmental and land registration rules. The result? Deforestation dropped by up to 60 percent over nearly a decade. 

    Politics and pulp

    Overall, Balboni and Olken observe, beyond “externalities,” two major challenges exist. One, it is often unclear who holds property rights in forests. In these circumstances, deforestation seems to increase. Two, deforestation is subject to political battles.

    For instance, as economist Bard Harstad of Stanford University has observed, environmental lobbying is asymmetric. Balboni and Olken write: “The conservationist lobby must pay the government in perpetuity … while the deforestation-oriented lobby need pay only once to deforest in the present.” And political instability leads to more deforestation because “the current administration places lower value on future conservation payments.”

    Even so, national political measures can work. In the Amazon from 2001 to 2005, Brazilian deforestation rates were three to four times higher than on similar land across the border, but that imbalance vanished once the country passed conservation measures in 2006. However, deforestation ramped up again after a 2014 change in government. Looking at particular monitoring approaches, a study of Brazil’s satellite-based Real-Time System for Detection of Deforestation (DETER), launched in 2004, suggests that a 50 percent annual increase in its use in municipalities created a 25 percent reduction in deforestation from 2006 to 2016.

    How precisely politics matters may depend on the context. In a 2021 paper, Balboni and Olken (with three colleagues) found that deforestation actually decreased around elections in Indonesia. Conversely, in Brazil, one study found that deforestation rates were 8 to 10 percent higher where mayors were running for re-election between 2002 and 2012, suggesting incumbents had deforestation industry support.

    “The research there is aiming to understand what the political economy drivers are,” Olken says, “with the idea that if you understand those things, reform in those countries is more likely.”

    Looking ahead, Balboni and Olken also suggest that new research estimating the value of intact forest land could influence public debates. And while many scholars have studied deforestation in Brazil and Indonesia, fewer have examined the Democratic Republic of Congo, another deforestation leader, and sub-Saharan Africa.

    Deforestation is an ongoing crisis. But thanks to satellites and many recent studies, experts know vastly more about the problem than they did a decade or two ago, and with an economics toolkit, can evaluate the incentives and dynamics at play.

    “To the extent that there’s ambiguity across different contexts with different findings, part of the point of our review piece is to draw out common themes — the important considerations in determining which policy levers can [work] in different circumstances,” Balboni says. “That’s a fast-evolving area. We don’t have all the answers, but part of the process is bringing together growing evidence about [everything] that affects how successful those choices can be.”

  • Pixel-by-pixel analysis yields insights into lithium-ion batteries

    By mining data from X-ray images, researchers at MIT, Stanford University, the SLAC National Accelerator Laboratory, and the Toyota Research Institute have made significant new discoveries about the reactivity of lithium iron phosphate, a material used in batteries for electric cars and in other rechargeable batteries.

    The new technique has revealed several phenomena that were previously impossible to see, including variations in the rate of lithium intercalation reactions in different regions of a lithium iron phosphate nanoparticle.

    The paper’s most significant practical finding — that these variations in reaction rate are correlated with differences in the thickness of the carbon coating on the surface of the particles — could lead to improvements in the efficiency of charging and discharging such batteries.

    “What we learned from this study is that it’s the interfaces that really control the dynamics of the battery, especially in today’s modern batteries made from nanoparticles of the active material. That means that our focus should really be on engineering that interface,” says Martin Bazant, the E.G. Roos Professor of Chemical Engineering and a professor of mathematics at MIT, who is the senior author of the study.

    This approach to discovering the physics behind complex patterns in images could also be used to gain insights into many other materials, not only other types of batteries but also biological systems, such as dividing cells in a developing embryo.

    “What I find most exciting about this work is the ability to take images of a system that’s undergoing the formation of some pattern, and learning the principles that govern that,” Bazant says.

    Hongbo Zhao PhD ’21, a former MIT graduate student who is now a postdoc at Princeton University, is the lead author of the new study, which appears today in Nature. Other authors include Richard Braatz, the Edwin R. Gilliland Professor of Chemical Engineering at MIT; William Chueh, an associate professor of materials science and engineering at Stanford and director of the SLAC-Stanford Battery Center; and Brian Storey, senior director of Energy and Materials at the Toyota Research Institute.

    “Until now, we could make these beautiful X-ray movies of battery nanoparticles at work, but it was challenging to measure and understand subtle details of how they function because the movies were so information-rich,” Chueh says. “By applying image learning to these nanoscale movies, we can extract insights that were not previously possible.”

    Modeling reaction rates

    Lithium iron phosphate battery electrodes are made of many tiny particles of lithium iron phosphate, surrounded by an electrolyte solution. A typical particle is about 1 micron in diameter and about 100 nanometers thick. When the battery discharges, lithium ions flow from the electrolyte solution into the material by an electrochemical reaction known as ion intercalation. When the battery charges, the intercalation reaction is reversed, and ions flow in the opposite direction.

    “Lithium iron phosphate (LFP) is an important battery material due to low cost, a good safety record, and its use of abundant elements,” Storey says. “We are seeing an increased use of LFP in the EV market, so the timing of this study could not be better.”

    Before the current study, Bazant had done a great deal of theoretical modeling of patterns formed by lithium-ion intercalation. Lithium iron phosphate prefers to exist in one of two stable phases: either full of lithium ions or empty. Since 2005, Bazant has been working on mathematical models of this phenomenon, known as phase separation, which generates distinctive patterns of lithium-ion flow driven by intercalation reactions. In 2015, while on sabbatical at Stanford, he began working with Chueh to try to interpret images of lithium iron phosphate particles from scanning transmission X-ray microscopy.
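
    Phase separation of this kind can be illustrated with a minimal one-dimensional Cahn-Hilliard simulation, in which a nearly uniform lithium concentration field spontaneously splits into lithium-rich and lithium-poor domains. This generic sketch stands in for, and greatly simplifies, the electrochemical models used in the study.

    ```python
    import numpy as np

    # Minimal 1D Cahn-Hilliard sketch of phase separation: a field c in [-1, 1]
    # (lithium-poor vs. lithium-rich) separates into two phases on its own.
    # A generic illustration, not the paper's electrochemical model.

    n, dx, dt, kappa = 128, 1.0, 0.01, 1.0
    rng = np.random.default_rng(0)
    c = 0.1 * (rng.random(n) - 0.5)  # near-uniform start with small noise

    def lap(f):  # periodic 1D Laplacian
        return (np.roll(f, 1) - 2 * f + np.roll(f, -1)) / dx**2

    for _ in range(20_000):
        mu = c**3 - c - kappa * lap(c)  # chemical potential
        c += dt * lap(mu)               # conserved (Cahn-Hilliard) dynamics

    print("fraction lithium-rich:", (c > 0).mean())  # two coexisting domains
    ```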

    Using this type of microscopy, the researchers can obtain images that reveal the concentration of lithium ions, pixel-by-pixel, at every point in the particle. They can scan the particles several times as the particles charge or discharge, allowing them to create movies of how lithium ions flow in and out of the particles.

    In 2017, Bazant and his colleagues at SLAC received funding from the Toyota Research Institute to pursue further studies using this approach, along with other battery-related research projects.

    By analyzing X-ray images of 63 lithium iron phosphate particles as they charged and discharged, the researchers found that the movement of lithium ions within the material was nearly identical to the computer simulations that Bazant had created earlier. Using all 180,000 pixels as measurements, the researchers trained the computational model to produce equations that accurately describe the nonequilibrium thermodynamics and reaction kinetics of the battery material.
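
    Schematically, this fitting step is an inverse problem: adjust the model's parameters until its simulated concentration field matches the measured pixels. The toy below replaces the paper's reaction model with a single exponential decay whose rate plays the role of a local reaction-rate parameter to be recovered; none of the names or numbers come from the study.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    # Toy inverse problem: recover a rate constant from noisy measurements
    # by least-squares fitting, standing in for fitting reaction kinetics
    # to per-pixel X-ray data.

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 50)
    k_true = 3.0
    observed = np.exp(-k_true * t) + 0.02 * rng.standard_normal(t.size)

    def residuals(params):
        (k,) = params
        return np.exp(-k * t) - observed

    fit = least_squares(residuals, x0=[1.0])
    print("recovered rate:", fit.x[0])  # close to 3.0
    ```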
    Image: In each pair of frames, the actual particles are on the left and the simulations are on the right. (Courtesy of the researchers)

    “Every little pixel in there is jumping from full to empty, full to empty. And we’re mapping that whole process, using our equations to understand how that’s happening,” Bazant says.

    The researchers also found that the patterns of lithium-ion flow that they observed could reveal spatial variations in the rate at which lithium ions are absorbed at each location on the particle surface.

    “It was a real surprise to us that we could learn the heterogeneities in the system — in this case, the variations in surface reaction rate — simply by looking at the images,” Bazant says. “There are regions that seem to be fast and others that seem to be slow.”

    Furthermore, the researchers showed that these differences in reaction rate were correlated with the thickness of the carbon coating on the surface of the lithium iron phosphate particles. That carbon coating is applied to lithium iron phosphate to help it conduct electricity — otherwise the material would conduct too slowly to be useful as a battery.

    “We discovered at the nano scale that variation of the carbon coating thickness directly controls the rate, which is something you could never figure out if you didn’t have all of this modeling and image analysis,” Bazant says.

    The findings also offer quantitative support for a hypothesis Bazant formulated several years ago: that the performance of lithium iron phosphate electrodes is limited primarily by the rate of coupled ion-electron transfer at the interface between the solid particle and the carbon coating, rather than the rate of lithium-ion diffusion in the solid.

    Optimized materials

    The results from this study suggest that optimizing the thickness of the carbon layer on the electrode surface could help researchers to design batteries that would work more efficiently, the researchers say.

    “This is the first study that’s been able to directly attribute a property of the battery material with a physical property of the coating,” Bazant says. “The focus for optimizing and designing batteries should be on controlling reaction kinetics at the interface of the electrolyte and electrode.”

    “This publication is the culmination of six years of dedication and collaboration,” Storey says. “This technique allows us to unlock the inner workings of the battery in a way not previously possible. Our next goal is to improve battery design by applying this new understanding.”  

    In addition to using this type of analysis on other battery materials, Bazant anticipates that it could be useful for studying pattern formation in other chemical and biological systems.

    This work was supported by the Toyota Research Institute through the Accelerated Materials Design and Discovery program.

  • AI pilot programs look to reduce energy use and emissions on MIT campus

    Smart thermostats have changed the way many people heat and cool their homes by using machine learning to respond to occupancy patterns and preferences, resulting in a lower energy draw. This technology — which can collect and synthesize data — generally focuses on single-dwelling use, but what if this type of artificial intelligence could dynamically manage the heating and cooling of an entire campus? That’s the idea behind a cross-departmental effort working to reduce campus energy use through AI building controls that respond in real-time to internal and external factors. 

    Understanding the challenge

    Heating and cooling can be an energy challenge for campuses like MIT, where existing building management systems (BMS) can’t respond quickly to internal factors like occupancy fluctuations or external factors such as forecast weather or the carbon intensity of the grid. This results in using more energy than needed to heat and cool spaces, often to sub-optimal levels. By engaging AI, researchers have begun to establish a framework to understand and predict optimal temperature set points (the temperature a thermostat is set to maintain) at the individual room level, taking into consideration a host of factors and allowing the existing systems to heat and cool more efficiently, all without manual intervention.

    “It’s not that different from what folks are doing in houses,” explains Les Norford, a professor of architecture at MIT, whose work in energy studies, controls, and ventilation connected him with the effort. “Except we have to think about things like how long a classroom may be used in a day, weather predictions, time needed to heat and cool a room, the effect of the heat from the sun coming in the window, and how the classroom next door might impact all of this.” These factors are at the crux of the research and pilots that Norford and a team are focused on. That team includes Jeremy Gregory, executive director of the MIT Climate and Sustainability Consortium; Audun Botterud, principal research scientist for the Laboratory for Information and Decision Systems; Steve Lanou, project manager in the MIT Office of Sustainability (MITOS); Fran Selvaggio, Department of Facilities Senior Building Management Systems engineer; and Daisy Green and You Lin, both postdocs.

    The group is organized around the call to action to “explore possibilities to employ artificial intelligence to reduce on-campus energy consumption” outlined in Fast Forward: MIT’s Climate Action Plan for the Decade, but efforts extend back to 2019. “As we work to decarbonize our campus, we’re exploring all avenues,” says Vice President for Campus Services and Stewardship Joe Higgins, who originally pitched the idea to students at the 2019 MIT Energy Hack. “To me, it was a great opportunity to utilize MIT expertise and see how we can apply it to our campus and share what we learn with the building industry.” Research into the concept kicked off at the event and continued with undergraduate and graduate student researchers running differential equations and managing pilots to test the bounds of the idea. Soon, Gregory, who is also a MITOS faculty fellow, joined the project and helped identify other individuals to join the team. “My role as a faculty fellow is to find opportunities to connect the research community at MIT with challenges MIT itself is facing — so this was a perfect fit for that,” Gregory says. 

    Early pilots of the project focused on testing thermostat set points in NW23, home to the Department of Facilities and Office of Campus Planning, but Norford quickly realized that classrooms provide many more variables to test, and the pilot was expanded to Building 66, a mixed-use building that is home to classrooms, offices, and lab spaces. “We shifted our attention to study classrooms in part because of their complexity, but also the sheer scale — there are hundreds of them on campus, so [they offer] more opportunities to gather data and determine parameters of what we are testing,” says Norford. 

    Developing the technology

    The work to develop smarter building controls starts with a physics-based model using differential equations to understand how objects heat up, cool down, and store heat, and how heat flows across a building façade. External data like weather, carbon intensity of the power grid, and classroom schedules are also inputs, with the AI responding to these conditions to deliver an optimal thermostat set point each hour — one that provides the best trade-off between the two objectives of thermal comfort of occupants and energy use. That set point then tells the existing BMS how much to heat up or cool down a space. Real-life testing follows, surveying building occupants about their comfort. Botterud, whose research focuses on the interactions between engineering, economics, and policy in electricity markets, works to ensure that the AI algorithms can then translate this learning into energy and carbon emission savings.
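
    A heavily simplified sketch of that loop might pair a lumped thermal model of one room with a brute-force search over candidate set points, weighing heating energy against discomfort. All parameters below are illustrative assumptions, not the team's actual controller.

    ```python
    import numpy as np

    # Lumped (RC) thermal model of one room, plus a brute-force search for
    # the hourly set point that best trades off comfort against heating
    # energy. All parameters are illustrative assumptions.

    R, C = 2.0, 5.0            # thermal resistance (K/kW) and capacitance (kWh/K)
    T_out, T_room = 2.0, 19.0  # forecast outdoor temp, current room temp (deg C)
    comfort_target = 21.0      # desired room temperature (deg C)
    alpha = 0.1                # weight on energy cost relative to discomfort

    def simulate_hour(T, setpoint, dt=1.0):
        """Return (next temperature, heating power in kW) for one time step."""
        heat_kw = max(0.0, (setpoint - T) * C / dt)   # heat needed to hit set point
        T_next = T + dt * (T_out - T) / (R * C) + dt * heat_kw / C
        return T_next, heat_kw

    def cost(setpoint):
        T_next, heat_kw = simulate_hour(T_room, setpoint)
        return alpha * heat_kw + abs(T_next - comfort_target)

    candidates = np.arange(17.0, 24.0, 0.5)
    best = min(candidates, key=cost)
    print("chosen set point:", best)  # balances comfort and energy use
    ```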

    Currently the pilots are focused on six classrooms within Building 66, with the intent to move onto lab spaces before expanding to the entire building. “The goal here is energy savings, but that’s not something we can fully assess until we complete a whole building,” explains Norford. “We have to work classroom by classroom to gather the data, but are looking at a much bigger picture.” The research team used its data-driven simulations to estimate significant energy savings while maintaining thermal comfort in the six classrooms over two days, but further work is needed to implement the controls and measure savings across an entire year. 

    With significant savings estimated across individual classrooms, the energy savings derived from an entire building could be substantial, and AI can help meet that goal, explains Botterud: “This whole concept of scalability is really at the heart of what we are doing. We’re spending a lot of time in Building 66 to figure out how it works and hoping that these algorithms can be scaled up with much less effort to other rooms and buildings so solutions we are developing can make a big impact at MIT,” he says.

    Part of that big impact involves operational staff, like Selvaggio, who are essential in connecting the research to current operations and putting them into practice across campus. “Much of the BMS team’s work is done in the pilot stage for a project like this,” he says. “We were able to get these AI systems up and running with our existing BMS within a matter of weeks, allowing the pilots to get off the ground quickly.” Selvaggio says in preparation for the completion of the pilots, the BMS team has identified an additional 50 buildings on campus where the technology can easily be installed in the future to start energy savings. The BMS team also collaborates with the building automation company, Schneider Electric, that has implemented the new control algorithms in Building 66 classrooms and is ready to expand to new pilot locations. 

    Expanding impact

    The successful completion of these programs will also open the possibility for even greater energy savings — bringing MIT closer to its decarbonization goals. “Beyond just energy savings, we can eventually turn our campus buildings into a virtual energy network, where thousands of thermostats are aggregated and coordinated to function as a unified virtual entity,” explains Higgins. These types of energy networks can accelerate power sector decarbonization by decreasing the need for carbon-intensive power plants at peak times and allowing for more efficient power grid energy use.

    As pilots continue, they fulfill another call to action in Fast Forward — for campus to be a “test bed for change.” Says Gregory: “This project is a great example of using our campus as a test bed — it brings in cutting-edge research to apply to decarbonizing our own campus. It’s a great project for its specific focus, but also for serving as a model for how to utilize the campus as a living lab.”

  • Harnessing hydrogen’s potential to address long-haul trucking emissions

    The transportation of goods forms the basis of today’s globally distributed supply chains, and long-haul trucking is a central and critical link in this complex system. To meet climate goals around the world, it is necessary to develop decarbonized solutions to replace diesel powertrains, but given trucking’s indispensable and vast role, these solutions must be both economically viable and practical to implement. While hydrogen-based options, as an alternative to diesel, have the potential to become a promising decarbonization strategy, hydrogen has significant limitations when it comes to delivery and refueling.

    These roadblocks, combined with hydrogen’s compelling decarbonization potential, are what motivated a team of MIT researchers led by William H. Green, the Hoyt Hottel Professor in Chemical Engineering, to explore a cost-effective way to transport and store hydrogen using liquid organic hydrogen carriers (LOHCs). The team is developing a disruptive technology that allows LOHCs to not only deliver the hydrogen to the trucks, but also store the hydrogen onboard.

    Their findings were recently published in Energy and Fuels, a peer-reviewed journal of the American Chemical Society, in a paper titled “Perspective on Decarbonizing Long-Haul Trucks Using Onboard Dehydrogenation of Liquid Organic Hydrogen Carriers.” The MIT team is led by Green, and includes graduate students Sayandeep Biswas and Kariana Moreno Sader. Their research is supported by the MIT Climate and Sustainability Consortium (MCSC) through its Seed Awards program and MathWorks, and ties into the work within the MCSC’s Tough Transportation Modes focus area.

    An “onboard” approach

    Currently, LOHCs, which work within existing retail fuel distribution infrastructure, are used to deliver hydrogen gas to refueling stations, where it is then compressed and delivered onto trucks equipped with hydrogen fuel cell or combustion engines.

    “This current approach incurs significant energy loss due to endothermic hydrogen release and compression at the retail station,” says Green. “To address this, our work is exploring a more efficient application, with LOHC-powered trucks featuring onboard dehydrogenation.”

    To implement such a design, the team aims to modify the truck’s powertrain (the system inside a vehicle that produces the energy to propel it forward) to allow onboard hydrogen release from the LOHCs, using waste heat from the engine exhaust to power the “dehydrogenation” process.

    Proposed process flow diagram for onboard dehydrogenation. Component sizes are not to scale and have been enlarged for illustrative purposes.

    Image courtesy of the Green Group.


    The dehydrogenation process happens within a high-temperature reactor, which continually receives hydrogen-rich LOHCs from the fuel storage tank. Hydrogen released from the reactor is fed to the engine, after passing through a separator to remove any lingering LOHC. On its way to the engine, some of the hydrogen gets diverted to a burner to heat the reactor, which helps to augment the reactor heating provided by the engine exhaust gases.

    Acknowledging and addressing hydrogen’s drawbacks

    The team’s paper underscores that current uses of hydrogen, including LOHC systems, to decarbonize the trucking sector have drawbacks. Regardless of technical improvements, these existing options remain prohibitively expensive due to the high cost of retail hydrogen delivery.

    “We present an alternative option that addresses a lot of the challenges and seems to be a viable way in which hydrogen can be used in this transportation context,” says Biswas, who was recently elected to the MIT Martin Family Society of Fellows for Sustainability for his work in this area. “Hydrogen, when used through LOHCs, has clear benefits for long-hauling, such as scalability and fast refueling time. There is also an enormous potential to improve delivery and refueling to further reduce cost, and our system is working to do that.”

    “Utilizing hydrogen is an option that is globally accessible, and could be extended to countries like the one where I am from,” says Moreno Sader, who is originally from Colombia. “Since it synergizes with existing infrastructure, large upfront investments are not necessary. The global applicability is huge.”

    Moreno Sader is a MathWorks Fellow, and, along with the rest of the team, has been using MATLAB tools to develop models and simulations for this work.

    Different sectors coming together

    Decarbonizing transportation modes, including long-haul trucking, requires expertise and perspectives from different industries — an approach that resonates with the MCSC’s mission.

    The team’s groundbreaking research into LOHC-powered trucking is among several projects supported by the MCSC within its Tough Transportation Modes focus area, led by postdoc Impact Fellow Danika MacDonell. The MCSC-supported projects were chosen to tackle a complementary set of societally important and industry-relevant challenges to decarbonizing heavy-duty transportation, which span a range of sectors and solution pathways. Other projects focus, for example, on logistics optimization for electrified trucking fleets, or air quality and climate impacts of ammonia-powered shipping.

    The MCSC works to support and amplify the impact of these projects by engaging the research teams with industry partners from a variety of sectors. In addition, the MCSC pursues a collective multisectoral approach to decarbonizing transportation by facilitating shared learning across the different projects through regular cross-team discussion.

    The research led by Green celebrates this cross-sector theme by integrating industry-leading computing tools provided by MathWorks with cutting-edge developments in chemical engineering, as well as industry-leading commercial LOHC reactor demonstrations, to build a compelling vision for cost-effective LOHC-powered trucking.

    The review and research conducted in the Energy and Fuels article lay the groundwork for further investigations into LOHC-powered truck design. The development of such a vehicle — with a power-dense, efficient, and robust onboard hydrogen release system — requires dedicated investigations and further optimization of core components geared specifically toward the trucking application.
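
    For a rough sense of why exhaust-heat recovery is central to such a scheme, consider the textbook methylcyclohexane-to-toluene carrier pair as a stand-in. The sketch below uses standard approximate figures (roughly 68 kJ of reaction heat per mole of hydrogen released, against a hydrogen lower heating value of about 242 kJ/mol), not values from the team's paper.

    ```python
    # Rough energy bookkeeping for onboard LOHC dehydrogenation, using the
    # methylcyclohexane (MCH) -> toluene + 3 H2 pair as a stand-in carrier.
    # Figures are textbook approximations, not values from the paper.

    DH_REACTION_KJ_PER_MOL_H2 = 68.0  # endothermic heat of dehydrogenation
    LHV_H2_KJ_PER_MOL = 242.0         # lower heating value of hydrogen

    fraction = DH_REACTION_KJ_PER_MOL_H2 / LHV_H2_KJ_PER_MOL
    print(f"Release heat is ~{fraction:.0%} of the hydrogen's own heating value,")
    print("which is why recovering engine exhaust heat (plus a small hydrogen")
    print("burner) matters so much to the overall efficiency of the scheme.")
    ```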

  • Technologies for water conservation and treatment move closer to commercialization

    The Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) provides Solutions Grants to help MIT researchers launch startup companies or products to commercialize breakthrough technologies in water and food systems. The Solutions Grant Program began in 2015 and is supported by Community Jameel. In addition to one-year, renewable grants of up to $150,000, the program also matches grantees with industry mentors and facilitates introductions to potential investors. Since its inception, the J-WAFS Solutions Program has awarded over $3 million in funding to the MIT community. Numerous startups and products, including a portable desalination device and a company commercializing a novel food safety sensor, have spun out of this support.

    The 2023 J-WAFS Solutions Grantees are Professor C. Cem Tasan of the Department of Materials Science and Engineering and Professor Andrew Whittle of the Department of Civil and Environmental Engineering. Tasan’s project involves reducing water use in steel manufacturing and Whittle’s project tackles harmful algal blooms in water. Project work commences this September.

    “This year’s Solutions Grants are being awarded to professors Tasan and Whittle to help commercialize technologies they have been developing at MIT,” says J-WAFS executive director Renee J. Robins. “With J-WAFS’ support, we hope to see the teams move their technologies from the lab to the market, so they can have a beneficial impact on water use and water quality challenges,” Robins adds.

    Reducing water consumption by solid-state steelmaking

    Water is a major requirement for steel production. The steel industry ranks fourth in industrial freshwater consumption worldwide, since large amounts of water are needed mainly for cooling purposes in the process. Unfortunately, a strong correlation has also been shown to exist between freshwater use in steelmaking and water contamination. As the global demand for steel increases and freshwater availability decreases due to climate change, improved methods for more sustainable steel production are needed.

    A strategy to reduce the water footprint of steelmaking is to explore steel recycling processes that avoid liquid metal processing. With this motivation, Cem Tasan, the Thomas B. King Associate Professor of Metallurgy in the Department of Materials Science and Engineering, and postdoc Onur Guvenc PhD created a new process called Scrap Metal Consolidation (SMC). SMC is based on a well-established metal forming process known as roll bonding. Conventionally, roll bonding requires intensive prior surface treatment of the raw material, specific atmospheric conditions, and high deformation levels. Tasan and Guvenc’s research revealed that SMC can overcome these restrictions by enabling the solid-state bonding of scrap into a sheet metal form, even when the surface quality, atmospheric conditions, and deformation levels are suboptimal. Through lab-scale proof-of-principle investigations, they have already identified SMC process conditions and validated the mechanical formability of resulting steel sheets, focusing on mild steel, the most common sheet metal scrap.

    The J-WAFS Solutions Grant will help the team to build customer product prototypes, design the processing unit, and develop a scale-up strategy and business model. By simultaneously decreasing water usage, energy demand, contamination risk, and carbon dioxide burden, SMC has the potential to decrease the energy need for steel recycling by up to 86 percent, as well as reduce the linked carbon dioxide emissions and safeguard the freshwater resources that would otherwise be directed to industrial consumption. 

    Detecting harmful algal blooms in water before it’s too late

    Harmful algal blooms (HABs) are a growing problem in both freshwater and saltwater environments worldwide, causing an estimated $13 billion in annual damage to drinking water, water for recreational use, commercial fishing areas, and desalination activities. HABs pose a threat to both human health and aquaculture, thereby threatening the food supply. Toxins in HABs are produced by some cyanobacteria, or blue-green algae, whose communities change in composition in response to eutrophication from agricultural runoff, sewer overflows, or other events. Mitigation of risks from HABs is most effective when there is advance warning of these changes in algal communities.

    Most in situ measurements of algae are based on fluorescence spectroscopy that is conducted with LED-induced fluorescence (LEDIF) devices, or probes that induce fluorescence of specific algal pigments using LED light sources. While LEDIFs provide reasonable estimates of concentrations of individual pigments, they lack resolution to discriminate algal classes within complex mixtures found in natural water bodies. In prior research, Andrew Whittle, the Edmund K. Turner Professor of Civil and Environmental Engineering, worked with colleagues to design REMORA, a low-cost, field-deployable prototype spectrofluorometer for measuring induced fluorescence. This research was part of a collaboration between MIT and the AMS Institute. Whittle and the team successfully trained a machine learning model to discriminate and quantify cell concentrations for mixtures of different algal groups in water samples through an extensive laboratory calibration program using various algae cultures. The group demonstrated these capabilities in a series of field measurements at locations in Boston and Amsterdam. 
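
    Schematically, that calibration amounts to training a classifier that maps a fluorescence spectrum to an algal group. The toy below fabricates synthetic single-peak spectra for three hypothetical groups and fits an off-the-shelf classifier; the real REMORA calibration uses laboratory algae cultures and its own model.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Toy spectral classifier: synthetic single-peak "spectra" for three
    # hypothetical algal groups, then an off-the-shelf model to tell them apart.

    rng = np.random.default_rng(1)
    channels = 32  # spectral channels per measurement

    def fake_spectrum(peak):
        x = np.arange(channels)
        return np.exp(-0.5 * ((x - peak) / 3.0) ** 2) + 0.05 * rng.standard_normal(channels)

    peaks = {"cyanobacteria": 8, "green_algae": 16, "diatoms": 24}
    X, y = [], []
    for name, peak in peaks.items():
        for _ in range(100):
            X.append(fake_spectrum(peak))
            y.append(name)

    X_tr, X_te, y_tr, y_te = train_test_split(np.array(X), np.array(y), random_state=0)
    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))  # near 1.0 on this easy toy
    ```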

    Whittle will work with Fábio Duarte of the Department of Urban Studies and Planning, the Senseable City Lab, and MIT’s Center for Real Estate to refine the design of REMORA. They will develop software for autonomous operation of the sensor that can be deployed remotely on mobile vessels or platforms to enable high-resolution spatiotemporal monitoring for harmful algae. Sensor commercialization will hopefully be able to exploit the unique capabilities of REMORA for long-term monitoring applications by water utilities, environmental regulatory agencies, and water-intensive industries.

  • Study suggests energy-efficient route to capturing and converting CO2

    In the race to draw down greenhouse gas emissions around the world, scientists at MIT are looking to carbon-capture technologies to decarbonize the most stubborn industrial emitters.

    Steel, cement, and chemical manufacturing are especially difficult industries to decarbonize, as carbon and fossil fuels are inherent ingredients in their production. Technologies that can capture carbon emissions and convert them into forms that feed back into the production process could help to reduce the overall emissions from these “hard-to-abate” sectors.

    But thus far, experimental technologies that capture and convert carbon dioxide do so as two separate processes that each require a huge amount of energy to run. The MIT team is looking to combine the two processes into one integrated and far more energy-efficient system that could potentially run on renewable energy to both capture and convert carbon dioxide from concentrated, industrial sources.

    In a study appearing today in ACS Catalysis, the researchers reveal the inner workings of how carbon dioxide can be both captured and converted through a single electrochemical process. The process involves using an electrode to attract carbon dioxide released from a sorbent, and to convert it into a reduced, reusable form.

    Others have reported similar demonstrations, but the mechanisms driving the electrochemical reaction have remained unclear. The MIT team carried out extensive experiments to determine that driver, and found that, in the end, it came down to the partial pressure of carbon dioxide. In other words, the more pure carbon dioxide that makes contact with the electrode, the more efficiently the electrode can capture and convert the molecule.

    Knowledge of this main driver, or “active species,” can help scientists tune and optimize similar electrochemical systems to efficiently capture and convert carbon dioxide in an integrated process.

    The study’s results imply that, while these electrochemical systems would probably not work for very dilute environments (for instance, to capture and convert carbon emissions directly from the air), they would be well-suited to the highly concentrated emissions generated by industrial processes, particularly those that have no obvious renewable alternative.

    “We can and should switch to renewables for electricity production. But deeply decarbonizing industries like cement or steel production is challenging and will take a longer time,” says study author Betar Gallant, the Class of 1922 Career Development Associate Professor at MIT. “Even if we get rid of all our power plants, we need some solutions to deal with the emissions from other industries in the shorter term, before we can fully decarbonize them. That’s where we see a sweet spot, where something like this system could fit.”

    The study’s MIT co-authors are lead author and postdoc Graham Leverick and graduate student Elizabeth Bernhardt, along with Aisyah Illyani Ismail, Jun Hui Law, Arif Arifutzzaman, and Mohamed Kheireddine Aroua of Sunway University in Malaysia.

    Breaking bonds

    Carbon-capture technologies are designed to capture emissions, or “flue gas,” from the smokestacks of power plants and manufacturing facilities. This is done primarily using large retrofits to funnel emissions into chambers filled with a “capture” solution — a mix of amines, or ammonia-based compounds, that chemically bind with carbon dioxide, producing a stable form that can be separated out from the rest of the flue gas.

    High temperatures are then applied, typically in the form of fossil-fuel-generated steam, to release the captured carbon dioxide from its amine bond. In its pure form, the gas can then be pumped into storage tanks or underground, mineralized, or further converted into chemicals or fuels.

    “Carbon capture is a mature technology, in that the chemistry has been known for about 100 years, but it requires really large installations, and is quite expensive and energy-intensive to run,” Gallant notes. “What we want are technologies that are more modular and flexible and can be adapted to more diverse sources of carbon dioxide. Electrochemical systems can help to address that.”

    Her group at MIT is developing an electrochemical system that both recovers the captured carbon dioxide and converts it into a reduced, usable product. Such an integrated system, rather than a decoupled one, she says, could be entirely powered with renewable electricity rather than fossil-fuel-derived steam.

    Their concept centers on an electrode that would fit into existing chambers of carbon-capture solutions. When a voltage is applied to the electrode, electrons flow onto the reactive form of carbon dioxide and convert it to a product using protons supplied from water. This makes the sorbent available to bind more carbon dioxide, rather than using steam to do the same.

    Gallant previously demonstrated this electrochemical process could work to capture and convert carbon dioxide into a solid carbonate form.

    “We showed that this electrochemical process was feasible in very early concepts,” she says. “Since then, there have been other studies focused on using this process to attempt to produce useful chemicals and fuels. But there’s been inconsistent explanations of how these reactions work, under the hood.”

    Solo CO2

    In the new study, the MIT team took a magnifying glass under the hood to tease out the specific reactions driving the electrochemical process. In the lab, they generated amine solutions that resemble the industrial capture solutions used to extract carbon dioxide from flue gas. They methodically altered various properties of each solution, such as the pH, concentration, and type of amine, then ran each solution past an electrode made from silver — a metal that is widely used in electrolysis studies and known to efficiently convert carbon dioxide to carbon monoxide. They then measured the concentration of carbon monoxide that was converted at the end of the reaction, and compared this number against that of every other solution they tested, to see which parameter had the most influence on how much carbon monoxide was produced.

    In the end, they found that what mattered most was not the type of amine used to initially capture carbon dioxide, as many have suspected. Instead, it was the concentration of solo, free-floating carbon dioxide molecules, which avoided bonding with amines but were nevertheless present in the solution. This “solo-CO2” determined the concentration of carbon monoxide that was ultimately produced.
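
    To make the distinction concrete, one can compute how little unbound CO2 remains once a strongly binding amine has captured most of it. The sketch below assumes a generic one-to-one binding equilibrium with made-up constants; it is not the study's chemistry or data.

    ```python
    from scipy.optimize import brentq

    # Sketch of the "solo CO2" idea: for a CO2 + amine <-> adduct equilibrium
    # with binding constant K, only the unbound CO2 is available to react at
    # the electrode. Generic 1:1 binding with made-up constants.

    K = 1e4             # assumed binding constant (1/M)
    amine_total = 1.0   # total amine concentration (M)
    co2_total = 0.5     # total dissolved CO2 (M)

    def equilibrium_residual(c_free):
        bound = co2_total - c_free              # CO2 tied up in the amine adduct
        amine_free = amine_total - bound        # amine not yet bound
        return K * c_free * amine_free - bound  # zero at equilibrium

    c_free = brentq(equilibrium_residual, 1e-12, co2_total)
    print(f"free 'solo' CO2: {c_free:.2e} M of {co2_total} M total dissolved")
    ```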

    “We found that it’s easier to react this ‘solo’ CO2, as compared to CO2 that has been captured by the amine,” Leverick offers. “This tells future researchers that this process could be feasible for industrial streams, where high concentrations of carbon dioxide could efficiently be captured and converted into useful chemicals and fuels.”

    “This is not a removal technology, and it’s important to state that,” Gallant stresses. “The value that it does bring is that it allows us to recycle carbon dioxide some number of times while sustaining existing industrial processes, for fewer associated emissions. Ultimately, my dream is that electrochemical systems can be used to facilitate mineralization, and permanent storage of CO2 — a true removal technology. That’s a longer-term vision. And a lot of the science we’re starting to understand is a first step toward designing those processes.”

    This research is supported by Sunway University in Malaysia.

  • Device offers long-distance, low-power underwater communication

    MIT researchers have demonstrated the first system for ultra-low-power underwater networking and communication, which can transmit signals across kilometer-scale distances.

    This technique, which the researchers began developing several years ago, uses about one-millionth the power that existing underwater communication methods use. By expanding their battery-free system’s communication range, the researchers have made the technology more feasible for applications such as aquaculture, coastal hurricane prediction, and climate change modeling.

    “What started as a very exciting intellectual idea a few years ago — underwater communication with a million times lower power — is now practical and realistic. There are still a few interesting technical challenges to address, but there is a clear path from where we are now to deployment,” says Fadel Adib, associate professor in the Department of Electrical Engineering and Computer Science and director of the Signal Kinetics group in the MIT Media Lab.

    Underwater backscatter enables low-power communication by encoding data in sound waves that it reflects, or scatters, back toward a receiver. The system’s innovations enable these reflected signals to be more precisely directed at their source.

    Due to this “retrodirectivity,” less signal scatters in the wrong directions, allowing for more efficient and longer-range communication.

    When tested in a river and an ocean, the retrodirective device exhibited a communication range that was more than 15 times farther than previous devices. However, the experiments were limited by the length of the docks available to the researchers.

    To better understand the limits of underwater backscatter, the team also developed an analytical model to predict the technology’s maximum range. The model, which they validated using experimental data, showed that their retrodirective system could communicate across kilometer-scale distances.

    The researchers shared these findings in two papers which will be presented at this year’s ACM SIGCOMM and MobiCom conferences. Adib, senior author on both papers, is joined on the SIGCOMM paper by co-lead authors Aline Eid, a former postdoc who is now an assistant professor at the University of Michigan, and Jack Rademacher, a research assistant; as well as research assistants Waleed Akbar and Purui Wang, and postdoc Ahmed Allam. The MobiCom paper is also written by co-lead authors Akbar and Allam.

    Communicating with sound waves

    Underwater backscatter communication devices utilize an array of nodes made from “piezoelectric” materials to receive and reflect sound waves. These materials produce an electric signal when mechanical force is applied to them.

    When sound waves strike the nodes, they vibrate and convert the mechanical energy to an electric charge. The nodes use that charge to scatter some of the acoustic energy back to the source, transmitting data that a receiver decodes based on the sequence of reflections.

    But because the backscattered signal travels in all directions, only a small fraction reaches the source, reducing the signal strength and limiting the communication range.

    To overcome this challenge, the researchers leveraged a 70-year-old radio device called a Van Atta array, in which symmetric pairs of antennas are connected in such a way that the array reflects energy back in the direction it came from.

    But connecting piezoelectric nodes to make a Van Atta array reduces their efficiency. The researchers avoided this problem by placing a transformer between pairs of connected nodes. The transformer, which transfers electric energy from one circuit to another, allows the nodes to reflect the maximum amount of energy back to the source.

    “Both nodes are receiving and both nodes are reflecting, so it is a very interesting system. As you increase the number of elements in that system, you build an array that allows you to achieve much longer communication ranges,” Eid explains.

    In addition, they used a technique called cross-polarity switching to encode binary data in the reflected signal. Each node has a positive and a negative terminal (like a car battery), so when the positive terminals of two nodes are connected and the negative terminals of two nodes are connected, that reflected signal is a “bit one.”

    But if the researchers switch the polarity, and the negative and positive terminals are connected to each other instead, then the reflection is a “bit zero.”

    “Just connecting the piezoelectric nodes together is not enough. By alternating the polarities between the two nodes, we are able to transmit data back to the remote receiver,” Rademacher explains.
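
    A toy simulation makes the encoding concrete: each bit simply flips the sign (phase) of the reflected carrier, and the receiver recovers bits by correlating each symbol against the carrier. This is a deliberately idealized sketch, with no noise, channel effects, or timing recovery.

    ```python
    import numpy as np

    # Toy illustration of cross-polarity switching: the node flips the sign
    # (phase) of the reflected wave to send a 1 or a 0, and the receiver
    # recovers each bit by correlating against the carrier.

    fs, f0, bit_len = 48_000, 1_000, 480  # sample rate, carrier (Hz), samples/bit
    bits = [1, 0, 1, 1, 0]

    t = np.arange(bit_len) / fs
    carrier = np.sin(2 * np.pi * f0 * t)
    tx = np.concatenate([(1 if b else -1) * carrier for b in bits])  # polarity per bit

    rx = [int(np.dot(tx[i * bit_len:(i + 1) * bit_len], carrier) > 0)
          for i in range(len(bits))]
    print(rx)  # [1, 0, 1, 1, 0]
    ```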

    When building the Van Atta array, the researchers found that if the connected nodes were too close, they would block each other’s signals. They devised a new design with staggered nodes that enables signals to reach the array from any direction. With this scalable design, the more nodes an array has, the greater its communication range.

    They tested the array in more than 1,500 experimental trials in the Charles River in Cambridge, Massachusetts, and in the Atlantic Ocean, off the coast of Falmouth, Massachusetts, in collaboration with the Woods Hole Oceanographic Institution. The device achieved communication ranges of 300 meters, more than 15 times longer than they previously demonstrated.

    However, they had to cut the experiments short because they ran out of space on the dock.

    Modeling the maximum

    That inspired the researchers to build an analytical model to determine the theoretical and practical communication limits of this new underwater backscatter technology.

    Building off their group’s work on RFIDs, the team carefully crafted a model that captured the impact of system parameters, like the size of the piezoelectric nodes and the input power of the signal, on the underwater operation range of the device.

    “It is not a traditional communication technology, so you need to understand how you can quantify the reflection. What are the roles of the different components in that process?” Akbar says.

    For instance, the researchers needed to derive a function that captures the amount of signal reflected out of an underwater piezoelectric node with a specific size, which was among the biggest challenges of developing the model, he adds.

    They used these insights to create a plug-and-play model into which a user can enter information like input power and piezoelectric node dimensions, and receive an output that shows the expected range of the system.

    They evaluated the model on data from their experimental trials and found that it could accurately predict the range of retrodirected acoustic signals with an average error of less than one decibel.

    Using this model, they showed that an underwater backscatter array can potentially achieve kilometer-long communication ranges.
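
    The broad shape of such a prediction can be sketched with a sonar-style link budget: the backscattered signal weakens with round-trip spreading and absorption until it falls below the level needed to decode. Every constant below is a placeholder assumption; the team's validated model is far more detailed.

    ```python
    import numpy as np

    # Link-budget-style range sketch for underwater backscatter:
    # spherical spreading plus absorption, counted over the round trip.
    # All constants are placeholder assumptions.

    SL = 170.0      # source level (dB re 1 uPa at 1 m), assumed
    GAIN = 10.0     # assumed retrodirective/array gain (dB)
    ALPHA = 0.1     # absorption (dB per km) at the operating frequency, assumed
    NOISE = 60.0    # ambient noise level (dB), assumed
    SNR_MIN = 10.0  # SNR required to decode (dB), assumed

    def snr_db(range_m):
        spreading = 20 * np.log10(max(range_m, 1))  # one-way spherical spreading
        absorption = ALPHA * range_m / 1000.0       # one-way absorption
        return SL + GAIN - 2 * (spreading + absorption) - NOISE

    reachable = [r for r in range(10, 5000, 10) if snr_db(r) >= SNR_MIN]
    print(f"max predicted range: ~{max(reachable)} m under these assumptions")
    ```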

    “We are creating a new ocean technology and propelling it into the realm of the things we have been doing for 6G cellular networks. For us, it is very rewarding because we are starting to see this now very close to reality,” Adib says.

    The researchers plan to continue studying underwater backscatter Van Atta arrays, perhaps using boats so they could evaluate longer communication ranges. Along the way, they intend to release tools and datasets so other researchers can build on their work. At the same time, they are beginning to move toward commercialization of this technology.

    “Limited range has been an open problem in underwater backscatter networks, preventing them from being used in real-world applications. This paper takes a significant step forward in the future of underwater communication, by enabling them to operate on minimum energy while achieving long range,” says Omid Abari, assistant professor of computer science at the University of California at Los Angeles, who was not involved with this work. “The paper is the first to bring Van Atta Reflector array technique into underwater backscatter settings and demonstrate its benefits in improving the communication range by orders of magnitude. This can take battery-free underwater communication one step closer to reality, enabling applications such as underwater climate change monitoring and coastal monitoring.”

    This research was funded, in part, by the Office of Naval Research, the Sloan Research Fellowship, the National Science Foundation, the MIT Media Lab, and the Doherty Chair in Ocean Utilization.