More stories

  • Improving health outcomes by targeting climate and air pollution simultaneously

    Climate policies are typically designed to reduce the greenhouse gas emissions that result from human activities and drive climate change. The largest source of these emissions is the combustion of fossil fuels, which also raises atmospheric concentrations of ozone, fine particulate matter (PM2.5), and other air pollutants that pose public health risks. While climate policies may lower concentrations of health-damaging air pollutants as a “co-benefit” of curbing emissions-intensive activities, they are most effective at improving health outcomes when deployed in tandem with geographically targeted air-quality regulations.

    Yet the computer models typically used to assess the likely air quality/health impacts of proposed climate/air-quality policy combinations come with drawbacks for decision-makers. Atmospheric chemistry/climate models can produce high-resolution results, but they are expensive and time-consuming to run. Integrated assessment models can produce results in far less time and at far lower cost, but only at global and regional scales, rendering them insufficiently precise for accurate assessments of air quality/health impacts at the subnational level.

    To overcome these drawbacks, a team of researchers at MIT and the University of California at Davis has developed a climate/air-quality policy assessment tool that is both computationally efficient and location-specific. Described in a new study in the journal ACS Environmental Au, the tool could enable users to obtain rapid estimates of combined policy impacts on air quality/health at more than 1,500 locations around the globe — estimates precise enough to reveal the equity implications of proposed policy combinations within a particular region.

    “The modeling approach described in this study may ultimately allow decision-makers to assess the efficacy of multiple combinations of climate and air-quality policies in reducing the health impacts of air pollution, and to design more effective policies,” says Sebastian Eastham, the study’s lead author and a principal research scientist at the MIT Joint Program on the Science and Policy of Global Change. “It may also be used to determine if a given policy combination would result in equitable health outcomes across a geographical area of interest.”

    To demonstrate the efficiency and accuracy of their policy assessment tool, the researchers showed that outcomes projected by the tool within seconds were consistent with region-specific results from detailed chemistry/climate models that took days or even months to run. While continuing to refine and develop their approaches, they are now working to embed the new tool into integrated assessment models for direct use by policymakers.
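
    As a purely illustrative sketch of the general idea behind such tools (fitting a fast surrogate to a handful of runs of an expensive model, then evaluating it per location in seconds), consider the toy example below. The scaling factors, PM2.5 values, and linear response form are all hypothetical; the study's actual formulation is described in the paper.

    ```python
    # Toy surrogate model: fit a cheap per-location response to a few runs of a
    # slow, detailed model, then evaluate new policy scenarios instantly.
    # All numbers below are hypothetical illustrations, not data from the study.
    import numpy as np

    # emissions scaling factors tried in the (hypothetical) detailed-model runs
    scalings = np.array([0.6, 0.8, 1.0, 1.2])
    # PM2.5 concentrations (ug/m^3) the detailed model produced at two locations
    pm25 = np.array([[8.1, 10.5, 13.2, 15.8],     # location 0
                     [20.3, 26.9, 33.5, 40.2]])   # location 1

    coeffs = [np.polyfit(scalings, row, deg=1) for row in pm25]  # (slope, intercept)

    def fast_estimate(location: int, scaling: float) -> float:
        """Evaluate the fitted response in microseconds instead of re-running."""
        slope, intercept = coeffs[location]
        return slope * scaling + intercept

    print(f"{fast_estimate(0, 0.7):.1f} ug/m^3 at location 0 under a 30% emissions cut")
    ```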

    “As decision-makers implement climate policies in the context of other sustainability challenges like air pollution, efficient modeling tools are important for assessment — and new computational techniques allow us to build faster and more accurate tools to provide credible, relevant information to a broader range of users,” says Noelle Selin, a professor at MIT’s Institute for Data, Systems and Society and Department of Earth, Atmospheric and Planetary Sciences, and supervising author of the study. “We are looking forward to further developing such approaches, and to working with stakeholders to ensure that they provide timely, targeted and useful assessments.”

    The study was funded, in part, by the U.S. Environmental Protection Agency and the Biogen Foundation.

  • Study: Carbon-neutral pavements are possible by 2050, but rapid policy and industry action are needed

    The United States has almost 2.8 million lane-miles, or about 4.6 million lane-kilometers, of paved roads.

    Roads and streets form the backbone of our built environment. They take us to work or school, take goods to their destinations, and much more.

    However, a new study by MIT Concrete Sustainability Hub (CSHub) researchers shows that the annual greenhouse gas (GHG) emissions of all construction materials used in the U.S. pavement network are 11.9 to 13.3 megatons. This is equivalent to the emissions of a gasoline-powered passenger vehicle driving about 30 billion miles in a year.
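
    That equivalence is easy to sanity-check. Assuming an average gasoline passenger vehicle emits roughly 400 grams of CO2 per mile (a commonly cited U.S. average, assumed here rather than taken from the study), 30 billion miles lands squarely in the stated range:

    ```python
    # Back-of-the-envelope check of the vehicle-miles equivalence above.
    # Assumes ~400 g CO2 per mile for an average gasoline passenger vehicle;
    # the study's own emission factors may differ.
    GRAMS_PER_MILE = 400
    MILES_PER_YEAR = 30e9

    megatons = MILES_PER_YEAR * GRAMS_PER_MILE / 1e12   # grams -> megatons
    print(f"~{megatons:.0f} megatons CO2")              # ~12 Mt, within 11.9-13.3
    ```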

    As roads are built, repaved, and expanded, new approaches and thoughtful material choices are necessary to shrink their carbon footprint.

    The CSHub researchers found that, by 2050, mixtures for pavements can be made carbon-neutral if industry and governmental actors help to apply a range of solutions — like carbon capture — to reduce, avoid, and neutralize embodied impacts. (A neutralization solution is any compensation mechanism in the value chain of a product that permanently removes the global warming impact of the processes after avoiding and reducing the emissions.) Furthermore, nearly half of pavement-related greenhouse gas (GHG) savings can be achieved in the short term with a negative or nearly net-zero cost.

    The research team, led by Hessam AzariJafari, MIT CSHub’s deputy director, closed gaps in our understanding of the impacts of pavement decisions by developing a dynamic model quantifying the embodied impact of future pavement materials demand for the U.S. road network.

    The team first split the U.S. road network into 10-mile (about 16-kilometer) segments, forecasting the condition and performance of each. They then developed a pavement management system model to create benchmarks for understanding the current level of emissions and the efficacy of different decarbonization strategies.

    This model considered factors such as annual traffic volume and surface conditions, budget constraints, regional variation in pavement treatment choices, and pavement deterioration. The researchers also used a life-cycle assessment to calculate annual state-level emissions from acquiring pavement construction materials, considering future energy supply and materials procurement.
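
    In spirit, this kind of accounting attaches an embodied-emissions factor to each treatment applied to each segment in a given year. A minimal sketch (with made-up treatments and emission factors, not the study's data) might look like this:

    ```python
    # Minimal sketch of segment-level embodied-emissions accounting.
    # Treatments and emission factors are illustrative placeholders only.
    from dataclasses import dataclass

    @dataclass
    class Segment:
        length_mi: float   # the study divides the network into ~10-mile segments
        treatment: str     # treatment chosen this year by the management model

    # hypothetical factors: tons CO2-eq of embodied emissions per mile treated
    FACTORS = {"asphalt_overlay": 40.0, "concrete_reconstruction": 120.0, "none": 0.0}

    def annual_embodied_emissions(segments):
        """Sum embodied emissions of materials demanded by this year's treatments."""
        return sum(FACTORS[s.treatment] * s.length_mi for s in segments)

    network = [Segment(10, "asphalt_overlay"),
               Segment(10, "none"),
               Segment(10, "concrete_reconstruction")]
    print(annual_embodied_emissions(network), "tons CO2-eq this year")
    ```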

    The team considered three scenarios for the U.S. pavement network: A business-as-usual scenario in which technology remains static, a projected improvement scenario aligned with stated industry and national goals, and an ambitious improvement scenario that intensifies or accelerates projected strategies to achieve carbon neutrality. 

    If no steps are taken to decarbonize pavement mixtures, the team projected that GHG emissions of construction materials used in the U.S. pavement network would increase by 19.5 percent by 2050. Under the projected scenario, there was an estimated 38 percent embodied impact reduction for concrete and 14 percent embodied impact reduction for asphalt by 2050.

    The keys to making the pavement network carbon neutral by 2050 lie in multiple places. Fully renewable energy sources should be used for pavement materials production, transportation, and other processes. The federal government must contribute to the development of these low-carbon energy sources and carbon capture technologies, as it would be nearly impossible to achieve carbon neutrality for pavements without them. 

    Additionally, increasing pavements’ recycled content and improving their design and production efficiency can lower GHG emissions to an extent. Still, neutralization is needed to achieve carbon neutrality.

    Making the right pavement construction and repair choices would also contribute to the carbon neutrality of the network. For instance, concrete pavements can offer GHG savings across the whole life cycle as they are stiffer and stay smoother for longer, meaning they require less maintenance and have a smaller impact on the fuel efficiency of vehicles.

    Concrete pavements have other use-phase benefits, including a cooling effect through an intrinsically high albedo, meaning they reflect more sunlight than regular pavements. Therefore, they can help combat extreme heat and benefit the earth’s energy balance through negative radiative forcing, making albedo a potential neutralization mechanism.

    At the same time, a mix of fixes, including using concrete and asphalt in different contexts and proportions, could produce significant GHG savings for the pavement network; decision-makers must consider scenarios on a case-by-case basis to identify optimal solutions. 

    In addition, it may appear as though the GHG emissions of materials used in local roads are dwarfed by the emissions of interstate highway materials. However, the study found that the two road types have a similar impact. In fact, all road types contribute heavily to the total GHG emissions of pavement materials in general. Therefore, stakeholders at the federal, state, and local levels must be involved if our roads are to become carbon neutral. 

    The path to pavement network carbon-neutrality is, therefore, somewhat of a winding road. It demands regionally specific policies and widespread investment to help implement decarbonization solutions, just as renewable energy initiatives have been supported. Providing subsidies and covering cost premiums are also vital to avoid market shifts that would derail environmental savings.

    When planning for these shifts, we must recall that pavements have impacts not just in their production, but across their entire life cycle. As pavements are used, maintained, and eventually decommissioned, they have significant impacts on the surrounding environment.

    If we are to meet climate goals such as the Paris Agreement, which demands that we reach carbon-neutrality by 2050 to avoid the worst impacts of climate change, we — as well as industry and governmental stakeholders — must come together to take a hard look at the roads we use every day and work to reduce their life cycle emissions. 

    The study was published in the International Journal of Life Cycle Assessment. In addition to AzariJafari, the authors include Fengdi Guo of the MIT Department of Civil and Environmental Engineering; Jeremy Gregory, executive director of the MIT Climate and Sustainability Consortium; and Randolph Kirchain, director of the MIT CSHub.

  • A more sustainable way to generate phosphorus

    Phosphorus is an essential ingredient in thousands of products, including herbicides, lithium-ion batteries, and even soft drinks. Most of this phosphorus comes from an energy-intensive process that contributes significantly to global carbon emissions.

    In an effort to reduce that carbon footprint, MIT chemists have devised an alternative way to generate white phosphorus, a critical intermediate in the manufacture of those phosphorus-containing products. Their approach, which uses electricity to speed up a key chemical reaction, could reduce the carbon emissions of the process by half or even more, the researchers say.

    “White phosphorus is currently an indispensable intermediate, and our process dramatically reduces the carbon footprint of converting phosphate to white phosphorus,” says Yogesh Surendranath, an associate professor of chemistry at MIT and the senior author of the study.

    The new process reduces the carbon footprint of white phosphorus production in two ways: It reduces the temperatures required for the reaction, and it generates significantly less carbon dioxide as a waste product.

    Recent MIT graduate Jonathan “Jo” Melville PhD ’21 and MIT graduate student Andrew Licini are the lead authors of the paper, which appears today in ACS Central Science.

    Purifying phosphorus

    When phosphorus is mined out of the ground, it is in the form of phosphate, a mineral whose basic unit comprises one atom of phosphorus bound to four oxygen atoms. About 95 percent of this phosphate ore is used to make fertilizer. The remaining phosphate ore is processed separately into white phosphorus, a molecule composed of four phosphorus atoms bound to each other. White phosphorus is then fed into a variety of chemical processes that are used to manufacture many different products, such as lithium battery electrolytes and semiconductor dopants.

    Converting those mined phosphates into white phosphorus accounts for a substantial fraction of the carbon footprint of the entire phosphorus industry, Surendranath says. The most energy-intensive part of the process is breaking the bonds between phosphorus and oxygen, which are very stable.

    Using the traditional “thermal process,” those bonds are broken by heating carbon coke and phosphate rock to a temperature of 1,500 degrees Celsius. In this process, the carbon serves to strip away the oxygen atoms from phosphorus, leading to the eventual generation of CO2 as a byproduct. In addition, sustaining those temperatures requires a great deal of energy, adding to the carbon footprint of the process.

    “That process hasn’t changed substantially since its inception over a century ago. Our goal was to figure out how we could develop a process that would substantially lower the carbon footprint of this process,” Surendranath says. “The idea was to combine it with renewable electricity and drive that conversion of phosphate to white phosphorus with electrons rather than using carbon.”

    To do that, the researchers had to come up with an alternative way to weaken the strong phosphorus-oxygen bonds found in phosphates. They achieved this by controlling the environment in which the reaction occurs. The researchers found that the reaction could be promoted using a dehydrated form of phosphoric acid, which contains long chains of phosphate salts held together by bonds called phosphoryl anhydrides. These bonds help to weaken the phosphorus-oxygen bonds.

    When the researchers run an electric current through these salts, electrons break the weakened bonds, allowing the phosphorus atoms to break free and bind to each other to form white phosphorus. At the temperatures needed for this system (about 800 degrees Celsius), phosphorus exists as a gas, so it can bubble out of the solution and be collected in an external chamber.

    Decarbonization

    The electrode that the researchers used for this demonstration relies on carbon as a source of electrons, so the process generates some carbon dioxide as a byproduct. However, they are now working on swapping that electrode out for one that would use phosphate itself as the electron source, which would further reduce the carbon footprint by cleanly separating phosphate into phosphorus and oxygen.

    With the process reported in this paper, the researchers have reduced the overall carbon footprint for generating white phosphorus by about 50 percent. With future modifications, they hope to bring the carbon emissions down to nearly zero, in part by using renewable energy such as solar or wind power to drive the electric current required.

    If the researchers succeed in scaling up their process and making it widely available, it could allow industrial users to generate white phosphorus on site instead of having it shipped from the few places in the world where it is currently manufactured. That would cut down on the risks of transporting white phosphorus, which is an explosive material.

    “We’re excited about the prospect of doing on-site generation of this intermediate, so you don’t have to do the transportation and distribution,” Surendranath says. “If you could decentralize this production, the end user could make it on site and use it in an integrated fashion.”

    In order to do this study, the researchers had to develop new tools for controlling the electrolytes (such as salts and acids) present in the environment, and for measuring how those electrolytes affect the reaction. Now, they plan to use the same approach to try to develop lower-carbon processes for isolating other industrially important elements, such as silicon and iron.

    “This work falls within our broader interests in decarbonizing these legacy industrial processes that have a huge carbon footprint,” Surendranath says. “The basic science that leads us there is understanding how you can tailor the electrolytes to foster these processes.”

    The research was funded by the UMRP Partnership for Progress on Sustainable Development in Africa, a fellowship from the MIT Tata Center for Technology and Design, and a National Defense Science and Engineering Graduate Fellowship.

  • Using combustion to make better batteries

    For more than a century, much of the world has run on the combustion of fossil fuels. Now, to avert the threat of climate change, the energy system is changing. Notably, solar and wind systems are replacing fossil fuel combustion for generating electricity and heat, and batteries are replacing the internal combustion engine for powering vehicles. As the energy transition progresses, researchers worldwide are tackling the many challenges that arise.

    Sili Deng has spent her career thinking about combustion. Now an assistant professor in the MIT Department of Mechanical Engineering and the Class of 1954 Career Development Professor, Deng leads a group that, among other things, develops theoretical models to help understand and control combustion systems to make them more efficient and to control the formation of emissions, including particles of soot.

    “So we thought, given our background in combustion, what’s the best way we can contribute to the energy transition?” says Deng. In considering the possibilities, she notes that combustion refers only to the process — not to what’s burning. “While we generally think of fossil fuels when we think of combustion, the term ‘combustion’ encompasses many high-temperature chemical reactions that involve oxygen and typically emit light and large amounts of heat,” she says.

    Given that definition, she saw another role for the expertise she and her team have developed: They could explore the use of combustion to make materials for the energy transition. Under carefully controlled conditions, flames can be used to produce not polluting soot, but valuable materials, including some that are critical in the manufacture of lithium-ion batteries.

    Improving the lithium-ion battery by lowering costs

    The demand for lithium-ion batteries is projected to skyrocket in the coming decades. Batteries will be needed to power the growing fleet of electric cars and to store the electricity produced by solar and wind systems so it can be delivered later when those sources aren’t generating. Some experts project that the global demand for lithium-ion batteries may increase tenfold or more in the next decade.

    Given such projections, many researchers are looking for ways to improve the lithium-ion battery technology. Deng and her group aren’t materials scientists, so they don’t focus on making new and better battery chemistries. Instead, their goal is to find a way to lower the high cost of making all of those batteries. And much of the cost of making a lithium-ion battery can be traced to the manufacture of materials used to make one of its two electrodes — the cathode.

    The MIT researchers began their search for cost savings by considering the methods now used to produce cathode materials. The raw materials are typically salts of several metals, including lithium, which provides ions — the electrically charged particles that move when the battery is charged and discharged. The processing technology aims to produce tiny particles, each one made up of a mixture of those ingredients, with the atoms arranged in the specific crystalline structure that will deliver the best performance in the finished battery.

    For the past several decades, companies have manufactured those cathode materials using a two-stage process called coprecipitation. In the first stage, the metal salts — excluding the lithium — are dissolved in water and thoroughly mixed inside a chemical reactor. Chemicals are added to change the acidity (the pH) of the mixture, and particles made up of the combined salts precipitate out of the solution. The particles are then removed, dried, ground up, and put through a sieve.

    A change in pH won’t cause lithium to precipitate, so it is added in the second stage. Solid lithium is ground together with the particles from the first stage until lithium atoms permeate the particles. The resulting material is then heated, or “annealed,” to ensure complete mixing and to achieve the targeted crystalline structure. Finally, the particles go through a “deagglomerator” that separates any particles that have joined together, and the cathode material emerges.

    Coprecipitation produces the needed materials, but the process is time-consuming. The first stage takes about 10 hours, and the second stage requires about 13 hours of annealing at a relatively low temperature (750 degrees Celsius). In addition, to prevent cracking during annealing, the temperature is gradually “ramped” up and down, which takes another 11 hours. The process is thus not only time-consuming but also energy-intensive and costly.

    For the past two years, Deng and her group have been exploring better ways to make the cathode material. “Combustion is very effective at oxidizing things, and the materials for lithium-ion batteries are generally mixtures of metal oxides,” says Deng. That being the case, they thought this could be an opportunity to use a combustion-based process called flame synthesis.

    A new way of making a high-performance cathode material

    The first task for Deng and her team — mechanical engineering postdoc Jianan Zhang, Valerie L. Muldoon ’20, SM ’22, and current graduate students Maanasa Bhat and Chuwei Zhang — was to choose a target material for their study. They decided to focus on a mixture of metal oxides consisting of nickel, cobalt, and manganese plus lithium. Known as “NCM811,” this material is widely used and has been shown to produce cathodes for batteries that deliver high performance; in an electric vehicle, that means a long driving range, rapid discharge and recharge, and a long lifetime. To better define their target, the researchers examined the literature to determine the composition and crystalline structure of NCM811 that has been shown to deliver the best performance as a cathode material.

    They then considered three possible approaches to improving on the coprecipitation process for synthesizing NCM811: They could simplify the system (to cut capital costs), speed up the process, or cut the energy required.

    “Our first thought was, what if we can mix together all of the substances — including the lithium — at the beginning?” says Deng. “Then we would not need to have the two stages” — a clear simplification over coprecipitation.

    Introducing FASP

    One process widely used in the chemical and other industries to fabricate nanoparticles is a type of flame synthesis called flame-assisted spray pyrolysis, or FASP. Deng’s concept for using FASP to make their targeted cathode powders proceeds as follows.

    The precursor materials — the metal salts (including the lithium) — are mixed with water, and the resulting solution is sprayed as fine droplets by an atomizer into a combustion chamber. There, a flame of burning methane heats up the mixture. The water evaporates, leaving the precursor materials to decompose, oxidize, and solidify to form the powder product. A cyclone separates particles of different sizes, and a baghouse filters out those that aren’t useful. The collected particles would then be annealed and deagglomerated.

    To investigate and optimize this concept, the researchers developed a lab-scale FASP setup consisting of a homemade ultrasonic nebulizer, a preheating section, a burner, a filter, and a vacuum pump that withdraws the powders that form. Using that system, they could control the details of the heating process: The preheating section replicates conditions as the material first enters the combustion chamber, and the burner replicates conditions as it passes the flame. That setup allowed the team to explore operating conditions that would give the best results.

    Their experiments showed marked benefits over coprecipitation. The nebulizer breaks up the liquid solution into fine droplets, ensuring atomic-level mixing. The water simply evaporates, so there’s no need to change the pH or to separate the solids from a liquid. As Deng notes, “You just let the gas go, and you’re left with the particles, which is what you want.” With lithium included at the outset, there’s no need for mixing solids with solids, which is neither efficient nor effective.

    They could even control the structure, or “morphology,” of the particles that formed. In one series of experiments, they tried exposing the incoming spray to different rates of temperature change over time. They found that the temperature “history” has a direct impact on morphology: With no preheating, the particles burst apart, and with rapid preheating, the particles were hollow. The best outcomes came when they used preheating temperatures ranging from 175 to 225 degrees Celsius. Experiments with coin-cell batteries (laboratory devices used for testing battery materials) confirmed that by adjusting the preheating temperature, they could achieve a particle morphology that would optimize the performance of their materials.

    Best of all, the particles formed in seconds. Assuming the time needed for conventional annealing and deagglomerating, the new setup could synthesize the finished cathode material in half the total time needed for coprecipitation. Moreover, the first stage of the coprecipitation system is replaced by a far simpler setup — a savings in capital costs.

    “We were very happy,” says Deng. “But then we thought, if we’ve changed the precursor side so the lithium is mixed well with the salts, do we need to have the same process for the second stage? Maybe not!”

    Improving the second stage

    The key time- and energy-consuming step in the second stage is the annealing. In today’s coprecipitation process, the strategy is to anneal at a low temperature for a long time, giving the operator time to manipulate and control the process. But running a furnace for some 20 hours — even at a low temperature — consumes a lot of energy.

    Based on their studies thus far, Deng thought, “What if we slightly increase the temperature but reduce the annealing time by orders of magnitude? Then we could cut energy consumption, and we might still achieve the desired crystal structure.”

    However, experiments at slightly elevated temperatures and short treatment times didn’t bring the results they had hoped for. In transmission electron microscope (TEM) images, the particles that formed had clouds of light-looking nanoscale particles attached to their surfaces. When the researchers performed the same experiments without adding the lithium, those nanoparticles didn’t appear. Based on that and other tests, they concluded that the nanoparticles were pure lithium. So, it seemed like long-duration annealing would be needed to ensure that the lithium made its way inside the particles.

    But they then came up with a different solution to the lithium-distribution problem. They added a small amount — just 1 percent by weight — of an inexpensive compound called urea to their mixture. In TEM images of the particles formed, the “undesirable nanoparticles were largely gone,” says Deng.

    Experiments in laboratory coin cells showed that the addition of urea significantly altered the response to changes in the annealing temperature. When the urea was absent, raising the annealing temperature led to a dramatic decline in the performance of the cathode material that formed. But with the urea present, the performance of the material that formed was unaffected by temperature changes.

    That result meant that — as long as the urea was added with the other precursors — they could push up the temperature, shrink the annealing time, and omit the gradual ramp-up and cool-down process. Further imaging studies confirmed that their approach yields the desired crystal structure and the homogeneous elemental distribution of the cobalt, nickel, manganese, and lithium within the particles. Moreover, in tests of various performance measures, their materials did as well as materials produced by coprecipitation or by other methods using long-time heat treatment. Indeed, the performance was comparable to that of commercial batteries with cathodes made of NCM811.

    So now the long and expensive second stage required in standard coprecipitation could be replaced by just 20 minutes of annealing at about 870 degrees Celsius plus 20 minutes of cooling at room temperature.
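
    Adding up the durations quoted in this article gives a sense of the overall speedup. The comparison below treats the steps as sequential and uses the roughly 20-minute particle-generation time mentioned in the scale-up discussion later on; it is an illustration, not a figure reported by the team:

    ```python
    # Rough process totals implied by durations quoted in this article (hours).
    coprecipitation_h = 10 + 13 + 11     # stage 1 + low-temp annealing + ramping
    fasp_urea_h = (20 + 20 + 20) / 60    # ~20 min synthesis, 20 min anneal, 20 min cool

    print(f"coprecipitation: ~{coprecipitation_h} h")   # ~34 h
    print(f"FASP with urea:  ~{fasp_urea_h:.0f} h")     # ~1 h
    ```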

    Theory, continuing work, and planning for scale-up

    While experimental evidence supports their approach, Deng and her group are now working to understand why it works. “Getting the underlying physics right will help us design the process to control the morphology and to scale up the process,” says Deng. And they have a hypothesis for why the lithium nanoparticles in their flame synthesis process end up on the surfaces of the larger particles — and why the presence of urea solves that problem.

    According to their theory, without the added urea, the metal and lithium atoms are initially well-mixed within the droplet. But as heating progresses, the lithium diffuses to the surface and ends up as nanoparticles attached to the solidified particle. As a result, a long annealing process is needed to move the lithium in among the other atoms.

    When the urea is present, it starts out mixed with the lithium and other atoms inside the droplet. As temperatures rise, the urea decomposes, forming bubbles. As heating progresses, the bubbles burst, increasing circulation, which keeps the lithium from diffusing to the surface. The lithium ends up uniformly distributed, so the final heat treatment can be very short.

    The researchers are now designing a system to suspend a droplet of their mixture so they can observe the circulation inside it, with and without the urea present. They’re also developing experiments to examine how droplets vaporize, employing tools and methods they have used in the past to study how hydrocarbons vaporize inside internal combustion engines.

    They also have ideas about how to streamline and scale up their process. In coprecipitation, the first stage takes 10 to 20 hours, so one batch at a time moves on to the second stage to be annealed. In contrast, the novel FASP process generates particles in 20 minutes or less — a rate that’s consistent with continuous processing. In their design for an “integrated synthesis system,” the particles coming out of the baghouse are deposited on a belt that carries them for 10 or 20 minutes through a furnace. A deagglomerator then breaks any attached particles apart, and the cathode powder emerges, ready to be fabricated into a high-performance cathode for a lithium-ion battery. The cathode powders for high-performance lithium-ion batteries would thus be manufactured at unprecedented speed, low cost, and low energy use.

    Deng notes that every component in their integrated system is already used in industry, generally at a large scale and high flow-through rate. “That’s why we see great potential for our technology to be commercialized and scaled up,” she says. “Where our expertise comes into play is in designing the combustion chamber to control the temperature and heating rate so as to produce particles with the desired morphology.” And while a detailed economic analysis has yet to be performed, it seems clear that their technique will be faster, the equipment simpler, and the energy use lower than other methods of manufacturing cathode materials for lithium-ion batteries — potentially a major contribution to the ongoing energy transition.

    This research was supported by the MIT Department of Mechanical Engineering.

    This article appears in the Winter 2023 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • How to pull carbon dioxide out of seawater

    As carbon dioxide continues to build up in the Earth’s atmosphere, research teams around the world have spent years seeking ways to remove the gas efficiently from the air. Meanwhile, the world’s number one “sink” for carbon dioxide from the atmosphere is the ocean, which soaks up some 30 to 40 percent of all of the gas produced by human activities.

    Recently, the possibility of removing carbon dioxide directly from ocean water has emerged as another promising possibility for mitigating CO2 emissions, one that could potentially someday even lead to overall net negative emissions. But, like air capture systems, the idea has not yet led to any widespread use, though there are a few companies attempting to enter this area.

    Now, a team of researchers at MIT says they may have found the key to a truly efficient and inexpensive removal mechanism. The findings were reported this week in the journal Energy & Environmental Science, in a paper by MIT professors T. Alan Hatton and Kripa Varanasi, postdoc Seoni Kim, and graduate students Michael Nitzsche, Simon Rufer, and Jack Lake.

    The existing methods for removing carbon dioxide from seawater apply a voltage across a stack of membranes to acidify a feed stream by water splitting. This converts bicarbonates in the water to molecules of CO2, which can then be removed under vacuum. Hatton, who is the Ralph Landau Professor of Chemical Engineering, notes that the membranes are expensive, and chemicals are required to drive the overall electrode reactions at either end of the stack, adding further to the expense and complexity of the processes. “We wanted to avoid the need for introducing chemicals to the anode and cathode half cells and to avoid the use of membranes if at all possible,” he says.

    The team came up with a reversible process consisting of membrane-free electrochemical cells. Reactive electrodes are used to release protons to the seawater fed to the cells, driving the release of the dissolved carbon dioxide from the water. The process is cyclic: It first acidifies the water to convert dissolved inorganic bicarbonates to molecular carbon dioxide, which is collected as a gas under vacuum. Then, the water is fed to a second set of cells with a reversed voltage, to recover the protons and turn the acidic water back to alkaline before releasing it back to the sea. The roles of the two cells are periodically reversed, once one set of electrodes is depleted of protons (during acidification) and the other has been regenerated (during alkalization).

    This removal of carbon dioxide and reinjection of alkaline water could slowly start to reverse, at least locally, the acidification of the oceans that has been caused by carbon dioxide buildup, which in turn has threatened coral reefs and shellfish, says Varanasi, a professor of mechanical engineering. The reinjection of alkaline water could be done through dispersed outlets or far offshore to avoid a local spike of alkalinity that could disrupt ecosystems, they say.

    “We’re not going to be able to treat the entire planet’s emissions,” Varanasi says. But the reinjection might be done in some cases in places such as fish farms, which tend to acidify the water, so this could be a way of helping to counter that effect.

    Once the carbon dioxide is removed from the water, it still needs to be disposed of, as with other carbon removal processes. For example, it can be buried in deep geologic formations under the sea floor, or it can be chemically converted into a compound like ethanol, which can be used as a transportation fuel, or into other specialty chemicals. “You can certainly consider using the captured CO2 as a feedstock for chemicals or materials production, but you’re not going to be able to use all of it as a feedstock,” says Hatton. “You’ll run out of markets for all the products you produce, so no matter what, a significant amount of the captured CO2 will need to be buried underground.”

    Initially at least, the idea would be to couple such systems with existing or planned infrastructure that already processes seawater, such as desalination plants. “This system is scalable so that we could integrate it potentially into existing processes that are already processing ocean water or in contact with ocean water,” Varanasi says. There, the carbon dioxide removal could be a simple add-on to existing processes, which already return vast amounts of water to the sea, and it would not require consumables like chemical additives or membranes.

    “With desalination plants, you’re already pumping all the water, so why not co-locate there?” Varanasi says. “A bunch of capital costs associated with the way you move the water, and the permitting, all that could already be taken care of.”

    The system could also be implemented by ships that would process water as they travel, in order to help mitigate the significant contribution of ship traffic to overall emissions. There are already international mandates to lower shipping’s emissions, and “this could help shipping companies offset some of their emissions, and turn ships into ocean scrubbers,” Varanasi says.

    The system could also be implemented at locations such as offshore drilling platforms, or at aquaculture farms. Eventually, it could lead to a deployment of free-standing carbon removal plants distributed globally.

    The process could be more efficient than air-capture systems, Hatton says, because the concentration of carbon dioxide in seawater is more than 100 times greater than it is in air. In direct air-capture systems it is first necessary to capture and concentrate the gas before recovering it. “The oceans are large carbon sinks, however, so the capture step has already kind of been done for you,” he says. “There’s no capture step, only release.” That means the volumes of material that need to be handled are much smaller, potentially simplifying the whole process and reducing the footprint requirements.
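
    The "more than 100 times" figure is consistent with textbook numbers. Using roughly 420 ppm of CO2 in air and about 2.2 millimoles of dissolved inorganic carbon per kilogram of seawater (typical values, assumed here rather than taken from the paper):

    ```python
    # Order-of-magnitude check of the concentration ratio, using typical values
    # (assumptions, not figures from the paper).
    air_ppm = 420e-6            # mole fraction of CO2 in air
    M_CO2, M_AIR = 44.0, 29.0   # molar masses, g/mol
    RHO_AIR, RHO_SEA = 1.2, 1025.0   # densities, kg/m^3
    dic = 2.2e-3                # mol dissolved inorganic carbon per kg seawater

    co2_air = air_ppm * (M_CO2 / M_AIR) * RHO_AIR * 1000  # g CO2 per m^3 of air
    co2_sea = dic * M_CO2 * RHO_SEA                       # g CO2-eq of DIC per m^3
    print(f"air {co2_air:.2f} g/m^3, sea {co2_sea:.0f} g/m^3, "
          f"ratio ~{co2_sea / co2_air:.0f}x")             # roughly 130x
    ```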

    The research is continuing, with one goal being to find an alternative to the present step that requires a vacuum to remove the separated carbon dioxide from the water. Another need is to identify operating strategies to prevent precipitation of minerals that can foul the electrodes in the alkalinization cell, an inherent issue that reduces the overall efficiency in all reported approaches. Hatton notes that significant progress has been made on these issues, but that it is still too early to report on them. The team expects that the system could be ready for a practical demonstration project within about two years.

    “The carbon dioxide problem is the defining problem of our life, of our existence,” Varanasi says. “So clearly, we need all the help we can get.”

    The work was supported by ARPA-E.

  • To decarbonize the chemical industry, electrify it

    The chemical industry is the world’s largest industrial energy consumer and the third-largest source of industrial emissions, according to the International Energy Agency. In 2019, the industrial sector as a whole was responsible for 24 percent of global greenhouse gas emissions. And yet, as the world races to find pathways to decarbonization, the chemical industry has been largely untouched.

    “When it comes to climate action and dealing with the emissions that come from the chemical sector, the slow pace of progress is partly technical and partly driven by the hesitation on behalf of policymakers to overly impact the economic competitiveness of the sector,” says Dharik Mallapragada, a principal research scientist at the MIT Energy Initiative.

    With so many of the items we interact with in our daily lives — from soap to baking soda to fertilizer — deriving from products of the chemical industry, the sector has become a major source of economic activity and employment for many nations, including the United States and China. But as the global demand for chemical products continues to grow, so do the industry’s emissions.

    New sustainable chemical production methods need to be developed and deployed, and current emission-intensive chemical production technologies need to be reconsidered, urge the authors of a new paper published in Joule. Researchers from DC-MUSE, a multi-institution research initiative, argue that electrification powered by low-carbon sources should be viewed more broadly as a viable decarbonization pathway for the chemical industry. In this paper, they shine a light on different potential methods to do just that.

    “Generally, the perception is that electrification can play a role in this sector — in a very narrow sense — in that it can replace fossil fuel combustion by providing the heat that the combustion is providing,” says Mallapragada, a member of DC-MUSE. “What we argue is that electrification could be much more than that.”

    The researchers outline four technological pathways — ranging from more mature, near-term options to less technologically mature options in need of research investment — and present the opportunities and challenges associated with each.

    The first two pathways directly replace fossil fuel-produced heat (which facilitates the reactions inherent in chemical production) with electricity or electrochemically generated hydrogen. The researchers suggest that both options could be deployed now and potentially be used to retrofit existing facilities. Electrolytic hydrogen is also highlighted as an opportunity to replace fossil fuel-produced hydrogen (a process that emits carbon dioxide) as a critical chemical feedstock. In 2020, fossil-based hydrogen supplied nearly all hydrogen demand (90 megatons) in the chemical and refining industries — hydrogen’s largest consumers.

    The researchers note that increasing the role of electricity in decarbonizing the chemical industry will directly affect the decarbonization of the power grid. They stress that to successfully implement these technologies, their operation must coordinate with the power grid in a mutually beneficial manner to avoid overburdening it. “If we’re going to be serious about decarbonizing the sector and relying on electricity for that, we have to be creative in how we use it,” says Mallapragada. “Otherwise we run the risk of having addressed one problem, while creating a massive problem for the grid in the process.”
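
    The scale involved is easy to illustrate. If the 90 megatons of hydrogen mentioned above were instead produced electrolytically at roughly 50 kilowatt-hours of electricity per kilogram (a typical electrolyzer figure, assumed here rather than taken from the paper), the electricity demand would be a sizeable fraction of today's global generation:

    ```python
    # Back-of-the-envelope grid impact of fully electrolytic hydrogen.
    # kWh-per-kg and global-generation figures are assumptions, not from the paper.
    H2_MEGATONS = 90          # annual hydrogen demand cited in the article
    KWH_PER_KG = 50           # assumed electrolyzer electricity consumption
    GLOBAL_TWH = 28000        # assumed global annual electricity generation

    twh = H2_MEGATONS * 1e9 * KWH_PER_KG / 1e9   # Mt -> kg, then kWh -> TWh
    print(f"~{twh:,.0f} TWh/yr, ~{twh / GLOBAL_TWH:.0%} of global generation")
    ```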

    Electrified processes have the potential to be much more flexible than conventional fossil fuel-driven processes. This can reduce the cost of chemical production by allowing producers to shift electricity consumption to times when the cost of electricity is low. “Process flexibility is particularly impactful during stressed power grid conditions and can help better accommodate renewable generation resources, which are intermittent and are often poorly correlated with daily power grid cycles,” says Yury Dvorkin, an associate research professor at the Johns Hopkins Ralph O’Connor Sustainable Energy Institute. “It’s beneficial for potential adopters because it can help them avoid consuming electricity during high-price periods.”

    Dvorkin adds that some intermediate energy carriers, such as hydrogen, can potentially be used as highly efficient energy storage for day-to-day operations and as long-term energy storage. This would help support the power grid during extreme events when traditional and renewable generators may be unavailable. “The application of long-duration storage is of particular interest as this is a key enabler of a low-emissions society, yet not widespread beyond pumped hydro units,” he says. “However, as we envision electrified chemical manufacturing, it is important to ensure that the supplied electricity is sourced from low-emission generators to prevent emissions leakages from the chemical to power sector.” 

    The next two pathways introduced — utilizing electrochemistry and plasma — are less technologically mature but have the potential to replace energy- and carbon-intensive thermochemical processes currently used in the industry. By adopting electrochemical processes or plasma-driven reactions instead, chemical transformations can occur at lower temperatures and pressures, potentially enhancing efficiency. “These reaction pathways also have the potential to enable more flexible, grid-responsive plants and the deployment of modular manufacturing plants that leverage distributed chemical feedstocks such as biomass waste — further enhancing sustainability in chemical manufacturing,” says Miguel Modestino, the director of the Sustainable Engineering Initiative at the New York University Tandon School of Engineering.

    A large barrier to deep decarbonization of chemical manufacturing relates to its complex, multi-product nature. But, according to the researchers, each of these electricity-driven pathways supports chemical industry decarbonization for various feedstock choices and end-of-life disposal decisions. Each should be evaluated in comprehensive techno-economic and environmental life cycle assessments to weigh trade-offs and establish suitable cost and performance metrics.

    Regardless of the pathway chosen, the researchers stress the need for active research and development and deployment of these technologies. They also emphasize the importance of workforce training and development running in parallel to technology development. As André Taylor, the director of DC-MUSE, explains, “There is a healthy skepticism in the industry regarding electrification and adoption of these technologies, as it involves processing chemicals in a new way.” The workforce at different levels of the industry hasn’t necessarily been exposed to ideas related to the grid, electrochemistry, or plasma. The researchers say that workforce training at all levels will help build greater confidence in these different solutions and support customer-driven industry adoption.

    “There’s no silver bullet, which is kind of the standard line with all climate change solutions,” says Mallapragada. “Each option has pros and cons, as well as unique advantages. But being aware of the portfolio of options in which you can use electricity allows us to have a better chance of success and of reducing emissions — and doing so in a way that supports grid decarbonization.”

    This work was supported, in part, by the Alfred P. Sloan Foundation.

  • Computers that power self-driving cars could be a huge driver of global carbon emissions

    In the future, the energy needed to run the powerful computers on board a global fleet of autonomous vehicles could generate as many greenhouse gas emissions as all the data centers in the world today.

    That is one key finding of a new study from MIT researchers that explored the potential energy consumption and related carbon emissions if autonomous vehicles are widely adopted.

    The data centers that house the physical computing infrastructure used for running applications are widely known for their large carbon footprint: They currently account for about 0.3 percent of global greenhouse gas emissions, or about as much carbon as the country of Argentina produces annually, according to the International Energy Agency. Realizing that less attention has been paid to the potential footprint of autonomous vehicles, the MIT researchers built a statistical model to study the problem. They determined that 1 billion autonomous vehicles, each driving for one hour per day with a computer consuming 840 watts, would consume enough energy to generate about the same amount of emissions as data centers currently do.

    The researchers also found that in over 90 percent of modeled scenarios, to keep autonomous vehicle emissions from zooming past current data center emissions, each vehicle must use less than 1.2 kilowatts of power for computing, which would require more efficient hardware. In one scenario — where 95 percent of the global fleet of vehicles is autonomous in 2050, computational workloads double every three years, and the world continues to decarbonize at the current rate — they found that hardware efficiency would need to double faster than every 1.1 years to keep emissions under those levels.

    “If we just keep the business-as-usual trends in decarbonization and the current rate of hardware efficiency improvements, it doesn’t seem like it is going to be enough to constrain the emissions from computing onboard autonomous vehicles. This has the potential to become an enormous problem. But if we get ahead of it, we could design more efficient autonomous vehicles that have a smaller carbon footprint from the start,” says first author Soumya Sudhakar, a graduate student in aeronautics and astronautics.

    Sudhakar wrote the paper with her co-advisors Vivienne Sze, associate professor in the Department of Electrical Engineering and Computer Science (EECS) and a member of the Research Laboratory of Electronics (RLE); and Sertac Karaman, associate professor of aeronautics and astronautics and director of the Laboratory for Information and Decision Systems (LIDS). The research appears today in the January-February issue of IEEE Micro.

    Modeling emissions

    The researchers built a framework to explore the operational emissions from computers on board a global fleet of electric vehicles that are fully autonomous, meaning they don’t require a back-up human driver.

    The model is a function of the number of vehicles in the global fleet, the power of each computer on each vehicle, the hours driven by each vehicle, and the carbon intensity of the electricity powering each computer.
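
    In its simplest form, that relationship multiplies straight through. The sketch below reproduces it with the fleet assumptions stated above and an assumed average grid carbon intensity (the 0.5 kg CO2 per kilowatt-hour value is illustrative, not a parameter from the paper):

    ```python
    # Minimal sketch of the emissions model's structure as described above.
    N_VEHICLES = 1e9            # global fleet size
    POWER_KW = 0.840            # onboard computer draw (840 watts)
    HOURS_PER_DAY = 1.0         # driving time per vehicle per day
    KG_CO2_PER_KWH = 0.5        # assumed average grid carbon intensity

    kwh_per_year = N_VEHICLES * POWER_KW * HOURS_PER_DAY * 365
    gigatons = kwh_per_year * KG_CO2_PER_KWH / 1e12
    print(f"~{gigatons:.2f} Gt CO2/yr")   # same order as data centers today
    ```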

    “On its own, that looks like a deceptively simple equation. But each of those variables contains a lot of uncertainty because we are considering an emerging application that is not here yet,” Sudhakar says.

    For instance, some research suggests that the amount of time driven in autonomous vehicles might increase because people can multitask while driving and the young and the elderly could drive more. But other research suggests that time spent driving might decrease because algorithms could find optimal routes that get people to their destinations faster.

    In addition to considering these uncertainties, the researchers also needed to model advanced computing hardware and software that doesn’t exist yet.

    To accomplish that, they modeled the workload of a popular algorithm for autonomous vehicles, known as a multitask deep neural network because it can perform many tasks at once. They explored how much energy this deep neural network would consume if it were processing many high-resolution inputs from many cameras with high frame rates, simultaneously.

    When they used the probabilistic model to explore different scenarios, Sudhakar was surprised by how quickly the algorithms’ workload added up.

    For example, if an autonomous vehicle has 10 deep neural networks processing images from 10 cameras, and that vehicle drives for one hour a day, it will make 21.6 million inferences each day. One billion vehicles would make 21.6 quadrillion inferences. To put that into perspective, all of Facebook’s data centers worldwide make a few trillion inferences each day (1 quadrillion is 1,000 trillion).
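
    Those figures imply a per-camera frame rate of 60 frames per second (10 networks times 10 cameras times 60 frames per second times 3,600 seconds is 21.6 million); the frame rate is the one assumption not stated explicitly above:

    ```python
    # Reproducing the inference arithmetic; the 60 fps frame rate is implied.
    networks, cameras, fps = 10, 10, 60
    seconds_driven = 3600                  # one hour of driving per day

    per_vehicle = networks * cameras * fps * seconds_driven
    fleet = per_vehicle * 1e9              # one billion vehicles
    print(f"{per_vehicle / 1e6:.1f} million inferences per vehicle per day")
    print(f"{fleet / 1e15:.1f} quadrillion inferences across the fleet per day")
    ```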

    “After seeing the results, this makes a lot of sense, but it is not something that is on a lot of people’s radar. These vehicles could actually be using a ton of computer power. They have a 360-degree view of the world, so while we have two eyes, they may have 20 eyes, looking all over the place and trying to understand all the things that are happening at the same time,” Karaman says.

    Autonomous vehicles would be used for moving goods, as well as people, so there could be a massive amount of computing power distributed along global supply chains, he says. And their model only considers computing — it doesn’t take into account the energy consumed by vehicle sensors or the emissions generated during manufacturing.

    Keeping emissions in check

    To keep emissions from spiraling out of control, the researchers found that each autonomous vehicle needs to consume less than 1.2 kilowatts of power for computing. For that to be possible, computing hardware must become more efficient at a significantly faster pace, doubling in efficiency about every 1.1 years.

    One way to boost that efficiency could be to use more specialized hardware, which is designed to run specific driving algorithms. Because researchers know the navigation and perception tasks required for autonomous driving, it could be easier to design specialized hardware for those tasks, Sudhakar says. But vehicles tend to have 10- or 20-year lifespans, so one challenge in developing specialized hardware would be to “future-proof” it so it can run new algorithms.

    In the future, researchers could also make the algorithms more efficient, so they would need less computing power. However, this is also challenging because trading off some accuracy for more efficiency could hamper vehicle safety.

    Now that they have demonstrated this framework, the researchers want to continue exploring hardware efficiency and algorithm improvements. In addition, they say their model can be enhanced by characterizing embodied carbon from autonomous vehicles — the carbon emissions generated when a car is manufactured — and emissions from a vehicle’s sensors.

    While there are still many scenarios to explore, the researchers hope that this work sheds light on a potential problem people may not have considered.

    “We are hoping that people will think of emissions and carbon efficiency as important metrics to consider in their designs. The energy consumption of an autonomous vehicle is really critical, not just for extending the battery life, but also for sustainability,” says Sze.

    This research was funded, in part, by the National Science Foundation and the MIT-Accenture Fellowship.

  • Sustainable supply chains put the customer first

    When we consider the supply chain, we typically think of factories, ships, trucks, and warehouses. Yet the customer side is equally important, especially in efforts to make our distribution networks more sustainable. Customers are an untapped resource in building sustainability, says Josué C. Velázquez Martínez, a research scientist at the MIT Center for Transportation and Logistics.

    Velázquez Martínez, who is director of MIT’s Sustainable Supply Chain Lab, investigates how customer-facing supply chains can be made more environmentally and socially sustainable. One example is the Green Button project, which explores how to optimize e-commerce delivery schedules to reduce carbon emissions and persuade customers to choose less carbon-intensive four- or five-day shipping options instead of one- or two-day delivery. Velázquez Martínez has also launched the MIT Low Income Firms Transformation (LIFT) Lab, which is researching ways to improve micro-retailer supply chains in the developing world to provide owners with the necessary tools for survival.

    “The definition of sustainable supply chain keeps evolving because things that were sustainable 20 to 30 years ago are not as sustainable now,” says Velázquez Martínez. “Today, there are more companies that are capturing information to build strategies for environmental, economic, and social sustainability. They are investing in alternative energy and other solutions to make the supply chain more environmentally friendly and are tracking their suppliers and identifying key vulnerabilities. A big part of this is an attempt to create fairer conditions for people who work in supply chains or are dependent on them.”

    The move toward sustainable supply chains is being driven as much by people as by companies, whether those people are acting as selective consumers or voting citizens. The consumer aspect is often overlooked, says Velázquez Martínez. “Consumers are the ones who move the supply chain. We are looking at how companies can provide transparency to involve customers in their sustainability strategy.”

    Proposed solutions for sustainability are not always as effective as promised. Some fashion rental schemes fall into this category, says Velázquez Martínez. “There are many new rental companies that are trying to get more use out of clothes to offset the emissions associated with production. We recently researched the environmental impact of monthly subscription models where consumers pay a fee to receive clothes for a month before returning them, as well as peer-to-peer sharing models.” 

    The researchers found that while rental services generally have a lower carbon footprint than retail sales, hidden emissions from logistics played a surprisingly large role. “First, you need to deliver the clothes and pick them up, and there are high return rates,” says Velázquez Martínez. “When you factor in dry cleaning and packaging emissions, the rental models in some cases have a worse carbon footprint than buying new clothes.” Peer-to-peer sharing could be better, he adds, but that depends on how far the consumers travel to meet-up points. 
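    The arithmetic behind that finding can be sketched in a few lines. The Python snippet below is a minimal per-wear comparison using entirely hypothetical emission figures (none of these numbers come from the study); it simply shows how repeated delivery, pickup, cleaning, and packaging emissions can overtake the amortized production footprint of a purchased garment.

    # Back-of-the-envelope rental-vs.-purchase comparison.
    # All emission figures are illustrative placeholders, not study data.

    def emissions_per_wear_rental(delivery_kg, pickup_kg, cleaning_kg,
                                  packaging_kg, wears_per_rental):
        """Rental: every cycle incurs delivery, pickup, dry cleaning,
        and packaging emissions, spread over the wears in that cycle."""
        cycle_kg = delivery_kg + pickup_kg + cleaning_kg + packaging_kg
        return cycle_kg / wears_per_rental

    def emissions_per_wear_purchase(production_kg, total_wears):
        """Ownership: production emissions amortized over the garment's life."""
        return production_kg / total_wears

    # Hypothetical garment: 15 kg CO2e to produce, worn 30 times if bought.
    buy = emissions_per_wear_purchase(production_kg=15, total_wears=30)

    # Same garment rented: each monthly cycle emits ~2 kg CO2e in logistics
    # and the garment is worn twice before being returned.
    rent = emissions_per_wear_rental(delivery_kg=0.6, pickup_kg=0.6,
                                     cleaning_kg=0.5, packaging_kg=0.3,
                                     wears_per_rental=2)

    print(f"buy: {buy:.2f} kg/wear, rent: {rent:.2f} kg/wear")
    # With these placeholder inputs, renting comes to 1.00 kg/wear versus
    # 0.50 kg/wear for buying -- logistics flips the comparison.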

    Typically, says Velázquez Martínez, garments that are worn frequently are not well suited to rental models. “But for specialty clothes such as wedding dresses or prom dresses, it is better to rent.”

    Waiting a few days to save the planet 

    Even before the pandemic, online retailing had gained a second wind thanks to low-cost same- and next-day delivery options. While e-commerce may have drawbacks as a contributor to social isolation and reduced competition, it has proven to be far more eco-friendly than brick-and-mortar shopping, not to mention far more convenient. Yet rapid deliveries are cutting into online shopping’s carbon-cutting advantage.

    In 2019, MIT’s Sustainable Supply Chain Lab launched the Green Button project to study the rapid-delivery phenomenon. The project has been “testing whether consumers would be willing to delay their e-commerce deliveries to reduce the environmental impact of fast shipping,” says Velázquez Martínez. “Many companies such as Walmart and Target have followed Amazon’s 2019 strategy of moving from two-day to same-day delivery. Instead of sending a fully loaded truck to a neighborhood every few days, they now send multiple trucks to that neighborhood every day, and trucks reach each neighborhood on more days of the week. All this increases carbon emissions and makes it hard for shippers to consolidate.”

    Working with Coppel, one of Mexico’s largest retailers, the Green Button project inspired a related Consolidation Ecommerce Project, which built a large-scale mathematical model to guide consolidation strategy. The model determined the delivery time window each neighborhood demanded, then calculated the best days to deliver to each neighborhood so as to meet that window while minimizing carbon emissions.
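    The core trade-off in such a model can be illustrated with a toy calculation. The sketch below is not the Coppel model; it is a minimal Python illustration under the simplifying assumption that each truck visit to a neighborhood carries a fixed emissions cost, so dispatching only once per acceptable waiting window cuts trips, and therefore emissions, roughly in proportion to the window length.

    # Toy consolidation model (not the actual Coppel model): each truck
    # visit to a neighborhood has a fixed emissions cost, so delivering
    # once per acceptable waiting window reduces monthly trips.

    TRIP_KG_CO2 = 10.0  # assumed fixed emissions per truck visit

    def monthly_trip_emissions(window_days, days_in_month=30):
        """Visits needed when orders are consolidated and dispatched once
        per delivery window; emissions scale with the number of trips."""
        trips = -(-days_in_month // window_days)  # ceiling division
        return trips * TRIP_KG_CO2

    # Each neighborhood mapped to the delivery window its customers accept.
    neighborhoods = {"Centro": 1, "Norte": 2, "Sur": 5}

    daily = monthly_trip_emissions(window_days=1)
    for name, window in neighborhoods.items():
        consolidated = monthly_trip_emissions(window)
        saving = 100 * (1 - consolidated / daily)
        print(f"{name}: {window}-day window, emissions cut ~{saving:.0f}%")

    Even this toy version reproduces the qualitative result: neighborhoods willing to wait several days see far larger cuts than those demanding next-day delivery. The real model additionally weighs the factors described below, such as consolidation capacity and inbound operations.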

    No matter what mixture of delivery times was used, the consolidation model helped retailers schedule deliveries more efficiently. Yet the biggest cuts in emissions emerged when customers were willing to wait several days.


    “When we ran a month-long simulation comparing our model for four-to-five-day delivery with Coppel’s existing model for one- or two-day delivery, we saw savings in fuel consumption of over 50 percent on certain routes,” says Velázquez Martínez. “This is huge compared to other strategies for squeezing more efficiency from the last-mile supply chain, such as routing optimization, where savings are close to 5 percent. The optimal solution depends on factors such as the capacity for consolidation, the frequency of delivery, the store capacity, and the impact on inbound operations.”

    The researchers next set out to determine whether customers could be persuaded to wait longer for deliveries. Given that the price differential between rapid and slower shipping is low or nonexistent, this was a considerable challenge. Yet the same-day habit is only a few years old, and some consumers have come to realize they don’t always need rapid deliveries. “Some consumers who order by rapid delivery find they are too busy to open the packages right away,” says Velázquez Martínez.

    Trees beat kilograms of CO2

    The researchers set out to learn whether consumers would sacrifice a bit of convenience if they knew they were helping to curb climate change. The Green Button project tested different public outreach strategies. One test group was shown the carbon impact of delivery times in kilograms of carbon dioxide (CO2). Another group received the same information expressed as the energy required to recycle a given amount of garbage. A third group learned about emissions in terms of the number of trees required to trap the carbon. “Explaining the impact in terms of trees led to almost 90 percent willing to wait another day or two,” says Velázquez Martínez. “This is compared to less than 40 percent for the group that received the data in kilograms of CO2.”
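    Translating kilograms of CO2 into tree-equivalents is a simple unit conversion. The sketch below assumes a commonly cited rough sequestration rate of about 21 kg of CO2 absorbed per mature tree per year, and the per-delivery emission figures are hypothetical rather than values from the Green Button study.

    # Reframing delivery emissions as tree-equivalents for consumers.
    # The sequestration rate is an assumed round figure, not a study value.

    KG_CO2_PER_TREE_PER_YEAR = 21.0  # assumed absorption per mature tree

    def tree_years(delivery_kg_co2):
        """Tree-years of growth needed to absorb a delivery's emissions."""
        return delivery_kg_co2 / KG_CO2_PER_TREE_PER_YEAR

    # Hypothetical per-package emissions for three shipping speeds.
    for option, kg in [("same-day", 3.0), ("two-day", 1.8), ("five-day", 0.9)]:
        print(f"{option}: {kg} kg CO2 = {tree_years(kg):.2f} tree-years")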

    Another surprise was that responses did not differ by income, gender, or age. “Most studies of green consumers suggest they are predominantly high income, female, highly educated, or younger,” says Velázquez Martínez. “However, our results show that responses were the same across low and high incomes, women and men, and younger and older people. We have shown that disclosing emissions transparently and making the consumer a part of the strategy can be a new opportunity for more consumer-driven logistics sustainability.”

    The researchers are now developing similar models for business-to-business (B2B) e-commerce. “We found that B2B supply chain emissions are often high because many shipping companies require strict delivery windows,” says Velázquez Martínez.  

    The B2B models drill down to examine the Corporate Value Chain (Scope 3) emissions of suppliers. “Although some shipping companies are now asking their suppliers to review emissions, it is a challenge to create a transparent supply chain,” says Velázquez Martínez.  “Technological innovations have made it easier, starting with RFID [radio frequency identification], and then real-time GPS mapping and blockchain. But these technologies need to be more accessible and affordable, and we need more companies willing to use them.” 

    Some companies have been hesitant to dig too deeply into their supply chains, fearing they might uncover a scandal that could damage their reputation, says Velázquez Martínez. Other organizations are forced to confront the issue when nongovernmental organizations investigate sustainability problems such as social injustice in sweatshops and conflict-mineral mines.

    One challenge to building a transparent supply chain is that “in many companies, the sustainability teams are separate from the rest of the company,” says Velázquez Martínez. “Even if the CEOs receive information on sustainability issues, it often doesn’t filter down because the information does not belong to the planners or managers. We are pushing companies to not only account for sustainability factors in supply chain network design but also examine daily operations that affect sustainability. This is a big topic now: How can we translate sustainability information into something that everybody can understand and use?” 

    LIFT Lab lifts micro-retailers  

    In 2016, Velázquez Martínez launched the MIT GeneSys project to gain insights into micro and small enterprises (MSEs) in developing countries. The project released the GeneSys mobile app, which more than 500 students throughout Latin America used to collect data on more than 800 micro-firms. In 2022, he launched the LIFT Lab, which focuses more specifically on studying and improving the supply chains of MSEs.

    Worldwide, some 90 percent of companies have fewer than 10 employees. In Latin America and the Caribbean, companies with fewer than 50 employees represent 99 percent of all companies and 47 percent of employment. 

    Although MSEs represent much of the world’s economy, they are poorly understood, notes Velázquez Martínez. “Those tiny businesses are driving a lot of the economy and serve as important customers for the large companies working in developing countries. They range from small businesses down to people trying to get some money to eat by selling cakes or tacos through their windows.”  

    The MIT LIFT Lab researchers investigated whether MSE supply chain issues could help shed light on why many Latin American countries have been limited to marginal increases in gross domestic product. “Large companies from the developed world that are operating in Latin America, such as Unilever, Walmart, and Coca-Cola, have huge growth there, in some cases higher than they have in the developed world,” says Velázquez Martínez. “Yet, the countries are not developing as fast as we would expect.” 

    The LIFT Lab data showed that while the multinationals are thriving in Latin America, the local MSEs are decreasing in productivity. The study also found the trend has worsened with Covid-19.  

    The LIFT Lab’s first big project, which is sponsored by Mexican beverage and retail company FEMSA, is studying supply chains in Mexico. The study spans 200,000 micro-retailers and 300,000 consumers. In a collaboration with Tecnológico de Monterrey, hundreds of students are helping with a field study.  

    “We are looking at supply chain management and business capabilities and identifying the challenges to adoption of technology and digitalization,” says Velázquez Martínez. “We want to find the best ways for micro-firms to work with suppliers and consumers by identifying the consumers who access this market, as well as the products and services that can best help the micro-firms drive growth.” 

    Based on the earlier GeneSys research, Velázquez Martínez has developed some hypotheses about potential improvements to micro-retailer supply chains, starting with payment terms. “We found that the micro-firms often get the worst purchasing deals. Owners without credit cards and with limited cash often buy in smaller amounts at much higher prices than retailers like Walmart. The big suppliers are squeezing them.”

    While large retailers usually get 60 to 120 days to pay, micro-retailers “either pay at the moment of the transaction or in advance,” says Velázquez Martínez. “In a study of 500 micro-retailers in five countries in Latin America, we found the average payment time was minus seven days, meaning payment a week in advance. These terms reduce cash availability and often lead to bankruptcy.”
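    The cash squeeze these terms create can be quantified with the standard cash conversion cycle, the number of days a firm’s own cash is tied up between paying suppliers and getting paid by customers. The sketch below uses hypothetical inventory and payment figures, not LIFT Lab data.

    # Cash conversion cycle (CCC):
    #   CCC = days inventory held + days to collect from customers
    #         - days allowed to pay suppliers
    # Paying in advance makes days_payable negative, lengthening the cycle.
    # All inputs are hypothetical, not LIFT Lab figures.

    def cash_conversion_cycle(days_inventory, days_receivable, days_payable):
        return days_inventory + days_receivable - days_payable

    # Large retailer: 90 days to pay suppliers, cash sales to customers.
    big = cash_conversion_cycle(days_inventory=30, days_receivable=0,
                                days_payable=90)

    # Micro-retailer: pays seven days in advance (payable terms of -7).
    micro = cash_conversion_cycle(days_inventory=30, days_receivable=0,
                                  days_payable=-7)

    print(f"large retailer CCC: {big} days")    # -60: suppliers finance stock
    print(f"micro-retailer CCC: {micro} days")  # +37: owner's cash tied up

    Each additional day of payment terms shortens this cycle by a day, freeing working capital that would otherwise sit with the supplier.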

    LIFT Lab is working with suppliers to persuade them to offer a minimum payment time of two weeks. “We can show the suppliers that the change in terms will let them move more product and increase sales,” says Velázquez Martínez. “Meanwhile, the micro-retailers gain higher profits and become more stable, even if they may pay a bit more.” 

    LIFT Lab is also looking at ways micro-retailers can leverage smartphones for digitalization and planning. “Some of these companies are keeping records on napkins,” says Velázquez Martínez. “With a cellphone, they can place orders with suppliers and communicate with consumers. We are testing different dashboards for mobile apps to help with planning and financial performance. We are also recommending services the stores can provide, such as paying electricity or water bills. The idea is to build more capabilities and knowledge and increase business competencies for the supply chain, tailored for micro-retailers.”

    From a financial perspective, micro-retailers are not always the most efficient way to move products. Yet they also play an important role in building social cohesion within neighborhoods. By offering more services, the corner bodega can bring people together in ways that are impossible with e-commerce and big-box stores.  

    Whether the consumers are micro-firms buying from suppliers or e-commerce customers waiting for packages, “transparency is key to building a sustainable supply chain,” says Velázquez Martínez. “To change consumer habits, consumers need to be better educated on the impacts of their behaviors. With consumer-facing logistics, ‘The last shall be first, and the first last.’”