More stories

  • A new method for removing lead from drinking water

    Engineers at MIT have developed a new approach to removing lead or other heavy-metal contaminants from water, in a process that they say is far more energy-efficient than any other currently used system, though there are others under development that come close. Ultimately, it might be used to treat lead-contaminated water supplies at the home level, or to treat contaminated water from some chemical or industrial processes.

    The new system is the latest in a series of applications based on findings the same research team first reported six years ago. The method was originally developed for desalination of seawater or brackish water, and was later adapted for removing radioactive compounds from the cooling water of nuclear power plants. The new version is the first that might be applicable for treating household water supplies, as well as industrial uses.

    The findings are published today in the journal Environmental Science and Technology – Water, in a paper by MIT graduate students Huanhuan Tian, Mohammad Alkhadra, and Kameron Conforti, and professor of chemical engineering Martin Bazant.

    “It’s notoriously difficult to remove toxic heavy metal that’s persistent and present in a lot of different water sources,” Alkhadra says. “Obviously there are competing methods today that do this function, so it’s a matter of which method can do it at lower cost and more reliably.”

    The biggest challenge in trying to remove lead is that it is generally present in such tiny concentrations, vastly exceeded by other elements or compounds. For example, sodium is typically present in drinking water at a concentration of tens of parts per million, whereas lead can be highly toxic at just a few parts per billion. Most existing processes, such as reverse osmosis or distillation, remove everything at once, Alkhadra explains. This not only takes much more energy than would be needed for a selective removal, but it’s counterproductive since small amounts of elements such as sodium and magnesium are actually essential for healthy drinking water.
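    The scale of that selectivity problem can be seen with a quick back-of-envelope calculation. The specific concentrations below are assumptions for illustration (sodium at ~20 ppm, lead at ~5 ppb), not measurements from the study:

    ```python
    # Illustrative arithmetic for the concentration gap described above.
    # Values are assumed for illustration, not measured data.
    sodium_ppb = 20_000   # 20 parts per million, expressed in ppb
    lead_ppb = 5          # lead can be toxic at just a few parts per billion

    ratio = sodium_ppb / lead_ppb
    print(ratio)  # → 4000.0: benign sodium ions outnumber the lead target ~4,000 to 1
    ```

    A non-selective process spends energy moving all of those ions, which is why removing everything at once is so much more costly than targeting the lead alone.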

    The new approach is to use a process called shock electrodialysis, in which an electric field is used to produce a shockwave inside a pipe carrying the contaminated water. The shockwave separates the liquid into two streams, selectively pulling certain electrically charged atoms, or ions, toward one side of the flow by tuning the properties of the shockwave to match the target ions, while leaving a stream of relatively pure water on the other side. The stream containing the concentrated lead ions can then be easily separated out using a mechanical barrier in the pipe.

    In principle, “this makes the process much cheaper,” Bazant says, “because the electrical energy that you’re putting in to do the separation is really going after the high-value target, which is the lead. You’re not wasting a lot of energy removing the sodium.” Because the lead is present at such low concentration, “there’s not a lot of current involved in removing those ions, so this can be a very cost-effective way.”

    The process still has its limitations, as it has only been demonstrated at small laboratory scale and at quite slow flow rates. Scaling up the process to make it practical for in-home use will require further research, and larger-scale industrial uses will take even longer. But it could be practical within a few years for some home-based systems, Bazant says.

    For example, a home whose water supply is heavily contaminated with lead might have a system in the cellar that slowly processes a stream of water, filling a tank with lead-free water to be used for drinking and cooking, while leaving most of the water untreated for uses like toilet flushing or watering the lawn. Such uses might be appropriate as an interim measure for places like Flint, Michigan, where the water, mostly contaminated by the distribution pipes, will take many years to remediate through pipe replacements.

    The process could also be adapted for some industrial uses such as cleaning water produced in mining or drilling operations, so that the treated water can be safely disposed of or reused. And in some cases, this could also provide a way of recovering metals that contaminate water but could actually be a valuable product if they were separated out; for example, some such minerals could be used to process semiconductors or pharmaceuticals or other high-tech products, the researchers say.

    Direct comparison of the economics of such a system versus existing methods is difficult, Bazant says, because in filtration systems, for example, the costs are mainly for replacing the filter materials, which quickly clog up and become unusable, whereas in this system the costs are mostly for the ongoing energy input, which is very small. At this point, the shock electrodialysis system has been operated for several weeks, but it’s too soon to estimate the real-world longevity of such a system, he says.

    Developing the process into a scalable commercial product will take some time, but “we have shown how this could be done, from a technical standpoint,” Bazant says. “The main issue would be on the economic side,” he adds. That includes figuring out the most appropriate applications and developing specific configurations that would meet those uses. “We do have a reasonable idea of how to scale this up. So it’s a question of having the resources,” which might be a role for a startup company rather than an academic research lab, he adds.

    “I think this is an exciting result,” he says, “because it shows that we really can address this important application” of cleaning the lead from drinking water. For example, he says, there are places now that perform desalination of seawater using reverse osmosis, but they have to run this expensive process twice in a row, first to get the salt out, and then again to remove the low-level but highly toxic contaminants like lead. This new process might be used instead of the second round of reverse osmosis, at a far lower expenditure of energy.

    The research received support from a MathWorks Engineering Fellowship and a fellowship awarded by MIT’s Abdul Latif Jameel Water and Food Systems Lab, funded by Xylem, Inc.

  • Study: Global cancer risk from burning organic matter comes from unregulated chemicals

    Whenever organic matter is burned, such as in a wildfire, a power plant, a car’s exhaust, or in daily cooking, the combustion releases polycyclic aromatic hydrocarbons (PAHs) — a class of pollutants that is known to cause lung cancer.

    There are more than 100 known types of PAH compounds emitted daily into the atmosphere. Regulators, however, have historically relied on measurements of a single compound, benzo(a)pyrene, to gauge a community’s risk of developing cancer from PAH exposure. Now MIT scientists have found that benzo(a)pyrene may be a poor indicator of this type of cancer risk.

    In a modeling study appearing today in the journal GeoHealth, the team reports that benzo(a)pyrene plays a small part — about 11 percent — in the global risk of developing PAH-associated cancer. Instead, 89 percent of that cancer risk comes from other PAH compounds, many of which are not directly regulated.

    Interestingly, about 17 percent of PAH-associated cancer risk comes from “degradation products” — chemicals that are formed when emitted PAHs react in the atmosphere. Many of these degradation products can in fact be more toxic than the emitted PAH from which they formed.

    The team hopes the results will encourage scientists and regulators to look beyond benzo(a)pyrene, to consider a broader class of PAHs when assessing a community’s cancer risk.

    “Most of the regulatory science and standards for PAHs are based on benzo(a)pyrene levels. But that is a big blind spot that could lead you down a very wrong path in terms of assessing whether cancer risk is improving or not, and whether it’s relatively worse in one place than another,” says study author Noelle Selin, a professor in MIT’s Institute for Data, Systems, and Society, and the Department of Earth, Atmospheric and Planetary Sciences.

    Selin’s MIT co-authors include Jesse Kroll, Amy Hrdina, Ishwar Kohale, Forest White, and Bevin Engelward, and Jamie Kelly (who is now at University College London). Peter Ivatt and Mathew Evans at the University of York are also co-authors.

    Chemical pixels

    Benzo(a)pyrene has historically been the poster chemical for PAH exposure. The compound’s indicator status is largely based on early toxicology studies. But recent research suggests the chemical may not be the PAH representative that regulators have long relied upon.   

    “There has been a bit of evidence suggesting benzo(a)pyrene may not be very important, but this was from just a few field studies,” says Kelly, a former postdoc in Selin’s group and the study’s lead author.

    Kelly and his colleagues instead took a systematic approach to evaluate benzo(a)pyrene’s suitability as a PAH indicator. The team began by using GEOS-Chem, a global, three-dimensional chemical transport model that breaks the world into individual grid boxes and simulates within each box the reactions and concentrations of chemicals in the atmosphere.

    They extended this model to include chemical descriptions of how various PAH compounds, including benzo(a)pyrene, would react in the atmosphere. The team then plugged in recent data from emissions inventories and meteorological observations, and ran the model forward to simulate the concentrations of various PAH chemicals around the world over time.

    Risky reactions

    In their simulations, the researchers started with 16 relatively well-studied PAH chemicals, including benzo(a)pyrene, and traced the concentrations of these chemicals, plus the concentrations of their degradation products over two generations, or chemical transformations. In total, the team evaluated 48 PAH species.

    They then compared these simulated concentrations with actual concentrations of the same chemicals, recorded by monitoring stations around the world. The agreement was close enough to show that the model’s concentration predictions were realistic.

    Then, within each of the model’s grid boxes, the researchers related the concentration of each PAH chemical to its associated cancer risk; to do this, they had to develop a new method based on previous studies in the literature to avoid double-counting risk from the different chemicals. Finally, they overlaid population density maps to predict the number of cancer cases globally, based on the concentration and toxicity of a specific PAH chemical in each location.

    Dividing the cancer cases by population produced the cancer risk associated with that chemical. In this way, the team calculated the cancer risk for each of the 48 compounds, then determined each chemical’s individual contribution to the total risk.
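    As a hypothetical sketch of that attribution arithmetic (the case counts and population below are invented toy numbers chosen to match the reported shares, not the study’s data), the per-chemical risk and contribution could be computed like this:

    ```python
    # Toy illustration of the risk-attribution step: cancer cases per PAH group.
    # Numbers are invented so benzo(a)pyrene's share comes out near 11 percent.
    cases = {
        "benzo(a)pyrene": 1_100,
        "other primary PAHs": 7_200,
        "degradation products": 1_700,
    }
    population = 1_000_000  # assumed exposed population (illustrative)

    total = sum(cases.values())                           # 10,000 cases in this toy example
    risk = {c: n / population for c, n in cases.items()}  # per-person cancer risk
    share = {c: n / total for c, n in cases.items()}      # contribution to total risk

    print(share["benzo(a)pyrene"])  # → 0.11
    ```

    Summing each chemical’s share over all 48 species then yields the global breakdown the paper reports.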

    This analysis revealed that benzo(a)pyrene had a surprisingly small contribution, of about 11 percent, to the overall risk of developing cancer from PAH exposure globally. Eighty-nine percent of cancer risk came from other chemicals. And 17 percent of this risk arose from degradation products.

    “We see places where you can find concentrations of benzo(a)pyrene are lower, but the risk is higher because of these degradation products,” Selin says. “These products can be orders of magnitude more toxic, so the fact that they’re at tiny concentrations doesn’t mean you can write them off.”

    When the researchers compared calculated PAH-associated cancer risks around the world, they found significant differences depending on whether that risk calculation was based solely on concentrations of benzo(a)pyrene or on a region’s broader mix of PAH compounds.

    “If you use the old method, you would find the lifetime cancer risk is 3.5 times higher in Hong Kong versus southern India, but taking into account the differences in PAH mixtures, you get a difference of 12 times,” Kelly says. “So, there’s a big difference in the relative cancer risk between the two places. And we think it’s important to expand the group of compounds that regulators are thinking about, beyond just a single chemical.”

    The team’s study “provides an excellent contribution to better understanding these ubiquitous pollutants,” says Elisabeth Galarneau, an air quality expert and PhD research scientist in Canada’s Department of the Environment. “It will be interesting to see how these results compare to work being done elsewhere … to pin down which (compounds) need to be tracked and considered for the protection of human and environmental health.”

    This research was conducted in MIT’s Superfund Research Center and is supported in part by the National Institute of Environmental Health Sciences Superfund Basic Research Program, and the National Institutes of Health.

  • Predicting building emissions across the US

    The United States is entering a building boom. Between 2017 and 2050, it will build the equivalent of New York City 20 times over. Yet, to meet climate targets, the nation must also significantly reduce the greenhouse gas (GHG) emissions of its buildings, which account for 27 percent of the nation’s total emissions.

    A team of current and former MIT Concrete Sustainability Hub (CSHub) researchers is addressing these conflicting demands with the aim of giving policymakers the tools and information to act. They have detailed the results of their collaboration in a recent paper in the journal Applied Energy that projects emissions for all buildings across the United States under two GHG reduction scenarios.

    Their paper found that “embodied” emissions — those from materials production and construction — would represent around a quarter of emissions between 2016 and 2050 despite extensive construction.

    Further, many regions would have varying priorities for GHG reductions; some, like the West, would benefit most from reductions to embodied emissions, while others, like parts of the Midwest, would see the greatest payoff from interventions to emissions from energy consumption. If these regional priorities were addressed aggressively, building sector emissions could be reduced by around 30 percent between 2016 and 2050.

    Quantifying contradictions

    Modern buildings are far more complex — and efficient — than their predecessors. Due to new technologies and more stringent building codes, they can offer lower energy consumption and operational emissions. And yet, more-efficient materials and improved construction standards can also generate greater embodied emissions.

    Concrete, in many ways, epitomizes this tradeoff. Though its durability can minimize energy-intensive repairs over a building’s operational life, the scale of its production means that it contributes to a large proportion of the embodied impacts in the building sector.

    As such, the team centered GHG reductions for concrete in its analysis.

    “We took a bottom-up approach, developing reference designs based on a set of residential and commercial building models,” explains Ehsan Vahidi, an assistant professor at the University of Nevada at Reno and a former CSHub postdoc. “These designs were differentiated by roof and slab insulation, HVAC efficiency, and construction materials — chiefly concrete and wood.”

    After measuring the operational and embodied GHG emissions for each reference design, the team scaled up their results to the county level and then national level based on building stock forecasts. This allowed them to estimate the emissions of the entire building sector between 2016 and 2050.

    To understand how various interventions could cut GHG emissions, researchers ran two different scenarios — a “projected” and an “ambitious” scenario — through their framework.

    The projected scenario corresponded to current trends. It assumed grid decarbonization would follow Energy Information Administration predictions; the widespread adoption of new energy codes; efficiency improvement of lighting and appliances; and, for concrete, the implementation of 50 percent low-carbon cements and binders in all new concrete construction and the adoption of full carbon capture, storage, and utilization (CCUS) of all cement and concrete emissions.

    “Our ambitious scenario was intended to reflect a future where more aggressive actions are taken to reduce GHG emissions and achieve the targets,” says Vahidi. “Therefore, the ambitious scenario took these same strategies [of the projected scenario] but featured more aggressive targets for their implementation.”

    For instance, it assumed a 33 percent reduction in grid emissions by 2050 and moved the projected deadlines for lighting and appliances and thermal insulation forward by five and 10 years, respectively. Concrete decarbonization occurred far more quickly as well.

    Reductions and variations

    The extensive growth forecast for the U.S. building sector will inevitably generate a sizable number of emissions. But how much can this figure be minimized?

    Without the implementation of any GHG reduction strategies, the team found that the building sector would emit 62 gigatons CO2 equivalent between 2016 and 2050. That’s comparable to the emissions generated from 156 trillion passenger vehicle miles traveled.

    But both GHG reduction scenarios could cut the emissions from this unmitigated, business-as-usual scenario significantly.

    Under the projected scenario, emissions would fall to 45 gigatons CO2 equivalent — a 27 percent decrease over the analysis period. The ambitious scenario would offer a further 6 percent reduction over the projected scenario, reaching 40 gigatons CO2 equivalent — like removing around 55 trillion passenger vehicle miles from the road over the period.
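    Those vehicle-mile equivalences can be sanity-checked with simple arithmetic, using the per-mile emission factor implied by the article’s own 62-gigaton figure (roughly 400 g CO2e per mile, which is in line with typical passenger-vehicle estimates; the paper’s exact factor is not stated here):

    ```python
    # Back-of-envelope check of the vehicle-mile comparisons above.
    G_PER_GT = 1e15  # grams per gigaton

    # Emission factor implied by "62 Gt ~ 156 trillion passenger vehicle miles"
    g_per_mile = 62 * G_PER_GT / 156e12
    print(round(g_per_mile))  # ≈ 397 g CO2e per mile

    # Ambitious scenario (40 Gt) vs the 62 Gt business-as-usual baseline
    avoided_miles = (62 - 40) * G_PER_GT / g_per_mile
    print(round(avoided_miles / 1e12))  # ≈ 55 trillion miles
    ```

    The ~55-trillion-mile figure thus measures the ambitious scenario against the unmitigated baseline, not against the projected scenario.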

    “In both scenarios, the largest contributor to reductions was the greening of the energy grid,” notes Vahidi. “Other notable opportunities for reductions were from increasing the efficiency of lighting, HVAC, and appliances. Combined, these four attributes contributed 85 percent of the emissions over the analysis period. Improvements to them offered the greatest potential emissions reductions.”

    The remaining attributes, such as thermal insulation and low-carbon concrete, had a smaller impact on emissions and, consequently, offered smaller reduction opportunities. That’s because these two attributes were only applied to new construction in the analysis, which was outnumbered by existing structures throughout the period.

    The disparities in impact between strategies aimed at new and existing structures underscore a broader finding: Despite extensive construction over the period, embodied emissions would comprise just 23 percent of cumulative emissions between 2016 and 2050, with the remainder coming primarily from operation.  

    “This is a consequence of existing structures far outnumbering new structures,” explains Jasmina Burek, a CSHub postdoc and an incoming assistant professor at the University of Massachusetts Lowell. “The operational emissions generated by all new and existing structures between 2016 and 2050 will always greatly exceed the embodied emissions of new structures at any given time, even as buildings become more efficient and the grid gets greener.”

    Yet the emissions reductions from both scenarios were not distributed evenly across the entire country. The team identified several regional variations that could have implications for how policymakers must act to reduce building sector emissions.

    “We found that western regions in the United States would see the greatest reduction opportunities from interventions to residential emissions, which would constitute 90 percent of the region’s total emissions over the analysis period,” says Vahidi.

    The predominance of residential emissions stems from the region’s ongoing population surge and its subsequent growth in housing stock. Proposed solutions would include CCUS and low-carbon binders for concrete production, and improvements to energy codes aimed at residential buildings.

    As with the West, ideal solutions for the Southeast would include CCUS, low-carbon binders, and improved energy codes.

    “In the case of Southeastern regions, interventions should equally target commercial and residential buildings, which we found were split more evenly among the building stock,” explains Burek. “Due to the stringent energy codes in both regions, interventions to operational emissions were less impactful than those to embodied emissions.”

    Much of the Midwest saw the inverse outcome. Its energy mix remains one of the most carbon-intensive in the nation and improvements to energy efficiency and the grid would have a large payoff — particularly in Missouri, Kansas, and Colorado.

    New England and California would see the smallest reductions. As their already-strict energy codes would limit further operational reductions, opportunities to reduce embodied emissions would be the most impactful.

    This tremendous regional variation uncovered by the MIT team is in many ways a reflection of the great demographic and geographic diversity of the nation as a whole. And there are still further variables to consider.

    In addition to GHG emissions, future research could consider other environmental impacts, like water consumption and air quality. Other mitigation strategies to consider include longer building lifespans, retrofitting, rooftop solar, and recycling and reuse.

    In this sense, their findings represent the lower bounds of what is possible in the building sector. And even if further improvements are ultimately possible, they’ve shown that regional variation will invariably inform those environmental impact reductions.

    The MIT Concrete Sustainability Hub is a team of researchers from several departments across MIT working on concrete and infrastructure science, engineering, and economics. Its research is supported by the Portland Cement Association and the Ready Mixed Concrete Research and Education Foundation.

  • Concrete’s role in reducing building and pavement emissions

    Encountering concrete is a common, even routine, occurrence. And that’s exactly what makes concrete exceptional.

    As the most consumed material after water, concrete is indispensable to the many essential systems — from roads to buildings — in which it is used.

    But due to its extensive use, concrete production also accounts for around 1 percent of emissions in the United States and remains one of several carbon-intensive industries globally. Tackling climate change, then, will mean reducing the environmental impacts of concrete, even as its use continues to increase.

    In a new paper in the Proceedings of the National Academy of Sciences, a team of current and former researchers at the MIT Concrete Sustainability Hub (CSHub) outlines how this can be achieved.

    They present an extensive life-cycle assessment of the building and pavements sectors that estimates how greenhouse gas (GHG) reduction strategies — including those for concrete and cement — could minimize the cumulative emissions of each sector and how those reductions would compare to national GHG reduction targets. 

    The team found that, if reduction strategies were implemented, the emissions for pavements and buildings between 2016 and 2050 could fall by up to 65 percent and 57 percent, respectively, even if concrete use accelerated greatly over that period. These are close to U.S. reduction targets set as part of the Paris Climate Accords. The solutions considered would also enable concrete production for both sectors to attain carbon neutrality by 2050.

    Despite continued grid decarbonization and increases in fuel efficiency, they found that the vast majority of the GHG emissions from new buildings and pavements during this period would derive from operational energy consumption rather than so-called embodied emissions — emissions from materials production and construction.

    Sources and solutions

    The consumption of concrete, due to its versatility, durability, constructability, and role in economic development, has been projected to increase around the world.

    While it is essential to consider the embodied impacts of ongoing concrete production, it is equally essential to place these initial impacts in the context of the material’s life cycle.

    Due to concrete’s unique attributes, it can influence the long-term sustainability performance of the systems in which it is used. Concrete pavements, for instance, can reduce vehicle fuel consumption, while concrete structures can endure hazards without needing energy- and materials-intensive repairs.

    Concrete’s impacts, then, are as complex as the material itself — a carefully proportioned mixture of cement powder, water, sand, and aggregates. Untangling concrete’s contribution to the operational and embodied impacts of buildings and pavements is essential for planning GHG reductions in both sectors.

    Set of scenarios

    In their paper, CSHub researchers forecast the potential greenhouse gas emissions from the building and pavements sectors as numerous emissions reduction strategies were introduced between 2016 and 2050.

    Since both of these sectors are immense and rapidly evolving, modeling them required an intricate framework.

    “We don’t have details on every building and pavement in the United States,” explains Randolph Kirchain, a research scientist at the Materials Research Laboratory and co-director of CSHub.

    “As such, we began by developing reference designs, which are intended to be representative of current and future buildings and pavements. These were adapted to be appropriate for 14 different climate zones in the United States and then distributed across the U.S. based on data from the U.S. Census and the Federal Highway Administration.”

    To reflect the complexity of these systems, their models had to have the highest resolutions possible.

    “In the pavements sector, we collected the current stock of the U.S. network based on high-precision 10-mile segments, along with the surface conditions, traffic, thickness, lane width, and number of lanes for each segment,” says Hessam AzariJafari, a postdoc at CSHub and a co-author on the paper.

    “To model future paving actions over the analysis period, we assumed four climate conditions; four road types; asphalt, concrete, and composite pavement structures; as well as major, minor, and reconstruction paving actions specified for each climate condition.”

    Using this framework, they analyzed a “projected” and an “ambitious” scenario of reduction strategies and system attributes for buildings and pavements over the 34-year analysis period. The scenarios were defined by the timing and intensity of GHG reduction strategies.

    As its name might suggest, the projected scenario reflected current trends. For the building sector, solutions encompassed expected grid decarbonization and improvements to building codes and energy efficiency that are currently being implemented across the country. For pavements, the sole projected solution was improvements to vehicle fuel economy. That’s because as vehicle efficiency continues to increase, excess vehicle emissions due to poor road quality will also decrease.

    Both the projected scenarios for buildings and pavements featured the gradual introduction of low-carbon concrete strategies, such as recycled content, carbon capture in cement production, and the use of captured carbon to produce aggregates and cure concrete.

    “In the ambitious scenario,” explains Kirchain, “we went beyond projected trends and explored reasonable changes that exceed current policies and [industry] commitments.”

    Here, the building sector strategies were the same, but implemented more aggressively. The pavements sector also abided by more aggressive targets and incorporated several novel strategies, including investing more to yield smoother roads, selectively applying concrete overlays to produce stiffer pavements, and introducing more reflective pavements — which can change the Earth’s energy balance by sending more energy out of the atmosphere.

    Results

    As the grid becomes greener and new homes and buildings become more efficient, many experts have predicted that the operational impacts of new construction projects will shrink in comparison to their embodied emissions.

    “What our life-cycle assessment found,” says Jeremy Gregory, the executive director of the MIT Climate Consortium and the lead author on the paper, “is that [this prediction] isn’t necessarily the case.”

    “Instead, we found that more than 80 percent of the total emissions from new buildings and pavements between 2016 and 2050 would derive from their operation.”

    In fact, the study found that operations will create the majority of emissions through 2050 unless all energy sources — electrical and thermal — are carbon-neutral by 2040. This suggests that ambitious interventions to the electricity grid and other sources of operational emissions can have the greatest impact.

    Their predictions for emissions reductions generated additional insights.  

    For the building sector, they found that the projected scenario would lead to a reduction of 49 percent compared to 2016 levels, and that the ambitious scenario provided a 57 percent reduction.

    As most buildings during the analysis period were existing rather than new, energy consumption dominated emissions in both scenarios. Consequently, decarbonizing the electricity grid and improving the efficiency of appliances and lighting led to the greatest improvements for buildings, they found.

    In contrast to the building sector, the pavements scenarios had a sizeable gulf between outcomes: the projected scenario led to only a 14 percent reduction while the ambitious scenario had a 65 percent reduction — enough to meet U.S. Paris Accord targets for that sector. This gulf derives from the lack of GHG reduction strategies being pursued under current projections.

    “The gap between the pavement scenarios shows that we need to be more proactive in managing the GHG impacts from pavements,” explains Kirchain. “There is tremendous potential, but seeing those gains requires action now.”

    These gains from both ambitious scenarios could occur even as concrete use tripled over the analysis period in comparison to the projected scenarios — a reflection of not only concrete’s growing demand but its potential role in decarbonizing both sectors.

    Though only one of their reduction scenarios (the ambitious pavement scenario) met the Paris Accord targets, that doesn’t preclude the achievement of those targets: many other opportunities exist.

    “In this study, we focused on mainly embodied reductions for concrete,” explains Gregory. “But other construction materials could receive similar treatment.

    “Further reductions could also come from retrofitting existing buildings and by designing structures with durability, hazard resilience, and adaptability in mind in order to minimize the need for reconstruction.”

    This study answers a paradox in the field of sustainability. For the world to become more equitable, more development is necessary. And yet, that very same development may portend greater emissions.

    The MIT team found that isn’t necessarily the case. Even as America continues to use more concrete, the benefits of the material itself and the interventions made to it can make climate targets more achievable.

    The MIT Concrete Sustainability Hub is a team of researchers from several departments across MIT working on concrete and infrastructure science, engineering, and economics. Its research is supported by the Portland Cement Association and the Ready Mixed Concrete Research and Education Foundation.


    3 Questions: Daniel Cohn on the benefits of high-efficiency, flexible-fuel engines for heavy-duty trucking

    The California Air Resources Board has adopted a regulation that requires truck and engine manufacturers to reduce the nitrogen oxide (NOx) emissions from new heavy-duty trucks by 90 percent starting in 2027. NOx from heavy-duty trucks is one of the main sources of air pollution, creating smog and threatening respiratory health. This regulation requires the largest air pollution cuts in California in more than a decade. How can manufacturers achieve this aggressive goal efficiently and affordably?

    Daniel Cohn, a research scientist at the MIT Energy Initiative, and Leslie Bromberg, a principal research scientist at the MIT Plasma Science and Fusion Center, have been working on a high-efficiency, gasoline-ethanol engine that is cleaner and more cost-effective than existing diesel engine technologies. Here, Cohn explains the flexible-fuel engine approach and why it may be the most realistic solution — in the near term — to help California meet its stringent vehicle emission reduction goals. The research was sponsored by the Arthur Samberg MIT Energy Innovation fund.

    Q. How does your high-efficiency, flexible-fuel gasoline engine technology work?

    A. Our goal is to provide an affordable solution for heavy-duty vehicle (HDV) engines to emit low levels of nitrogen oxide (NOx) emissions that would meet California’s NOx regulations, while also quick-starting gasoline-consumption reductions in a substantial fraction of the HDV fleet.

    Presently, large trucks and other HDVs generally use diesel engines, mainly because of their high efficiency, which reduces fuel cost — a key factor for commercial trucks (especially long-haul trucks) because of the large number of miles driven. However, the NOx emissions from these diesel-powered vehicles are around 10 times greater than those from spark-ignition engines powered by gasoline or ethanol.

    Spark-ignition gasoline engines are primarily used in cars and light trucks (light-duty vehicles), which employ a three-way catalyst exhaust treatment system (generally referred to as a catalytic converter) that reduces vehicle NOx emissions by at least 98 percent and at a modest cost. The use of this highly effective exhaust treatment system is enabled by the capability of spark-ignition engines to be operated at a stoichiometric air/fuel ratio (where the amount of air matches what is needed for complete combustion of the fuel).
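    The stoichiometric air/fuel ratio Cohn describes can be made concrete with a short calculation. The sketch below computes the stoichiometric mass ratio for an iso-octane gasoline surrogate and for ethanol; the fuel compositions and molar masses are standard textbook values, not figures from this research.

```python
# Stoichiometric air/fuel ratio (AFR) for a fuel C_c H_h O_o burning
# completely in air. Assumes dry air is 21% O2 by mole, molar mass
# ~28.97 g/mol. Illustrative only.

AIR_MOLAR_MASS = 28.97          # g/mol, dry air
MOLES_AIR_PER_MOLE_O2 = 1 / 0.21

def stoich_afr(c, h, o, fuel_molar_mass):
    """Mass of air needed to completely burn one unit mass of fuel."""
    o2_moles = c + h / 4 - o / 2            # C -> CO2, H -> H2O
    air_mass = o2_moles * MOLES_AIR_PER_MOLE_O2 * AIR_MOLAR_MASS
    return air_mass / fuel_molar_mass

# iso-octane (C8H18) as a gasoline surrogate, and ethanol (C2H5OH)
gasoline_afr = stoich_afr(8, 18, 0, 114.23)   # ~15.1
ethanol_afr = stoich_afr(2, 6, 1, 46.07)      # ~9.0
print(f"gasoline surrogate AFR = {gasoline_afr:.1f}:1")
print(f"ethanol AFR = {ethanol_afr:.1f}:1")
```

    The much lower ratio for ethanol reflects the oxygen already carried in the fuel molecule; in both cases the engine control system targets this ratio so the three-way catalyst can operate effectively.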

    Diesel engines do not operate with stoichiometric air/fuel ratios, making it much more difficult to reduce NOx emissions. Their state-of-the-art exhaust treatment system is much more complex and expensive than catalytic converters, and even with it, vehicles produce NOx emissions around 10 times higher than spark-ignition engine vehicles. Consequently, it is very challenging for diesel engines to further reduce their NOx emissions to meet the new California regulations.

    Our approach uses spark-ignition engines that can be powered by gasoline, ethanol, or mixtures of gasoline and ethanol as a substitute for diesel engines in HDVs. Gasoline has the attractive feature of being widely available and having a comparable or lower cost than diesel fuel. In addition, presently available ethanol in the U.S. produces up to 40 percent less greenhouse gas (GHG) emissions than diesel fuel or gasoline and has a widely available distribution system.

    To make gasoline- and/or ethanol-powered spark-ignition engine HDVs attractive for widespread HDV applications, we developed ways to make spark-ignition engines more efficient, so their fuel costs are more palatable to owners of heavy-duty trucks. Our approach provides diesel-like high efficiency and high power in gasoline-powered engines by using various methods to prevent engine knock (unwanted self-ignition that can damage the engine) in spark-ignition gasoline engines. This enables greater levels of turbocharging and use of higher engine compression ratios. These features provide high efficiency, comparable to that provided by diesel engines. Plus, when the engine is powered by ethanol, the required knock resistance is provided by the intrinsic high knock resistance of the fuel itself. 

    Q. What are the major challenges to implementing your technology in California?

    A. California has always been the pioneer in air pollutant control, with states such as Washington, Oregon, and New York often following suit. As the most populous state, California has a lot of sway — it’s a trendsetter. What happens in California has an impact on the rest of the United States.

    The main challenge to implementation of our technology is the argument that a better internal combustion engine technology is not needed because battery-powered HDVs — particularly long-haul trucks — can play the required role in reducing NOx and GHG emissions by 2035. We think that substantial market penetration of battery electric vehicles (BEV) in this vehicle sector will take a considerably longer time. In contrast to light-duty vehicles, there has been very little penetration of battery power into the HDV fleet, especially in long-haul trucks, which are the largest users of diesel fuel. One reason for this is that long-haul trucks using battery power face the challenge of reduced cargo capability due to substantial battery weight. Another challenge is the substantially longer charging time for BEVs compared to that of most present HDVs.

    Hydrogen-powered trucks using fuel cells have also been proposed as an alternative to BEV trucks, which might limit interest in adopting improved internal combustion engines. However, hydrogen-powered trucks face the formidable challenges of producing zero-GHG hydrogen at affordable cost, as well as the cost of storage and transportation of hydrogen. At present, the high-purity hydrogen needed for fuel cells is generally very expensive.

    Q. How does your idea compare overall to battery-powered and hydrogen-powered HDVs? And how will you persuade people that it is an attractive pathway to follow?

    A. Our design uses existing propulsion systems and can operate on existing liquid fuels, and for these reasons, in the near term, it will be economically attractive to the operators of long-haul trucks. In fact, it can even be a lower-cost option than diesel power because of the significantly less-expensive exhaust treatment and smaller-size engines for the same power and torque. This economic attractiveness could enable the large-scale market penetration that is needed to have a substantial impact on reducing air pollution. By contrast, we think it could take at least 20 years longer for BEVs or hydrogen-powered vehicles to obtain the same level of market penetration.

    Our approach also uses existing corn-based ethanol, which can provide a greater near-term GHG reduction benefit than battery- or hydrogen-powered long-haul trucks. While the GHG reduction from using existing ethanol would initially be in the 20 percent to 40 percent range, the scale at which the market is penetrated in the near-term could be much greater than for BEV or hydrogen-powered vehicle technology. The overall impact in reducing GHGs could be considerably greater.

    Moreover, we see a migration path beyond 2030 where further reductions in GHG emissions from corn ethanol can be possible through carbon capture and sequestration of the carbon dioxide (CO2) that is produced during ethanol production. In this case, overall CO2 reductions could potentially be 80 percent or more. Technologies for producing ethanol (and methanol, another alcohol fuel) from waste at attractive costs are emerging, and can provide fuel with zero or negative GHG emissions. One pathway for providing a negative GHG impact is through finding alternatives to landfilling for waste disposal, as this method leads to potent methane GHG emissions. A negative GHG impact could also be obtained by converting biomass waste into clean fuel, since the biomass waste can be carbon neutral and CO2 from the production of the clean fuel can be captured and sequestered.

    In addition, our flex-fuel engine technology may be used synergistically as a range extender in plug-in hybrid HDVs, which use limited battery capacity, obviating the cargo-capability reduction and fueling disadvantages of long-haul trucks powered by battery alone.

    With the growing threats from air pollution and global warming, our HDV solution is an increasingly important option for near-term reduction of air pollution and offers a faster start in reducing heavy-duty fleet GHG emissions. It also provides an attractive migration path for longer-term, larger GHG reductions from the HDV sector.


    Smarter regulation of global shipping emissions could improve air quality and health outcomes

    Emissions from shipping activities around the world account for nearly 3 percent of total human-caused greenhouse gas emissions, and could increase by up to 50 percent by 2050, making them an important and often overlooked target for global climate mitigation. At the same time, shipping-related emissions of additional pollutants, particularly nitrogen and sulfur oxides, pose a significant threat to global health, as they degrade air quality enough to cause premature deaths.

    The main source of shipping emissions is the combustion of heavy fuel oil in large diesel engines, which disperses pollutants into the air over coastal areas. The nitrogen and sulfur oxides emitted from these engines contribute to the formation of PM2.5, airborne particulates with diameters of up to 2.5 micrometers that are linked to respiratory and cardiovascular diseases. Previous studies have estimated that PM2.5 from shipping emissions contributes to about 60,000 cardiopulmonary and lung cancer deaths each year, and that IMO 2020, an international policy that caps engine fuel sulfur content at 0.5 percent, could reduce PM2.5 concentrations enough to lower annual premature mortality by 34 percent.

    Global shipping emissions arise from both domestic (between ports in the same country) and international (between ports of different countries) shipping activities, and are governed by national and international policies, respectively. Consequently, effective mitigation of the air quality and health impacts of global shipping emissions will require that policymakers quantify the relative contributions of domestic and international shipping activities to these adverse impacts in an integrated global analysis.

    A new study in the journal Environmental Research Letters provides that kind of analysis for the first time. To that end, the study’s co-authors — researchers from MIT and the Hong Kong University of Science and Technology — implement a three-step process. First, they create global shipping emission inventories for domestic and international vessels based on ship activity records of the year 2015 from the Automatic Identification System (AIS). Second, they apply an atmospheric chemistry and transport model to this data to calculate PM2.5 concentrations generated by that year’s domestic and international shipping activities. Finally, they apply a model that estimates mortalities attributable to these pollutant concentrations.
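    The third step of such a pipeline, attributing deaths to pollutant concentrations, is commonly implemented with a log-linear concentration-response function. The sketch below shows an illustrative version of that step only; the coefficient `beta`, the baseline mortality rate, and the population figure are placeholder assumptions, not values from this study.

```python
import math

# Illustrative health-impact attribution: deaths associated with a
# PM2.5 increment, using a log-linear concentration-response form.
# All numeric inputs below are hypothetical placeholders.

def attributable_deaths(population, baseline_mortality_rate,
                        delta_pm25, beta=0.006):
    """Premature deaths attributable to a PM2.5 increment (ug/m3)."""
    relative_risk = math.exp(beta * delta_pm25)
    attributable_fraction = 1 - 1 / relative_risk
    return population * baseline_mortality_rate * attributable_fraction

# e.g., 10 million people, 0.5%/yr baseline cardiopulmonary mortality,
# a shipping-attributable PM2.5 increment of 1.5 ug/m3
print(attributable_deaths(10_000_000, 0.005, 1.5))
```

    Summing such estimates over grid cells, with population and baseline rates varying by location, yields the kind of global mortality totals the study reports.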

    The researchers find that approximately 94,000 premature deaths were associated with PM2.5 exposure due to maritime shipping in 2015 — 83 percent international and 17 percent domestic. While international shipping accounted for the vast majority of the global health impact, some regions experienced significant health burdens from domestic shipping operations. This is especially true in East Asia: In China, 44 percent of shipping-related premature deaths were attributable to domestic shipping activities.

    “By comparing the health impacts from international and domestic shipping at the global level, our study could help inform decision-makers’ efforts to coordinate shipping emissions policies across multiple scales, and thereby reduce the air quality and health impacts of these emissions more effectively,” says Yiqi Zhang, a researcher at the Hong Kong University of Science and Technology who led the study as a visiting student supported by the MIT Joint Program on the Science and Policy of Global Change.

    In addition to estimating the air-quality and health impacts of domestic and international shipping, the researchers evaluate potential health outcomes under different shipping emissions-control policies that are either currently in effect or likely to be implemented in different regions in the near future.

    They estimate about 30,000 avoided deaths per year under a scenario consistent with IMO 2020, an international regulation limiting the sulfur content in shipping fuel oil to 0.5 percent — a finding that tracks with previous studies. Further strengthening regulations on sulfur content would yield only slight improvement; limiting sulfur content to 0.1 percent reduces annual shipping-attributable PM2.5-related premature deaths by an additional 5,000. In contrast, regulating nitrogen oxides through a Tier III NOx standard would produce far greater benefits than a 0.1 percent sulfur cap, preventing 33,000 further deaths.

    “Areas with high proportions of mortalities contributed by domestic shipping could effectively use domestic regulations to implement controls,” says study co-author Noelle Selin, a professor at MIT’s Institute for Data, Systems, and Society and Department of Earth, Atmospheric and Planetary Sciences, and a faculty affiliate of the MIT Joint Program. “For other regions where much damage comes from international vessels, further international cooperation is required to mitigate impacts.”


    Using aluminum and water to make clean hydrogen fuel — when and where it’s needed

    As the world works to move away from fossil fuels, many researchers are investigating whether clean hydrogen fuel can play an expanded role in sectors from transportation and industry to buildings and power generation. It could be used in fuel cell vehicles, heat-producing boilers, electricity-generating gas turbines, systems for storing renewable energy, and more.

    But while using hydrogen doesn’t generate carbon emissions, making it typically does. Today, almost all hydrogen is produced using fossil fuel-based processes that together generate more than 2 percent of all global greenhouse gas emissions. In addition, hydrogen is often produced in one location and consumed in another, which means its use also presents logistical challenges.

    A promising reaction

    Another option for producing hydrogen comes from a perhaps surprising source: reacting aluminum with water. Aluminum metal will readily react with water at room temperature to form aluminum hydroxide and hydrogen. That reaction doesn’t typically take place because a layer of aluminum oxide naturally coats the raw metal, preventing it from coming directly into contact with water.

    Using the aluminum-water reaction to generate hydrogen doesn’t produce any greenhouse gas emissions, and it promises to solve the transportation problem for any location with available water. Simply move the aluminum and then react it with water on-site. “Fundamentally, the aluminum becomes a mechanism for storing hydrogen — and a very effective one,” says Douglas P. Hart, professor of mechanical engineering at MIT. “Using aluminum as our source, we can ‘store’ hydrogen at a density that’s 10 times greater than if we just store it as a compressed gas.”

    Two problems have kept aluminum from being employed as a safe, economical source for hydrogen generation. The first problem is ensuring that the aluminum surface is clean and available to react with water. To that end, a practical system must include a means of first modifying the oxide layer and then keeping it from re-forming as the reaction proceeds.

    The second problem is that pure aluminum is energy-intensive to mine and produce, so any practical approach needs to use scrap aluminum from various sources. But scrap aluminum is not an easy starting material. It typically occurs in an alloyed form, meaning that it contains other elements that are added to change the properties or characteristics of the aluminum for different uses. For example, adding magnesium increases strength and corrosion-resistance, adding silicon lowers the melting point, and adding a little of both makes an alloy that’s moderately strong and corrosion-resistant.

    Despite considerable research on aluminum as a source of hydrogen, two key questions remain: What’s the best way to prevent the adherence of an oxide layer on the aluminum surface, and how do alloying elements in a piece of scrap aluminum affect the total amount of hydrogen generated and the rate at which it is generated?

    “If we’re going to use scrap aluminum for hydrogen generation in a practical application, we need to be able to better predict what hydrogen generation characteristics we’re going to observe from the aluminum-water reaction,” says Laureen Meroueh PhD ’20, who earned her doctorate in mechanical engineering.

    Since the fundamental steps in the reaction aren’t well understood, it’s been hard to predict the rate and volume at which hydrogen forms from scrap aluminum, which can contain varying types and concentrations of alloying elements. So Hart, Meroueh, and Thomas W. Eagar, a professor of materials engineering and engineering management in the MIT Department of Materials Science and Engineering, decided to examine — in a systematic fashion — the impacts of those alloying elements on the aluminum-water reaction and on a promising technique for preventing the formation of the interfering oxide layer.

    To prepare, they had experts at Novelis Inc. fabricate samples of pure aluminum and of specific aluminum alloys made of commercially pure aluminum combined with either 0.6 percent silicon (by weight), 1 percent magnesium, or both — compositions that are typical of scrap aluminum from a variety of sources. Using those samples, the MIT researchers performed a series of tests to explore different aspects of the aluminum-water reaction.

    Pre-treating the aluminum

    The first step was to demonstrate an effective means of penetrating the oxide layer that forms on aluminum in the air. Solid aluminum is made up of tiny grains that are packed together with occasional boundaries where they don’t line up perfectly. To maximize hydrogen production, researchers would need to prevent the formation of the oxide layer on all those interior grain surfaces.

    Research groups have already tried various ways of keeping the aluminum grains “activated” for reaction with water. Some have crushed scrap samples into particles so tiny that the oxide layer doesn’t adhere. But aluminum powders are dangerous, as they can react with humidity and explode. Another approach calls for grinding up scrap samples and adding liquid metals to prevent oxide deposition. But grinding is a costly and energy-intensive process.

    To Hart, Meroueh, and Eagar, the most promising approach — first introduced by Jonathan Slocum ScD ’18 while he was working in Hart’s research group — involved pre-treating the solid aluminum by painting liquid metals on top and allowing them to permeate through the grain boundaries.

    To determine the effectiveness of that approach, the researchers needed to confirm that the liquid metals would reach the internal grain surfaces, with and without alloying elements present. And they had to establish how long it would take for the liquid metal to coat all of the grains in pure aluminum and its alloys.

    They started by combining two metals — gallium and indium — in specific proportions to create a “eutectic” mixture; that is, a mixture that would remain in liquid form at room temperature. They coated their samples with the eutectic and allowed it to penetrate for time periods ranging from 48 to 96 hours. They then exposed the samples to water and monitored the hydrogen yield (the amount formed) and flow rate for 250 minutes. After 48 hours, they also took high-magnification scanning electron microscope (SEM) images so they could observe the boundaries between adjacent aluminum grains.

    Based on the hydrogen yield measurements and the SEM images, the MIT team concluded that the gallium-indium eutectic does naturally permeate and reach the interior grain surfaces. However, the rate and extent of penetration vary with the alloy. The permeation rate was the same in silicon-doped aluminum samples as in pure aluminum samples but slower in magnesium-doped samples.

    Perhaps most interesting were the results from samples doped with both silicon and magnesium — an aluminum alloy often found in recycling streams. Silicon and magnesium chemically bond to form magnesium silicide, which occurs as solid deposits on the internal grain surfaces. Meroueh hypothesized that when both silicon and magnesium are present in scrap aluminum, those deposits can act as barriers that impede the flow of the gallium-indium eutectic.

    The experiments and images confirmed her hypothesis: The solid deposits did act as barriers, and images of samples pre-treated for 48 hours showed that permeation wasn’t complete. Clearly, a lengthy pre-treatment period would be critical for maximizing the hydrogen yield from scraps of aluminum containing both silicon and magnesium.

    Meroueh cites several benefits to the process they used. “You don’t have to apply any energy for the gallium-indium eutectic to work its magic on aluminum and get rid of that oxide layer,” she says. “Once you’ve activated your aluminum, you can drop it in water, and it’ll generate hydrogen — no energy input required.” Even better, the eutectic doesn’t chemically react with the aluminum. “It just physically moves around in between the grains,” she says. “At the end of the process, I could recover all of the gallium and indium I put in and use it again” — a valuable feature as gallium and (especially) indium are costly and in relatively short supply.

    Impacts of alloying elements on hydrogen generation

    The researchers next investigated how the presence of alloying elements affects hydrogen generation. They tested samples that had been treated with the eutectic for 96 hours; by then, the hydrogen yield and flow rates had leveled off in all the samples.

    The presence of 0.6 percent silicon increased the hydrogen yield for a given weight of aluminum by 20 percent compared to pure aluminum — even though the silicon-containing sample had less aluminum than the pure aluminum sample. In contrast, the presence of 1 percent magnesium produced far less hydrogen, while adding both silicon and magnesium pushed the yield up, but not to the level of pure aluminum.

    The presence of silicon also greatly accelerated the reaction rate, producing a far higher peak in the flow rate but cutting short the duration of hydrogen output. The presence of magnesium produced a lower flow rate but allowed the hydrogen output to remain fairly steady over time. And once again, aluminum with both alloying elements produced a flow rate between that of magnesium-doped and pure aluminum.

    Those results provide practical guidance on how to adjust the hydrogen output to match the operating needs of a hydrogen-consuming device. If the starting material is commercially pure aluminum, adding small amounts of carefully selected alloying elements can tailor the hydrogen yield and flow rate. If the starting material is scrap aluminum, careful choice of the source can be key. For high, brief bursts of hydrogen, pieces of silicon-containing aluminum from an auto junkyard could work well. For lower but longer flows, magnesium-containing scraps from the frame of a demolished building might be better. For results somewhere in between, aluminum containing both silicon and magnesium should work well; such material is abundantly available from scrapped cars and motorcycles, yachts, bicycle frames, and even smartphone cases.

    It should also be possible to combine scraps of different aluminum alloys to tune the outcome, notes Meroueh. “If I have a sample of activated aluminum that contains just silicon and another sample that contains just magnesium, I can put them both into a container of water and let them react,” she says. “So I get the fast ramp-up in hydrogen production from the silicon and then the magnesium takes over and has that steady output.”

    Another opportunity for tuning: Reducing grain size

    Another practical way to affect hydrogen production could be to reduce the size of the aluminum grains — a change that should increase the total surface area available for reactions to occur.

    To investigate that approach, the researchers requested specially customized samples from their supplier. Using standard industrial procedures, the Novelis experts first fed each sample through two rollers, squeezing it from the top and bottom so that the internal grains were flattened. They then heated each sample until the long, flat grains had reorganized and shrunk to a targeted size.

    In a series of carefully designed experiments, the MIT team found that reducing the grain size increased the efficiency and decreased the duration of the reaction to varying degrees in the different samples. Again, the presence of particular alloying elements had a major effect on the outcome.

    Needed: A revised theory that explains observations

    Throughout their experiments, the researchers encountered some unexpected results. For example, standard corrosion theory predicts that pure aluminum will generate more hydrogen than silicon-doped aluminum will — the opposite of what they observed in their experiments.

    To shed light on the underlying chemical reactions, Hart, Meroueh, and Eagar investigated hydrogen “flux,” that is, the volume of hydrogen generated over time on each square centimeter of aluminum surface, including the interior grains. They examined three grain sizes for each of their four compositions and collected thousands of data points measuring hydrogen flux.

    Their results show that reducing grain size has significant effects. It increases the peak hydrogen flux from silicon-doped aluminum as much as 100 times and from the other three compositions by 10 times. With both pure aluminum and silicon-containing aluminum, reducing grain size also decreases the delay before the peak flux and increases the rate of decline afterward. With magnesium-containing aluminum, reducing the grain size brings about an increase in peak hydrogen flux and results in a slightly faster decline in the rate of hydrogen output. With both silicon and magnesium present, the hydrogen flux over time resembles that of magnesium-containing aluminum when the grain size is not manipulated. When the grain size is reduced, the hydrogen output characteristics begin to resemble behavior observed in silicon-containing aluminum. That outcome was unexpected because when silicon and magnesium are both present, they react to form magnesium silicide, resulting in a new type of aluminum alloy with its own properties.

    The researchers stress the benefits of developing a better fundamental understanding of the underlying chemical reactions involved. In addition to guiding the design of practical systems, it might help them find a replacement for the expensive indium in their pre-treatment mixture. Other work has shown that gallium will naturally permeate through the grain boundaries of aluminum. “At this point, we know that the indium in our eutectic is important, but we don’t really understand what it does, so we don’t know how to replace it,” says Hart.

    But already Hart, Meroueh, and Eagar have demonstrated two practical ways of tuning the hydrogen reaction rate: by adding certain elements to the aluminum and by manipulating the size of the interior aluminum grains. In combination, those approaches can deliver significant results. “If you go from magnesium-containing aluminum with the largest grain size to silicon-containing aluminum with the smallest grain size, you get a hydrogen reaction rate that differs by two orders of magnitude,” says Meroueh. “That’s huge if you’re trying to design a real system that would use this reaction.”

    This research was supported through the MIT Energy Initiative by ExxonMobil-MIT Energy Fellowships awarded to Laureen Meroueh PhD ’20 from 2018 to 2020.

    This article appears in the Spring 2021 issue of Energy Futures, the magazine of the MIT Energy Initiative.


    Using graphene foam to filter toxins from drinking water

    Some kinds of water pollution, such as algal blooms and plastics that foul rivers, lakes, and marine environments, lie in plain sight. But other contaminants are not so readily apparent, which makes their impact potentially more dangerous. Among these invisible substances is uranium. Leaching into water resources from mining operations, nuclear waste sites, or from natural subterranean deposits, the element can now be found flowing out of taps worldwide.

    In the United States alone, “many areas are affected by uranium contamination, including the High Plains and Central Valley aquifers, which supply drinking water to 6 million people,” says Ahmed Sami Helal, a postdoc in the Department of Nuclear Science and Engineering. This contamination poses a near and present danger. “Even small concentrations are bad for human health,” says Ju Li, the Battelle Energy Alliance Professor of Nuclear Science and Engineering and professor of materials science and engineering.

    Now, a team led by Li has devised a highly efficient method for removing uranium from drinking water. Applying an electric charge to graphene oxide foam, the researchers can capture uranium in solution, which precipitates out as a condensed solid crystal. The foam may be reused up to seven times without losing its electrochemical properties. “Within hours, our process can purify a large quantity of drinking water below the EPA limit for uranium,” says Li.

    A paper describing this work was published this week in Advanced Materials. The two first co-authors are Helal and Chao Wang, a postdoc at MIT during the study, who is now with the School of Materials Science and Engineering at Tongji University, Shanghai. Researchers from Argonne National Laboratory, Taiwan’s National Chiao Tung University, and the University of Tokyo also participated in the research. The Defense Threat Reduction Agency (U.S. Department of Defense) funded later stages of this work.

    Targeting the contaminant

    The project, launched three years ago, began as an effort to find better approaches to environmental cleanup of heavy metals from mining sites. To date, remediation methods for such metals as chromium, cadmium, arsenic, lead, mercury, radium, and uranium have proven limited and expensive. “These techniques are highly sensitive to organics in water, and are poor at separating out the heavy metal contaminants,” explains Helal. “So they involve long operation times, high capital costs, and at the end of extraction, generate more toxic sludge.”

    To the team, uranium seemed a particularly attractive target. Field testing from the U.S. Geological Survey and the Environmental Protection Agency (EPA) has revealed unhealthy levels of uranium moving into reservoirs and aquifers from natural rock sources in the northeastern United States, from ponds and pits storing old nuclear weapons and fuel in places like Hanford, Washington, and from mining activities located in many western states. This kind of contamination is prevalent in many other nations as well. An alarming number of these sites show uranium concentrations close to or above the EPA’s recommended ceiling of 30 parts per billion (ppb) — a level linked to kidney damage, cancer risk, and neurobehavioral changes in humans.

    The critical challenge lay in finding a practical remediation process exclusively sensitive to uranium, capable of extracting it from solution without producing toxic residues. And while earlier research showed that electrically charged carbon fiber could filter uranium from water, the results were partial and imprecise.

    Wang managed to crack these problems, drawing on her investigation of the behavior of graphene foam used for lithium-sulfur batteries. “The physical performance of this foam was unique because of its ability to attract certain chemical species to its surface,” she says. “I thought the ligands in graphene foam would work well with uranium.”

    Simple, efficient, and clean

    The team set to work transforming graphene foam into the equivalent of a uranium magnet. They learned that by sending an electric charge through the foam, splitting water and releasing hydrogen, they could increase the local pH and induce a chemical change that pulled uranium ions out of solution. The researchers found that the uranium would graft itself onto the foam’s surface, where it formed a never-before-seen crystalline uranium hydroxide. On reversal of the electric charge, the mineral, which resembles fish scales, slipped easily off the foam.
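    The pH shift described above can be illustrated with back-of-the-envelope numbers: reducing protons to hydrogen gas (2H⁺ + 2e⁻ → H₂) depletes H⁺ near the foam, and since pH = −log₁₀[H⁺], even a modest local depletion raises pH sharply. A toy calculation (our own illustration, not a figure from the paper):

```python
import math

def ph(h_concentration):
    """pH = -log10([H+]), with [H+] in mol/L."""
    return -math.log10(h_concentration)

# Start near neutral water: [H+] = 1e-7 mol/L, i.e. pH 7.
h_initial = 1e-7

# If electrolysis consumes local protons 100x faster than diffusion
# replenishes them, the surface [H+] falls a hundredfold.
h_local = h_initial * 0.01

print(round(ph(h_initial)))  # 7
print(round(ph(h_local)))    # 9 -- a two-unit local pH rise at the foam surface
```

The logarithmic scale is the point: a hundredfold drop in proton concentration is only a two-unit pH change, but it is enough to push sparingly soluble hydroxides out of solution.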

    It took hundreds of tries to get the chemical composition and electrolysis just right. “We kept changing the functional chemical groups to get them to work correctly,” says Helal. “And the foam was initially quite fragile, tending to break into pieces, so we needed to make it stronger and more durable,” says Wang.

    This uranium filtration process is simple, efficient, and clean, according to Li: “Each time it’s used, our foam can capture four times its own weight of uranium, and we can achieve an extraction capacity of 4,000 mg per gram, which is a major improvement over other methods,” he says. “We’ve also made a major breakthrough in reusability, because the foam can go through seven cycles without losing its extraction efficiency.” The graphene foam functions as well in seawater, where it reduces uranium concentrations from 3 parts per million to 19.9 ppb, showing that other ions in the brine do not interfere with filtration.
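    Those capacity figures are internally consistent: 4,000 mg of uranium per gram of foam is four times the foam's own weight. Under that assumption, a rough mass balance shows how little foam the seawater demonstration would need per liter (an illustrative estimate, not a number from the paper):

```python
CAPACITY_MG_PER_G = 4000  # reported capacity: 4,000 mg uranium per gram of foam

# Seawater demonstration: 3 ppm down to 19.9 ppb (1 ppm ~ 1 mg/L in dilute water)
initial_mg_per_l = 3.0
final_mg_per_l = 19.9 / 1000  # 19.9 ppb = 0.0199 mg/L

removed_mg = initial_mg_per_l - final_mg_per_l  # uranium captured per liter
foam_needed_g = removed_mg / CAPACITY_MG_PER_G  # foam per liter, at full loading

print(CAPACITY_MG_PER_G / 1000)        # 4.0 -- four times the foam's own weight
print(round(removed_mg, 3))            # 2.98 mg of uranium removed per liter
print(round(foam_needed_g * 1000, 3))  # 0.745 mg of foam per liter, if fully loaded
```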

    The team believes its low-cost, effective device could become a new kind of home water filter, fitting on faucets like those of commercial brands. “Some of these filters already have activated carbon, so maybe we could modify these, add low-voltage electricity to filter uranium,” says Li.

    “The uranium extraction this device achieves is very impressive when compared to existing methods,” says Ho Jin Ryu, associate professor of nuclear and quantum engineering at the Korea Advanced Institute of Science and Technology. Ryu, who was not involved in the research, believes that the demonstration of graphene foam reusability is a “significant advance,” and that “the technology of local pH control to enhance uranium deposition will be impactful because the scientific principle can be applied more generally to heavy metal extraction from polluted water.”

    The researchers have already begun investigating broader applications of their method. “There is a science to this, so we can modify our filters to be selective for other heavy metals such as lead, mercury, and cadmium,” says Li. He notes that radium is another significant danger for locales in the United States and elsewhere that lack resources for reliable drinking water infrastructure.

    “In the future, instead of a passive water filter, we could be using a smart filter powered by clean electricity that turns on electrolytic action, which could extract multiple toxic metals, tell you when to regenerate the filter, and give you quality assurance about the water you’re drinking.” More