More stories

  • Study helps pinpoint areas where microplastics will accumulate

    The accumulation of microplastics in the environment, and within our bodies, is an increasingly worrisome issue. But predicting where these ubiquitous particles will accumulate, and therefore where remediation efforts should be focused, has been difficult because of the many factors that contribute to their dispersal and deposition.

    New research from MIT shows that one key factor in determining where microparticles are likely to build up has to do with the presence of biofilms. These thin, sticky biopolymer layers are shed by microorganisms and can accumulate on surfaces, including along sandy riverbeds or seashores. The study found that, all other conditions being equal, microparticles are less likely to accumulate in sediment infused with biofilms, because if they land there, they are more likely to be resuspended by flowing water and carried away.

    The open-access findings appear in the journal Geophysical Research Letters, in a paper by MIT postdoc Hyoungchul Park and professor of civil and environmental engineering Heidi Nepf. “Microplastics are definitely in the news a lot,” Nepf says, “and we don’t fully understand where the hotspots of accumulation are likely to be. This work gives a little bit of guidance” on some of the factors that can cause these particles, and small particles in general, to accumulate in certain locations.

    Most experiments looking at the ways microparticles are transported and deposited have been conducted over bare sand, Park says. “But in nature, there are a lot of microorganisms, such as bacteria, fungi, and algae, and when they adhere to the stream bed they generate some sticky things.” These substances are known as extracellular polymeric substances, or EPS, and they “can significantly affect the channel bed characteristics,” he says. The new research focused on determining exactly how these substances affected the transport of microparticles, including microplastics.

    The research involved a flow tank with a bottom lined with fine sand, and sometimes with vertical plastic tubes simulating the presence of mangrove roots. In some experiments the bed consisted of pure sand, and in others the sand was mixed with a biological material to simulate the natural biofilms found in many riverbed and seashore environments.

    Water mixed with tiny plastic particles was pumped through the tank for three hours, and then the bed surface was photographed under ultraviolet light that caused the plastic particles to fluoresce, allowing a quantitative measurement of their concentration.

    The results revealed two different phenomena that affected how much of the plastic accumulated on the different surfaces. Immediately around the rods that stood in for above-ground roots, turbulence prevented particle deposition. In addition, as the amount of simulated biofilm in the sediment bed increased, the accumulation of particles decreased.

    Nepf and Park concluded that the biofilms filled up the spaces between the sand grains, leaving less room for the microparticles to fit in. The particles were more exposed because they penetrated less deeply between the sand grains, and as a result they were much more easily resuspended and carried away by the flowing water.

    “These biological films fill the pore spaces between the sediment grains,” Park explains, “and that makes the deposited particles — the particles that land on the bed — more exposed to the forces generated by the flow, which makes it easier for them to be resuspended. What we found was that in a channel with the same flow conditions and the same vegetation and the same sand bed, if one is without EPS and one is with EPS, then the one without EPS has a much higher deposition rate than the one with EPS.”

    Nepf adds: “The biofilm is blocking the plastics from accumulating in the bed because they can’t go deep into the bed. They just stay right on the surface, and then they get picked up and moved elsewhere. So, if I spilled a large amount of microplastic in two rivers, and one had a sandy or gravel bottom, and one was muddier with more biofilm, I would expect more of the microplastics to be retained in the sandy or gravelly river.”

    All of this is complicated by other factors, such as the turbulence of the water or the roughness of the bottom surface, she says. But it provides a “nice lens” that offers some suggestions for people who are trying to study the impacts of microplastics in the field. “They’re trying to determine what kinds of habitats these plastics are in, and this gives a framework for how you might categorize those habitats,” she says. “It gives guidance to where you should go to find more plastics versus less.”

    As an example, Park notes, in mangrove ecosystems, microplastics may preferentially accumulate in the outer edges, which tend to be sandy, while the interior zones have sediment with more biofilm. Thus, this work suggests “the sandy outer regions may be potential hotspots for microplastic accumulation,” he says, making them priority zones for monitoring and protection.

    “This is a highly relevant finding,” says Isabella Schalko, a research scientist at ETH Zurich, who was not associated with this research. “It suggests that restoration measures such as re-vegetation or promoting biofilm growth could help mitigate microplastic accumulation in aquatic systems. It highlights the powerful role of biological and physical features in shaping particle transport processes.”

    The work was supported by Shell International Exploration and Production through the MIT Energy Initiative.

  • Study shows making hydrogen with soda cans and seawater is scalable and sustainable

    Hydrogen has the potential to be a climate-friendly fuel, since it doesn’t release carbon dioxide when used as an energy source. Currently, however, most methods for producing hydrogen involve fossil fuels, making hydrogen less of a “green” fuel over its entire life cycle.

    A new process developed by MIT engineers could significantly shrink the carbon footprint associated with making hydrogen.

    Last year, the team reported that they could produce hydrogen gas by combining seawater, recycled soda cans, and caffeine. The question then was whether the benchtop process could be applied at an industrial scale, and at what environmental cost.

    Now, the researchers have carried out a “cradle-to-grave” life cycle assessment, taking into account every step in the process at an industrial scale. For instance, the team calculated the carbon emissions associated with acquiring and processing aluminum, reacting it with seawater to produce hydrogen, and transporting the fuel to gas stations, where drivers could tap into hydrogen tanks to power engines or fuel cell cars. They found that, from end to end, the new process could generate a fraction of the carbon emissions associated with conventional hydrogen production.

    In a study appearing today in Cell Reports Sustainability, the team reports that for every kilogram of hydrogen produced, the process would generate 1.45 kilograms of carbon dioxide over its entire life cycle. In comparison, fossil-fuel-based processes emit 11 kilograms of carbon dioxide per kilogram of hydrogen generated.

    That low carbon footprint is on par with other proposed “green hydrogen” technologies, such as those powered by solar and wind energy.

    “We’re in the ballpark of green hydrogen,” says lead author Aly Kombargi PhD ’25, who graduated this spring from MIT with a doctorate in mechanical engineering. “This work highlights aluminum’s potential as a clean energy source and offers a scalable pathway for low-emission hydrogen deployment in transportation and remote energy systems.”

    The study’s MIT co-authors are Brooke Bao, Enoch Ellis, and professor of mechanical engineering Douglas Hart.

    Gas bubble

    Dropping an aluminum can in water won’t normally cause much of a chemical reaction. That’s because when aluminum is exposed to oxygen, it instantly forms a shield-like layer. Without this layer, aluminum exists in its pure form and can readily react when mixed with water. The reaction involves aluminum atoms that efficiently break up molecules of water, producing aluminum oxide and pure hydrogen. And it doesn’t take much of the metal to bubble up a significant amount of the gas.

    “One of the main benefits of using aluminum is the energy density per unit volume,” Kombargi says. “With a very small amount of aluminum fuel, you can conceivably supply much of the power for a hydrogen-fueled vehicle.”

    Last year, he and Hart developed a recipe for aluminum-based hydrogen production. They found they could puncture aluminum’s natural shield by treating it with a small amount of gallium-indium, a rare-metal alloy that effectively scrubs aluminum into its pure form. The researchers then mixed pellets of pure aluminum with seawater and observed that the reaction produced pure hydrogen. What’s more, the salt in the water helped to precipitate gallium-indium, which the team could subsequently recover and reuse to generate more hydrogen, in a cost-saving, sustainable cycle.

    “We were explaining the science of this process in conferences, and the questions we would get were, ‘How much does this cost?’ and, ‘What’s its carbon footprint?’” Kombargi says. “So we wanted to look at the process in a comprehensive way.”

    A sustainable cycle

    For their new study, Kombargi and his colleagues carried out a life cycle assessment to estimate the environmental impact of aluminum-based hydrogen production at every step, from sourcing the aluminum to transporting the hydrogen after production. They set out to calculate the amount of carbon associated with generating 1 kilogram of hydrogen — an amount that they chose as a practical, consumer-level illustration.

    “With a hydrogen fuel cell car using 1 kilogram of hydrogen, you can go between 60 to 100 kilometers, depending on the efficiency of the fuel cell,” Kombargi notes.

    They performed the analysis using Earthster — an online life cycle assessment tool that draws data from a large repository of products and processes and their associated carbon emissions. The team considered a number of scenarios for producing hydrogen using aluminum, ranging from “primary” aluminum mined from the Earth to “secondary” aluminum recycled from soda cans and other products, with various methods of transporting the aluminum and hydrogen.

    After running life cycle assessments for about a dozen scenarios, the team identified the one with the lowest carbon footprint. This scenario centers on recycled aluminum — a source that saves a significant amount of emissions compared with mining aluminum — and seawater — a natural resource that also saves money by recovering gallium-indium. They found that this scenario, from start to finish, would generate about 1.45 kilograms of carbon dioxide for every kilogram of hydrogen produced. The fuel, they calculated, would cost about $9 per kilogram, which is comparable to the price of hydrogen generated with other green technologies such as wind and solar energy.

    The researchers envision that if the low-carbon process were ramped up to a commercial scale, it would look something like this: The production chain would start with scrap aluminum sourced from a recycling center. The aluminum would be shredded into pellets and treated with gallium-indium. Then, drivers could transport the pretreated pellets as aluminum “fuel,” rather than directly transporting hydrogen, which is potentially volatile. The pellets would be transported to a fuel station that ideally would be situated near a source of seawater, which could be mixed with the aluminum, on demand, to produce hydrogen. A consumer could then directly pump the gas into a car with either an internal combustion engine or a fuel cell.

    The entire process does produce an aluminum-based byproduct, boehmite, a mineral that is commonly used in fabricating semiconductors, electronic elements, and a number of industrial products. Kombargi says that if this byproduct were recovered after hydrogen production, it could be sold to manufacturers, further bringing down the cost of the process as a whole.

    “There are a lot of things to consider,” Kombargi says. “But the process works, which is the most exciting part. And we show that it can be environmentally sustainable.”

    The group is continuing to develop the process. They recently designed a small reactor, about the size of a water bottle, that takes in aluminum pellets and seawater to generate hydrogen, enough to power an electric bike for several hours. They previously demonstrated that the process can produce enough hydrogen to fuel a small car. The team is also exploring underwater applications, and is designing a hydrogen reactor that would take in surrounding seawater to power a small boat or underwater vehicle.

    This research was supported, in part, by the MIT Portugal Program.
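    To put rough numbers on the chemistry described above, here is a minimal stoichiometry sketch. It assumes the boehmite-forming reaction 2 Al + 4 H₂O → 2 AlO(OH) + 3 H₂, consistent with the boehmite byproduct the article mentions; actual yields depend on reaction conditions the article doesn’t specify.

    ```python
    # Back-of-the-envelope stoichiometry for aluminum-seawater hydrogen production.
    # Assumes the boehmite-forming reaction 2 Al + 4 H2O -> 2 AlO(OH) + 3 H2;
    # real-world yields depend on reaction conditions not given in the article.

    M_AL = 26.98   # g/mol, molar mass of aluminum
    M_H2 = 2.016   # g/mol, molar mass of hydrogen gas

    def h2_yield_kg(aluminum_kg: float) -> float:
        """Ideal hydrogen mass produced from a given mass of aluminum."""
        mol_al = aluminum_kg * 1000.0 / M_AL
        mol_h2 = mol_al * 3.0 / 2.0    # 2 mol Al -> 3 mol H2
        return mol_h2 * M_H2 / 1000.0

    print(f"1 kg Al -> {h2_yield_kg(1.0):.3f} kg H2 (ideal)")  # ~0.112 kg
    # Life-cycle figures from the study: 1.45 vs 11 kg CO2 per kg H2
    print(f"emissions reduction: {(1 - 1.45 / 11) * 100:.0f} percent")  # ~87
    ```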

  • AI stirs up the recipe for concrete in MIT study

    For weeks, the whiteboard in the lab was crowded with scribbles, diagrams, and chemical formulas. A research team across the Olivetti Group and the MIT Concrete Sustainability Hub (CSHub) was working intensely on a key problem: How can we reduce the amount of cement in concrete to save on costs and emissions?

    The question was certainly not new; materials like fly ash, a byproduct of coal combustion, and slag, a byproduct of steelmaking, have long been used to replace some of the cement in concrete mixes. However, the demand for these products is outpacing supply as industry looks to reduce its climate impacts by expanding their use, making the search for alternatives urgent. The challenge the team discovered wasn’t a lack of candidates; the problem was that there were too many to sort through.

    On May 17, the team, led by postdoc Soroush Mahjoubi, published an open-access paper in Nature’s Communications Materials outlining their solution. “We realized that AI was the key to moving forward,” notes Mahjoubi. “There is so much data out there on potential materials — hundreds of thousands of pages of scientific literature. Sorting through them would have taken many lifetimes of work, by which time more materials would have been discovered!”

    With large language models, like the chatbots many of us use daily, the team built a machine-learning framework that evaluates and sorts candidate materials based on their physical and chemical properties. “First, there is hydraulic reactivity. The reason that concrete is strong is that cement — the ‘glue’ that holds it together — hardens when exposed to water. So, if we replace this glue, we need to make sure the substitute reacts similarly,” explains Mahjoubi. “Second, there is pozzolanicity. This is when a material reacts with calcium hydroxide, a byproduct created when cement meets water, to make the concrete harder and stronger over time. We need to balance the hydraulic and pozzolanic materials in the mix so the concrete performs at its best.”

    Analyzing scientific literature and over 1 million rock samples, the team used the framework to sort candidate materials into 19 types, ranging from biomass to mining byproducts to demolished construction materials. Mahjoubi and his team found that suitable materials were available globally — and, more impressively, that many could be incorporated into concrete mixes simply by grinding them. This means it’s possible to extract emissions and cost savings without much additional processing.

    “Some of the most interesting materials that could replace a portion of cement are ceramics,” notes Mahjoubi. “Old tiles, bricks, pottery — all these materials may have high reactivity. That’s something we’ve observed in ancient Roman concrete, where ceramics were added to help waterproof structures. I’ve had many interesting conversations on this with Professor Admir Masic, who leads a lot of the ancient concrete studies here at MIT.”

    The potential of everyday materials like ceramics and industrial materials like mine tailings is an example of how materials like concrete can help enable a circular economy. By identifying and repurposing materials that would otherwise end up in landfills, researchers and industry can help to give these materials a second life as part of our buildings and infrastructure.

    Looking ahead, the research team is planning to upgrade the framework to be capable of assessing even more materials, while experimentally validating some of the best candidates.

    “AI tools have gotten this research far in a short time, and we are excited to see how the latest developments in large language models enable the next steps,” says Professor Elsa Olivetti, senior author on the work and member of the MIT Department of Materials Science and Engineering. She serves as an MIT Climate Project mission director, a CSHub principal investigator, and the leader of the Olivetti Group.

    “Concrete is the backbone of the built environment,” says Randolph Kirchain, co-author and CSHub director. “By applying data science and AI tools to material design, we hope to support industry efforts to build more sustainably, without compromising on strength, safety, or durability.”

    In addition to Mahjoubi, Olivetti, and Kirchain, co-authors on the work include MIT postdoc Vineeth Venugopal; Ipek Bensu Manav SM ’21, PhD ’24; and CSHub Deputy Director Hessam AzariJafari.
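    The article doesn’t give implementation details of the screening framework, but the gist — scoring candidate materials on hydraulic reactivity and pozzolanicity, then bucketing them — can be sketched with a deliberately simple stand-in. Everything below (cue phrases, function names, logic) is hypothetical; the actual system uses large language models over the scientific literature.

    ```python
    # Toy stand-in for literature-based screening of cement substitutes.
    # The real framework uses large language models; this sketch only
    # illustrates scoring candidates on the two properties the article
    # names. All cue phrases and thresholds are invented for illustration.

    HYDRAULIC_CUES = ("hardens in water", "hydraulic", "reacts with water")
    POZZOLANIC_CUES = ("reacts with calcium hydroxide", "pozzolan", "amorphous silica")

    def score(description: str, cues: tuple) -> int:
        """Count how many property cues appear in a material description."""
        text = description.lower()
        return sum(cue in text for cue in cues)

    def classify(description: str) -> str:
        """Bucket a candidate by which reactivity cues dominate."""
        h = score(description, HYDRAULIC_CUES)
        p = score(description, POZZOLANIC_CUES)
        if h == 0 and p == 0:
            return "inert or unknown - needs lab testing"
        return "hydraulic-leaning" if h >= p else "pozzolanic-leaning"

    print(classify("Crushed brick: amorphous silica reacts with calcium hydroxide"))
    # -> pozzolanic-leaning
    ```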

  • New fuel cell could enable electric aviation

    Batteries are nearing their limits in terms of how much energy they can store for a given weight. That’s a serious obstacle for energy innovation and the search for new ways to power airplanes, trains, and ships. Now, researchers at MIT and elsewhere have come up with a solution that could help electrify these transportation systems.

    Instead of a battery, the new concept is a kind of fuel cell — which is similar to a battery but can be quickly refueled rather than recharged. In this case, the fuel is liquid sodium metal, an inexpensive and widely available commodity. The other side of the cell is just ordinary air, which serves as a source of oxygen atoms. In between, a layer of solid ceramic material serves as the electrolyte, allowing sodium ions to pass freely through, and a porous air-facing electrode helps the sodium to chemically react with oxygen and produce electricity.

    In a series of experiments with a prototype device, the researchers demonstrated that this cell could carry more than three times as much energy per unit of weight as the lithium-ion batteries used in virtually all electric vehicles today. Their findings are being published today in the journal Joule, in a paper by MIT doctoral students Karen Sugano, Sunil Mair, and Saahir Ganti-Agrawal; professor of materials science and engineering Yet-Ming Chiang; and five others.

    “We expect people to think that this is a totally crazy idea,” says Chiang, who is the Kyocera Professor of Ceramics. “If they didn’t, I’d be a bit disappointed, because if people don’t think something is totally crazy at first, it probably isn’t going to be that revolutionary.”

    And this technology does appear to have the potential to be quite revolutionary, he suggests. In particular, for aviation, where weight is especially crucial, such an improvement in energy density could be the breakthrough that finally makes electrically powered flight practical at significant scale.

    “The threshold that you really need for realistic electric aviation is about 1,000 watt-hours per kilogram,” Chiang says. Today’s electric vehicle lithium-ion batteries top out at about 300 watt-hours per kilogram — nowhere near what’s needed. Even at 1,000 watt-hours per kilogram, he says, that wouldn’t be enough to enable transcontinental or trans-Atlantic flights.

    That’s still beyond reach for any known battery chemistry, but Chiang says that getting to 1,000 watt-hours per kilogram would be an enabling technology for regional electric aviation, which accounts for about 80 percent of domestic flights and 30 percent of the emissions from aviation.

    The technology could be an enabler for other sectors as well, including marine and rail transportation. “They all require very high energy density, and they all require low cost,” he says. “And that’s what attracted us to sodium metal.”

    A great deal of research has gone into developing lithium-air or sodium-air batteries over the last three decades, but it has been hard to make them fully rechargeable. “People have been aware of the energy density you could get with metal-air batteries for a very long time, and it’s been hugely attractive, but it’s just never been realized in practice,” Chiang says.

    By using the same basic electrochemical concept, only making it a fuel cell instead of a battery, the researchers were able to get the advantages of the high energy density in a practical form. Unlike a battery, whose materials are assembled once and sealed in a container, with a fuel cell the energy-carrying materials go in and out.

    The team produced two different versions of a lab-scale prototype of the system. In one, called an H cell, two vertical glass tubes are connected by a tube across the middle, which contains a solid ceramic electrolyte material and a porous air electrode. Liquid sodium metal fills the tube on one side, and air flows through the other, providing the oxygen for the electrochemical reaction at the center, which ends up gradually consuming the sodium fuel. The other prototype uses a horizontal design, with a tray of the electrolyte material holding the liquid sodium fuel. The porous air electrode, which facilitates the reaction, is affixed to the bottom of the tray. Tests using an air stream with a carefully controlled humidity level produced more than 1,500 watt-hours per kilogram at the level of an individual “stack,” which would translate to over 1,000 watt-hours per kilogram at the full system level, Chiang says.

    The researchers envision that to use this system in an aircraft, fuel packs containing stacks of cells, like racks of food trays in a cafeteria, would be inserted into the fuel cells; the sodium metal inside these packs gets chemically transformed as it provides the power. A stream of its chemical byproduct is given off, and in the case of aircraft this would be emitted out the back, not unlike the exhaust from a jet engine.

    But there’s a very big difference: There would be no carbon dioxide emissions. Instead the emissions, consisting of sodium oxide, would actually soak up carbon dioxide from the atmosphere. This compound would quickly combine with moisture in the air to make sodium hydroxide — a material commonly used as a drain cleaner — which readily combines with carbon dioxide to form a solid material, sodium carbonate, which in turn forms sodium bicarbonate, otherwise known as baking soda.

    “There’s this natural cascade of reactions that happens when you start with sodium metal,” Chiang says. “It’s all spontaneous. We don’t have to do anything to make it happen, we just have to fly the airplane.”

    As an added benefit, if the final product, the sodium bicarbonate, ends up in the ocean, it could help to de-acidify the water, countering another of the damaging effects of greenhouse gases.

    Using sodium hydroxide to capture carbon dioxide has been proposed as a way of mitigating carbon emissions, but on its own, it’s not an economic solution because the compound is too expensive. “But here, it’s a byproduct,” Chiang explains, so it’s essentially free, producing environmental benefits at no cost.

    Importantly, the new fuel cell is inherently safer than many other batteries, he says. Sodium metal is extremely reactive and must be well-protected. As with lithium batteries, sodium can spontaneously ignite if exposed to moisture. “Whenever you have a very high energy density battery, safety is always a concern, because if there’s a rupture of the membrane that separates the two reactants, you can have a runaway reaction,” Chiang says. But in this fuel cell, one side is just air, “which is dilute and limited. So you don’t have two concentrated reactants right next to each other. If you’re pushing for really, really high energy density, you’d rather have a fuel cell than a battery for safety reasons.”

    While the device so far exists only as a small, single-cell prototype, Chiang says the system should be quite straightforward to scale up to practical sizes for commercialization. Members of the research team have already formed a company, Propel Aero, to develop the technology. The company is currently housed in MIT’s startup incubator, The Engine.

    Producing enough sodium metal to enable widespread, full-scale global implementation of this technology should be practical, since the material has been produced at large scale before. When leaded gasoline was the norm, before it was phased out, sodium metal was used to make the tetraethyl lead used as an additive, and it was being produced in the U.S. at a capacity of 200,000 tons a year. “It reminds us that sodium metal was once produced at large scale and safely handled and distributed around the U.S.,” Chiang says.

    What’s more, sodium primarily originates from sodium chloride, or salt, so it is abundant, widely distributed around the world, and easily extracted, unlike lithium and other materials used in today’s EV batteries.

    The system they envisage would use a refillable cartridge, which would be filled with liquid sodium metal and sealed. When it’s depleted, it would be returned to a refilling station and loaded with fresh sodium. Sodium melts at 98 degrees Celsius, just below the boiling point of water, so it is easy to heat to the melting point to refuel the cartridges.

    Initially, the plan is to produce a brick-sized fuel cell that can deliver about 1,000 watt-hours of energy, enough to power a large drone, in order to prove the concept in a practical form that could be used for agriculture, for example. The team hopes to have such a demonstration ready within the next year.

    Sugano, who conducted much of the experimental work as part of her doctoral thesis and will now work at the startup, says that a key insight was the importance of moisture in the process. As she tested the device with pure oxygen, and then with air, she found that the amount of humidity in the air was crucial to making the electrochemical reaction efficient. The humid air resulted in the sodium producing its discharge products in liquid rather than solid form, making it much easier for these to be removed by the flow of air through the system. “The key was that we can form this liquid discharge product and remove it easily, as opposed to the solid discharge that would form in dry conditions,” she says.

    Ganti-Agrawal notes that the team drew from a variety of different engineering subfields. For example, there has been much research on high-temperature sodium, but none with a system with controlled humidity. “We’re pulling from fuel cell research in terms of designing our electrode, we’re pulling from older high-temperature battery research as well as some nascent sodium-air battery research, and kind of mushing it together,” which led to “the big bump in performance” the team has achieved, he says.

    The research team also included Alden Friesen, an MIT summer intern who attends Desert Mountain High School in Scottsdale, Arizona; Kailash Raman and William Woodford of Form Energy in Somerville, Massachusetts; Shashank Sripad of And Battery Aero in California; and Venkatasubramanian Viswanathan of the University of Michigan.

    The work was supported by ARPA-E, Breakthrough Energy Ventures, and the National Science Foundation, and used facilities at MIT.nano.
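    The byproduct cascade Chiang describes can be written out as balanced reactions. This is a sketch using standard chemistry; the cell’s actual discharge products and pathways may differ in detail.

    ```latex
    % Spontaneous byproduct cascade described in the article.
    \begin{align*}
    \mathrm{Na_2O} + \mathrm{H_2O} &\rightarrow 2\,\mathrm{NaOH}
      && \text{sodium oxide meets atmospheric moisture}\\
    2\,\mathrm{NaOH} + \mathrm{CO_2} &\rightarrow \mathrm{Na_2CO_3} + \mathrm{H_2O}
      && \text{hydroxide captures carbon dioxide}\\
    \mathrm{Na_2CO_3} + \mathrm{CO_2} + \mathrm{H_2O} &\rightarrow 2\,\mathrm{NaHCO_3}
      && \text{carbonate becomes baking soda}
    \end{align*}
    ```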

  • A new approach could fractionate crude oil using much less energy

    Separating crude oil into products such as gasoline, diesel, and heating oil is an energy-intensive process that accounts for about 6 percent of the world’s CO2 emissions. Most of that energy goes into the heat needed to separate the components by their boiling point.

    In an advance that could dramatically reduce the amount of energy needed for crude oil fractionation, MIT engineers have developed a membrane that filters the components of crude oil by their molecular size.

    “This is a whole new way of envisioning a separation process. Instead of boiling mixtures to purify them, why not separate components based on shape and size? The key innovation is that the filters we developed can separate very small molecules at an atomistic length scale,” says Zachary P. Smith, an associate professor of chemical engineering at MIT and the senior author of the new study.

    The new filtration membrane can efficiently separate heavy and light components from oil, and it is resistant to the swelling that tends to occur with other types of oil separation membranes. The membrane is a thin film that can be manufactured using a technique that is already widely used in industrial processes, potentially allowing it to be scaled up for widespread use.

    Taehoon Lee, a former MIT postdoc who is now an assistant professor at Sungkyunkwan University in South Korea, is the lead author of the paper, which appears today in Science.

    Oil fractionation

    Conventional heat-driven processes for fractionating crude oil make up about 1 percent of global energy use, and it has been estimated that using membranes for crude oil separation could reduce the amount of energy needed by about 90 percent. For this to succeed, a separation membrane needs to allow hydrocarbons to pass through quickly, and to selectively filter compounds of different sizes.

    Until now, most efforts to develop a filtration membrane for hydrocarbons have focused on polymers of intrinsic microporosity (PIMs), including one known as PIM-1. Although this porous material allows the fast transport of hydrocarbons, it tends to excessively absorb some of the organic compounds as they pass through the membrane, leading the film to swell, which impairs its size-sieving ability.

    To come up with a better alternative, the MIT team decided to try modifying polymers that are used for reverse osmosis water desalination. Since their adoption in the 1970s, reverse osmosis membranes have reduced the energy consumption of desalination by about 90 percent — a remarkable industrial success story.

    The most commonly used membrane for water desalination is a polyamide that is manufactured using a method known as interfacial polymerization. During this process, a thin polymer film forms at the interface between water and an organic solvent such as hexane. Water and hexane do not normally mix, but at the interface between them, a small amount of the compounds dissolved in them can react with each other.

    In this case, a hydrophilic monomer called MPD, which is dissolved in water, reacts with a hydrophobic monomer called TMC, which is dissolved in hexane. The two monomers are joined together by a connection known as an amide bond, forming a polyamide thin film (named MPD-TMC) at the water-hexane interface.

    While highly effective for water desalination, MPD-TMC doesn’t have the right pore sizes and swelling resistance that would allow it to separate hydrocarbons.

    To adapt the material to separate the hydrocarbons found in crude oil, the researchers first modified the film by changing the bond that connects the monomers from an amide bond to an imine bond. This bond is more rigid and hydrophobic, which allows hydrocarbons to move quickly through the membrane without causing noticeable swelling of the film, compared to the polyamide counterpart.

    “The polyimine material has porosity that forms at the interface, and because of the cross-linking chemistry that we have added in, you now have something that doesn’t swell,” Smith says. “You make it in the oil phase, react it at the water interface, and with the crosslinks, it’s now immobilized. And so those pores, even when they’re exposed to hydrocarbons, no longer swell like other materials.”

    The researchers also introduced a monomer called triptycene. This shape-persistent, molecularly selective molecule further helps the resultant polyimines to form pores that are the right size for hydrocarbons to fit through.

    This approach represents “an important step toward reducing industrial energy consumption,” says Andrew Livingston, a professor of chemical engineering at Queen Mary University of London, who was not involved in the study.

    “This work takes the workhorse technology of the membrane desalination industry, interfacial polymerization, and creates a new way to apply it to organic systems such as hydrocarbon feedstocks, which currently consume large chunks of global energy,” Livingston says. “The imaginative approach using an interfacial catalyst coupled to hydrophobic monomers leads to membranes with high permeance and excellent selectivity, and the work shows how these can be used in relevant separations.”

    Efficient separation

    When the researchers used the new membrane to filter a mixture of toluene and triisopropylbenzene (TIPB) as a benchmark for evaluating separation performance, it was able to achieve a concentration of toluene 20 times greater than its concentration in the original mixture. They also tested the membrane with an industrially relevant mixture consisting of naphtha, kerosene, and diesel, and found that it could efficiently separate the heavier and lighter compounds by their molecular size.

    If adapted for industrial use, a series of these filters could be used to generate a higher concentration of the desired products at each step, the researchers say.

    “You can imagine that with a membrane like this, you could have an initial stage that replaces a crude oil fractionation column. You could partition heavy and light molecules and then you could use different membranes in a cascade to purify complex mixtures to isolate the chemicals that you need,” Smith says.

    Interfacial polymerization is already widely used to create membranes for water desalination, and the researchers believe it should be possible to adapt those processes to mass-produce the films they designed in this study.

    “The main advantage of interfacial polymerization is it’s already a well-established method to prepare membranes for water purification, so you can imagine just adopting these chemistries into existing scale of manufacturing lines,” Lee says.

    The research was funded, in part, by ExxonMobil through the MIT Energy Initiative.
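    To see why a cascade of such membrane stages is attractive, consider an idealized binary model. The sketch below assumes a constant separation factor alpha per stage, a textbook simplification; the 20-fold toluene enrichment reported above is a single-stage measurement, and real stage behavior varies with composition and operating conditions.

    ```python
    # Idealized membrane cascade for a binary mixture: each stage enriches
    # the faster-permeating component by a constant separation factor.
    # A textbook simplification; real stages vary with feed composition.

    def stage(x: float, alpha: float) -> float:
        """Permeate mole fraction for feed mole fraction x and separation factor alpha."""
        return alpha * x / (1.0 + (alpha - 1.0) * x)

    def cascade(x_feed: float, alpha: float, n_stages: int) -> float:
        """Mole fraction after passing the permeate through n stages in series."""
        x = x_feed
        for _ in range(n_stages):
            x = stage(x, alpha)
        return x

    # Example: a light component at 1 percent of the feed, alpha assumed to be 20.
    for n in range(4):
        print(f"after {n} stages: {cascade(0.01, 20.0, n):.1%}")
    # 1.0%, 16.8%, 80.2%, 98.8%
    ```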

  • Study: Climate change may make it harder to reduce smog in some regions

    Global warming will likely hinder our future ability to control ground-level ozone, a harmful air pollutant that is a primary component of smog, according to a new MIT study.

    The results could help scientists and policymakers develop more effective strategies for improving both air quality and human health. Ground-level ozone causes a host of detrimental health impacts, from asthma to heart disease, and contributes to thousands of premature deaths each year.

    The researchers’ modeling approach reveals that, as the Earth warms due to climate change, ground-level ozone will become less sensitive to reductions in nitrogen oxide emissions in eastern North America and Western Europe. In other words, it will take greater nitrogen oxide emission reductions to get the same air quality benefits.

    However, the study also shows that the opposite would be true in northeast Asia, where cutting emissions would have a greater impact on reducing ground-level ozone in the future.

    The researchers combined a climate model that simulates meteorological factors, such as temperature and wind speeds, with a chemical transport model that estimates the movement and composition of chemicals in the atmosphere. By generating a range of possible future outcomes, the researchers’ ensemble approach better captures inherent climate variability, allowing them to paint a fuller picture than many previous studies.

    “Future air quality planning should consider how climate change affects the chemistry of air pollution. We may need steeper cuts in nitrogen oxide emissions to achieve the same air quality goals,” says Emmie Le Roy, a graduate student in the MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS) and lead author of a paper on this study.

    Her co-authors include Anthony Y.H. Wong, a postdoc in the MIT Center for Sustainability Science and Strategy; Sebastian D. Eastham, principal research scientist in the MIT Center for Sustainability Science and Strategy; Arlene Fiore, the Peter H. Stone and Paola Malanotte Stone Professor of EAPS; and senior author Noelle Selin, a professor in the Institute for Data, Systems, and Society (IDSS) and EAPS. The research appears today in Environmental Science and Technology.

    Controlling ozone

    Ground-level ozone differs from the stratospheric ozone layer that protects the Earth from harmful UV radiation. It is a respiratory irritant that is harmful to the health of humans, animals, and plants.

    Controlling ground-level ozone is particularly challenging because it is a secondary pollutant, formed in the atmosphere by complex reactions involving nitrogen oxides and volatile organic compounds in the presence of sunlight. “That is why you tend to have higher ozone days when it is warm and sunny,” Le Roy explains.

    Regulators typically try to reduce ground-level ozone by cutting nitrogen oxide emissions from industrial processes. But it is difficult to predict the effects of those policies because ground-level ozone interacts with nitrogen oxide and volatile organic compounds in nonlinear ways. Depending on the chemical environment, reducing nitrogen oxide emissions could cause ground-level ozone to increase instead.

    “Past research has focused on the role of emissions in forming ozone, but the influence of meteorology is a really important part of Emmie’s work,” Selin says.

    To conduct their study, the researchers combined a global atmospheric chemistry model with a climate model that simulates future meteorology. They used the climate model to generate meteorological inputs for each future year in their study, simulating factors such as likely temperature and wind speeds, in a way that captures the inherent variability of a region’s climate. Then they fed those inputs to the atmospheric chemistry model, which calculates how the chemical composition of the atmosphere would change because of meteorology and emissions.

    The researchers focused on eastern North America, Western Europe, and northeast China, since those regions have historically high levels of the precursor chemicals that form ozone, as well as well-established monitoring networks to provide data.

    They chose to model two future scenarios, one with high warming and one with low warming, over a 16-year period between 2080 and 2095. They compared these to a historical scenario capturing 2000 to 2015, to see the effects of a 10 percent reduction in nitrogen oxide emissions.

    Capturing climate variability

    “The biggest challenge is that the climate naturally varies from year to year. So, if you want to isolate the effects of climate change, you need to simulate enough years to see past that natural variability,” Le Roy says.

    They could overcome that challenge thanks to recent advances in atmospheric chemistry modeling and by taking advantage of parallel computing to simulate multiple years at the same time. They simulated five 16-year realizations, resulting in 80 model years for each scenario.

    The researchers found that eastern North America and Western Europe are especially sensitive to increases in nitrogen oxide emissions from the soil, which are natural emissions driven by increases in temperature. Due to that sensitivity, as the Earth warms and more nitrogen oxide from soil enters the atmosphere, reducing nitrogen oxide emissions from human activities will have less of an impact on ground-level ozone.

    “This shows how important it is to improve our representation of the biosphere in these models to better understand how climate change may impact air quality,” Le Roy says.

    On the other hand, since industrial processes in northeast Asia cause more ozone per unit of nitrogen oxide emitted, cutting emissions there would cause greater reductions in ground-level ozone in future warming scenarios. “But I wouldn’t say that is a good thing, because it means that, overall, there are higher levels of ozone,” Le Roy adds.

    Running detailed meteorology simulations, rather than relying on annual average weather data, gave the researchers a more complete picture of the potential effects on human health. “Average climate isn’t the only thing that matters. One high-ozone day, which might be a statistical anomaly, could mean we don’t meet our air quality target and have negative human health impacts that we should care about,” Le Roy says.

    In the future, the researchers want to continue exploring the intersection of meteorology and air quality. They also want to expand their modeling approach to consider other climate change factors with high variability, like wildfires or biomass burning. “We’ve shown that it is important for air quality scientists to consider the full range of climate variability, even if it is hard to do in your models, because it really does affect the answer that you get,” says Selin.

    This work is funded, in part, by the MIT Praecis Presidential Fellowship, the J.H. and E.V. Wade Fellowship, and the MIT Martin Family Society of Fellows for Sustainability.
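    The ensemble arithmetic behind “five 16-year realizations, resulting in 80 model years” is simple to illustrate. Below is a minimal sketch with synthetic numbers standing in for model output; the real study uses coupled climate and chemistry simulations, and nothing here reflects actual study data.

    ```python
    # Illustration of ensemble averaging: many realizations let a forced
    # signal emerge from interannual noise. All numbers are synthetic.
    import numpy as np

    rng = np.random.default_rng(0)
    n_realizations, n_years = 5, 16   # -> 80 model years per scenario

    def simulate(forced_change_ppb: float) -> np.ndarray:
        """Stand-in for a model ensemble: forced ozone change plus noise."""
        noise = rng.normal(0.0, 2.0, size=(n_realizations, n_years))
        return forced_change_ppb + noise

    historical = simulate(-1.5)   # ozone response to a NOx cut, historical climate
    warmed = simulate(-0.8)       # hypothetically weaker response when warmer

    for name, run in (("historical", historical), ("high-warming", warmed)):
        years = run.ravel()       # flatten to 80 model years
        print(f"{name}: mean {years.mean():+.2f} ppb over {years.size} model years "
              f"(interannual std {years.std():.2f} ppb)")
    ```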

  • How to solve a bottleneck for CO2 capture and conversion

    Removing carbon dioxide from the atmosphere efficiently is often seen as a crucial need for combating climate change, but systems for removing carbon dioxide suffer from a tradeoff. Chemical compounds that efficiently remove CO₂ from the air do not easily release it once captured, and compounds that release CO₂ efficiently are not very efficient at capturing it. Optimizing one part of the cycle tends to make the other part worse.

    Now, using nanoscale filtering membranes, researchers at MIT have added a simple intermediate step that facilitates both parts of the cycle. The new approach could make electrochemical carbon dioxide capture and release six times more efficient and cut costs by at least 20 percent, they say.

    The new findings are reported today in the journal ACS Energy Letters, in a paper by MIT doctoral students Simon Rufer, Tal Joseph, and Zara Aamer, and professor of mechanical engineering Kripa Varanasi.

    “We need to think about scale from the get-go when it comes to carbon capture, as making a meaningful impact requires processing gigatons of CO₂,” says Varanasi. “Having this mindset helps us pinpoint critical bottlenecks and design innovative solutions with real potential for impact. That’s the driving force behind our work.”

    Many carbon-capture systems work using chemicals called hydroxides, which readily combine with carbon dioxide to form carbonate. That carbonate is fed into an electrochemical cell, where the carbonate reacts with an acid to form water and release carbon dioxide. The process can take ordinary air, with only about 400 parts per million of carbon dioxide, and generate a stream of 100 percent pure carbon dioxide, which can then be used to make fuels or other products.

    Both the capture and release steps operate in the same water-based solution, but the first step needs a solution with a high concentration of hydroxide ions, and the second step needs one high in carbonate ions. “You can see how these two steps are at odds,” says Varanasi. “These two systems are circulating the same sorbent back and forth. They’re operating on the exact same liquid. But because they need two different types of liquids to operate optimally, it’s impossible to operate both systems at their most efficient points.”

    The team’s solution was to decouple the two parts of the system and introduce a third part in between. Essentially, after the hydroxide in the first step has been mostly chemically converted to carbonate, special nanofiltration membranes separate the ions in the solution based on their charge. Carbonate ions carry a charge of minus 2, while hydroxide ions carry a charge of minus 1. “The nanofiltration is able to separate these two pretty well,” Rufer says.

    Once separated, the hydroxide ions are fed back to the absorption side of the system, while the carbonates are sent ahead to the electrochemical release stage. That way, both ends of the system can operate at their most efficient ranges. Varanasi explains that in the electrochemical release step, protons are being added to the carbonate to cause the conversion to carbon dioxide and water, but if hydroxide ions are also present, the protons will react with those ions instead, producing just water.

    “If you don’t separate these hydroxides and carbonates,” Rufer says, “the way the system fails is you’ll add protons to hydroxide instead of carbonate, and so you’ll just be making water rather than extracting carbon dioxide. That’s where the efficiency is lost. Using nanofiltration to prevent this was something that we aren’t aware of anyone proposing before.”

    Testing showed that the nanofiltration could separate the carbonate from the hydroxide solution with about 95 percent efficiency, validating the concept under realistic conditions, Rufer says. The next step was to assess how much of an effect this would have on the overall efficiency and economics of the process. They created a techno-economic model incorporating electrochemical efficiency, voltage, absorption rate, capital costs, nanofiltration efficiency, and other factors.

    The analysis showed that present systems cost at least $600 per ton of carbon dioxide captured, while with the nanofiltration component added, that drops to about $450 a ton. What’s more, the new system is much more stable, continuing to operate at high efficiency even under variations in the ion concentrations of the solution. “In the old system without nanofiltration, you’re sort of operating on a knife’s edge,” Rufer says; if the concentration varies even slightly in one direction or the other, efficiency drops off drastically. “But with our nanofiltration system, it kind of acts as a buffer where it becomes a lot more forgiving. You have a much broader operational regime, and you can achieve significantly lower costs.”

    He adds that this approach could apply not only to the direct air capture systems they studied specifically, but also to point-source systems, which are attached directly to emissions sources such as power plants, or to the next stage of the process, converting captured carbon dioxide into useful products such as fuel or chemical feedstocks. Those conversion processes, he says, “are also bottlenecked in this carbonate and hydroxide tradeoff.”

    In addition, this technology could lead to safer alternative chemistries for carbon capture, Varanasi says. “A lot of these absorbents can at times be toxic, or damaging to the environment. By using a system like ours, you can improve the reaction rate, so you can choose chemistries that might not have the best absorption rate initially but can be improved to enable safety.”

    Varanasi adds that “the really nice thing about this is we’ve been able to do this with what’s commercially available,” and with a system that can easily be retrofitted to existing carbon-capture installations. If the costs can be further brought down to about $200 a ton, the technology could be viable for widespread adoption. With ongoing work, he says, “we’re confident that we’ll have something that can become economically viable” and that will ultimately produce valuable, saleable products.

    Rufer notes that even today, “people are buying carbon credits at a cost of over $500 per ton. So, at this cost we’re projecting, it is already commercially viable in that there are some buyers who are willing to pay that price.” But by bringing the price down further, that should increase the number of buyers who would consider buying the credit, he says. “It’s just a question of how widespread we can make it.” Recognizing this growing market demand, Varanasi says, “Our goal is to provide industry scalable, cost-effective, and reliable technologies and systems that enable them to directly meet their decarbonization targets.”

    The research was supported by Shell International Exploration and Production Inc. through the MIT Energy Initiative and by the U.S. National Science Foundation, and made use of the facilities at MIT.nano.
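    The competing reactions Rufer describes can be made explicit. This is a sketch in standard aqueous chemistry notation; the cell’s full electrochemistry is more involved than these net reactions.

    ```latex
    % Capture, intended release, and the parasitic side reaction that
    % the nanofiltration step suppresses.
    \begin{align*}
    \mathrm{CO_2} + 2\,\mathrm{OH^-} &\rightarrow \mathrm{CO_3^{2-}} + \mathrm{H_2O}
      && \text{capture: hydroxide absorbs CO$_2$}\\
    \mathrm{CO_3^{2-}} + 2\,\mathrm{H^+} &\rightarrow \mathrm{CO_2} + \mathrm{H_2O}
      && \text{release: protons liberate pure CO$_2$}\\
    \mathrm{OH^-} + \mathrm{H^+} &\rightarrow \mathrm{H_2O}
      && \text{parasitic: protons wasted on leftover hydroxide}
    \end{align*}
    ```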

  • Imaging technique removes the effect of water in underwater scenes

    The ocean is teeming with life. But unless you get up close, much of the marine world can easily remain unseen. That’s because water itself can act as an effective cloak: Light that shines through the ocean can bend, scatter, and quickly fade as it travels through the dense medium of water and reflects off the persistent haze of ocean particles. This makes it extremely challenging to capture the true color of objects in the ocean without imaging them at close range.

    Now a team from MIT and the Woods Hole Oceanographic Institution (WHOI) has developed an image-analysis tool that cuts through the ocean’s optical effects and generates images of underwater environments that look as if the water had been drained away, revealing an ocean scene’s true colors. The team paired the color-correcting tool with a computational model that converts images of a scene into a three-dimensional underwater “world” that can then be explored virtually.

    The researchers have dubbed the new tool “SeaSplat,” in reference to both its underwater application and a method known as 3D Gaussian splatting (3DGS), which takes images of a scene and stitches them together to generate a complete, three-dimensional representation that can be viewed in detail, from any perspective.

    “With SeaSplat, it can model explicitly what the water is doing, and as a result it can in some ways remove the water, and produces better 3D models of an underwater scene,” says MIT graduate student Daniel Yang.

    The researchers applied SeaSplat to images of the sea floor taken by divers and underwater vehicles in various locations, including the U.S. Virgin Islands. The method generated 3D “worlds” from the images that were truer and more vivid and varied in color, compared to previous methods.

    The team says SeaSplat could help marine biologists monitor the health of certain ocean communities. For instance, as an underwater robot explores and takes pictures of a coral reef, SeaSplat would simultaneously process the images and render a true-color, 3D representation that scientists could then virtually “fly” through, at their own pace and path, to inspect the underwater scene, for instance for signs of coral bleaching.

    “Bleaching looks white from close up, but could appear blue and hazy from far away, and you might not be able to detect it,” says Yogesh Girdhar, an associate scientist at WHOI. “Coral bleaching, and different coral species, could be easier to detect with SeaSplat imagery, to get the true colors in the ocean.”

    Girdhar and Yang will present a paper detailing SeaSplat at the IEEE International Conference on Robotics and Automation (ICRA). Their study co-author is John Leonard, professor of mechanical engineering at MIT.

    Aquatic optics

    In the ocean, the color and clarity of objects are distorted by the effects of light traveling through water. In recent years, researchers have developed color-correcting tools that aim to reproduce the true colors in the ocean. These efforts involved adapting tools that were developed originally for environments out of water, for instance to reveal the true color of features in foggy conditions. One recent work accurately reproduces true colors in the ocean with an algorithm named “Sea-Thru,” though this method requires a huge amount of computational power, which makes its use in producing 3D scene models challenging.

    In parallel, others have made advances in 3D Gaussian splatting, with tools that seamlessly stitch images of a scene together and intelligently fill in any gaps to create a whole, 3D version of the scene. These 3D worlds enable “novel view synthesis,” meaning that someone can view the generated 3D scene, not just from the perspective of the original images, but from any angle and distance.

    But 3DGS has only successfully been applied to environments out of water. Efforts to adapt 3D reconstruction to underwater imagery have been hampered mainly by two optical underwater effects: backscatter and attenuation. Backscatter occurs when light reflects off tiny particles in the ocean, creating a veil-like haze. Attenuation is the phenomenon by which light of certain wavelengths attenuates, or fades, with distance. In the ocean, for instance, red objects appear to fade more than blue objects when viewed from farther away.

    Out of water, the color of objects appears more or less the same regardless of the angle or distance from which they are viewed. In water, however, color can quickly change and fade depending on one’s perspective. When 3DGS methods attempt to stitch underwater images into a cohesive 3D whole, they are unable to resolve objects due to aquatic backscatter and attenuation effects that distort the color of objects at different angles.

    “One dream of underwater robotic vision that we have is: Imagine if you could remove all the water in the ocean. What would you see?” Leonard says.

    A model swim

    In their new work, Yang and his colleagues developed a color-correcting algorithm that accounts for the optical effects of backscatter and attenuation. The algorithm determines the degree to which every pixel in an image must have been distorted by backscatter and attenuation effects, and then essentially takes away those aquatic effects and computes what the pixel’s true color must be.

    Yang then worked the color-correcting algorithm into a 3D Gaussian splatting model to create SeaSplat, which can quickly analyze underwater images of a scene and generate a true-color, 3D virtual version of the same scene that can be explored in detail from any angle and distance.

    The team applied SeaSplat to multiple underwater scenes, including images taken in the Red Sea, in the Caribbean off the coast of Curaçao, and in the Pacific Ocean near Panama. These images, which the team took from a pre-existing dataset, represent a range of ocean locations and water conditions. They also tested SeaSplat on images taken by a remote-controlled underwater robot in the U.S. Virgin Islands.

    From the images of each ocean scene, SeaSplat generated a true-color 3D world that the researchers were able to virtually explore, for instance zooming in and out of a scene and viewing certain features from different perspectives. Even when viewing from different angles and distances, they found that objects in every scene retained their true color, rather than fading as they would if viewed through the actual ocean.

    “Once it generates a 3D model, a scientist can just ‘swim’ through the model as though they are scuba-diving, and look at things in high detail, with real color,” Yang says.

    For now, the method requires hefty computing resources in the form of a desktop computer that would be too bulky to carry aboard an underwater robot. Still, SeaSplat could work for tethered operations, where a vehicle, tied to a ship, can explore and take images that can be sent up to a ship’s computer.

    “This is the first approach that can very quickly build high-quality 3D models with accurate colors, underwater, and it can create them and render them fast,” Girdhar says. “That will help to quantify biodiversity, and assess the health of coral reef and other marine communities.”

    This work was supported, in part, by the Investment in Science Fund at WHOI, and by the U.S. National Science Foundation.
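    The correction Yang describes inverts a widely used underwater image formation model, in which the observed color is the true color dimmed by attenuation plus a range-dependent backscatter veil. The sketch below uses that generic model with made-up coefficients; it is not SeaSplat’s exact parameterization.

    ```python
    # Generic underwater image formation model and its inversion.
    # Coefficients are illustrative only; not SeaSplat's parameterization.
    import numpy as np

    def observed(J, z, beta_d, beta_b, B_inf):
        """Forward model: true color J at range z (meters), degraded by
        per-channel attenuation (beta_d) and backscatter (beta_b, B_inf)."""
        return J * np.exp(-beta_d * z) + B_inf * (1.0 - np.exp(-beta_b * z))

    def restored(I, z, beta_d, beta_b, B_inf):
        """Inverse model: subtract estimated backscatter, then undo attenuation."""
        direct = I - B_inf * (1.0 - np.exp(-beta_b * z))
        return direct * np.exp(beta_d * z)

    J = np.array([0.8, 0.4, 0.3])        # true (R, G, B) of a coral patch
    beta_d = np.array([0.6, 0.2, 0.1])   # red attenuates fastest underwater
    beta_b = np.array([0.3, 0.25, 0.2])
    B_inf = np.array([0.1, 0.3, 0.4])    # bluish veiling light

    I = observed(J, 5.0, beta_d, beta_b, B_inf)
    print("observed:", I.round(3))
    print("restored:", restored(I, 5.0, beta_d, beta_b, B_inf).round(3))  # ~J
    ```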