More stories

  • Using graphene foam to filter toxins from drinking water

    Some kinds of water pollution, such as algal blooms and plastics that foul rivers, lakes, and marine environments, lie in plain sight. But other contaminants are not so readily apparent, which makes their impact potentially more dangerous. Among these invisible substances is uranium. Leaching into water resources from mining operations, nuclear waste sites, or natural subterranean deposits, the element can now be found flowing out of taps worldwide.

    In the United States alone, “many areas are affected by uranium contamination, including the High Plains and Central Valley aquifers, which supply drinking water to 6 million people,” says Ahmed Sami Helal, a postdoc in the Department of Nuclear Science and Engineering. This contamination poses a near and present danger. “Even small concentrations are bad for human health,” says Ju Li, the Battelle Energy Alliance Professor of Nuclear Science and Engineering and professor of materials science and engineering.

    Now, a team led by Li has devised a highly efficient method for removing uranium from drinking water. Applying an electric charge to graphene oxide foam, the researchers can capture uranium in solution, which precipitates out as a condensed solid crystal. The foam may be reused up to seven times without losing its electrochemical properties. “Within hours, our process can purify a large quantity of drinking water below the EPA limit for uranium,” says Li.

    A paper describing this work was published this week in Advanced Materials. The two first co-authors are Helal and Chao Wang, a postdoc at MIT during the study, who is now with the School of Materials Science and Engineering at Tongji University, Shanghai. Researchers from Argonne National Laboratory, Taiwan’s National Chiao Tung University, and the University of Tokyo also participated in the research. The Defense Threat Reduction Agency (U.S. Department of Defense) funded later stages of this work.

    Targeting the contaminant

    The project, launched three years ago, began as an effort to find better approaches to environmental cleanup of heavy metals from mining sites. To date, remediation methods for such metals as chromium, cadmium, arsenic, lead, mercury, radium, and uranium have proven limited and expensive. “These techniques are highly sensitive to organics in water, and are poor at separating out the heavy metal contaminants,” explains Helal. “So they involve long operation times, high capital costs, and at the end of extraction, generate more toxic sludge.”

    To the team, uranium seemed a particularly attractive target. Field testing from the U.S. Geological Survey and the Environmental Protection Agency (EPA) has revealed unhealthy levels of uranium moving into reservoirs and aquifers from natural rock sources in the northeastern United States, from ponds and pits storing old nuclear weapons and fuel in places like Hanford, Washington, and from mining activities located in many western states. This kind of contamination is prevalent in many other nations as well. An alarming number of these sites show uranium concentrations close to or above the EPA’s recommended ceiling of 30 parts per billion (ppb) — a level linked to kidney damage, cancer risk, and neurobehavioral changes in humans.

    The critical challenge lay in finding a practical remediation process exclusively sensitive to uranium, capable of extracting it from solution without producing toxic residues. And while earlier research showed that electrically charged carbon fiber could filter uranium from water, the results were partial and imprecise.

    Wang managed to crack these problems, drawing on her investigation of the behavior of graphene foam used in lithium-sulfur batteries. “The physical performance of this foam was unique because of its ability to attract certain chemical species to its surface,” she says. “I thought the ligands in graphene foam would work well with uranium.”

    Simple, efficient, and clean

    The team set to work transforming graphene foam into the equivalent of a uranium magnet. They learned that by sending an electric charge through the foam, splitting water and releasing hydrogen, they could increase the local pH and induce a chemical change that pulled uranium ions out of solution. The researchers found that the uranium would graft itself onto the foam’s surface, where it formed a never-before-seen crystalline uranium hydroxide. On reversal of the electric charge, the mineral, which resembles fish scales, slipped easily off the foam.
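
    In outline, this is cathodic water-splitting chemistry: the current drives a reduction at the foam that releases hydrogen gas and hydroxide ions, and the resulting rise in local pH pushes dissolved uranium, typically present as the uranyl ion, out of solution as a hydroxide. A schematic sketch of the two steps; the second line is a generic hydroxide-precipitation reaction, not the stoichiometry of the new crystalline phase reported in the paper:

    $$\mathrm{2\,H_2O + 2\,e^- \longrightarrow H_2\uparrow + 2\,OH^-}$$
    $$\mathrm{UO_2^{2+} + 2\,OH^- \longrightarrow UO_2(OH)_2\downarrow}$$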

    It took hundreds of tries to get the chemical composition and electrolysis just right. “We kept changing the functional chemical groups to get them to work correctly,” says Helal. “And the foam was initially quite fragile, tending to break into pieces, so we needed to make it stronger and more durable,” says Wang.

    This uranium filtration process is simple, efficient, and clean, according to Li: “Each time it’s used, our foam can capture four times its own weight of uranium, and we can achieve an extraction capacity of 4,000 mg per gram, which is a major improvement over other methods,” he says. “We’ve also made a major breakthrough in reusability, because the foam can go through seven cycles without losing its extraction efficiency.” The graphene foam performs just as well in seawater, where it reduces uranium concentrations from 3 parts per million to 19.9 ppb, showing that other ions in the brine do not interfere with filtration.
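
    As a back-of-the-envelope check, the reported capacity can be turned into a sizing estimate. A minimal sketch: the 4,000 mg/g capacity and the 30 ppb EPA limit come from the article, while the water volume and starting concentration are illustrative assumptions, and the calculation idealizes the foam as reaching full capacity:

    ```python
    # Back-of-envelope sizing for the graphene oxide foam filter.
    CAPACITY_MG_PER_G = 4000    # reported capacity: 4,000 mg uranium per gram of foam
    EPA_LIMIT_UG_PER_L = 30     # EPA ceiling for uranium in drinking water (30 ppb)

    def foam_needed_g(volume_l: float, uranium_ug_per_l: float) -> float:
        """Grams of foam to bring volume_l liters of water from the given
        uranium concentration down to the EPA limit (idealized: the foam
        is assumed to reach its full extraction capacity)."""
        excess_ug_per_l = max(uranium_ug_per_l - EPA_LIMIT_UG_PER_L, 0)
        total_mg = excess_ug_per_l * volume_l / 1000   # ug -> mg
        return total_mg / CAPACITY_MG_PER_G

    # Illustrative: 1,000 liters of water at 100 ppb uranium
    print(f"{foam_needed_g(1000, 100) * 1000:.1f} mg of foam")  # ~17.5 mg
    ```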

    The team believes its low-cost, effective device could become a new kind of home water filter, fitting on faucets like those of commercial brands. “Some of these filters already have activated carbon, so maybe we could modify these, add low-voltage electricity to filter uranium,” says Li.

    “The uranium extraction this device achieves is very impressive when compared to existing methods,” says Ho Jin Ryu, associate professor of nuclear and quantum engineering at the Korea Advanced Institute of Science and Technology. Ryu, who was not involved in the research, believes that the demonstration of graphene foam reusability is a “significant advance,” and that “the technology of local pH control to enhance uranium deposition will be impactful because the scientific principle can be applied more generally to heavy metal extraction from polluted water.”

    The researchers have already begun investigating broader applications of their method. “There is a science to this, so we can modify our filters to be selective for other heavy metals such as lead, mercury, and cadmium,” says Li. He notes that radium is another significant danger for locales in the United States and elsewhere that lack resources for reliable drinking water infrastructure.

    “In the future, instead of a passive water filter, we could be using a smart filter powered by clean electricity that turns on electrolytic action, which could extract multiple toxic metals, tell you when to regenerate the filter, and give you quality assurance about the water you’re drinking.”

  • Vapor-collection technology saves water while clearing the air

    About two-fifths of all the water that gets withdrawn from lakes, rivers, and wells in the U.S. is used not for agriculture, drinking, or sanitation, but to cool the power plants that provide electricity from fossil fuels or nuclear power. Over 65 percent of these plants use evaporative cooling, leading to huge white plumes that billow from their cooling towers, which can be a nuisance and, in some cases, even contribute to dangerous driving conditions.

    Now, a small company based on technology recently developed at MIT by the Varanasi Research Group is hoping to reduce both the water needs at these plants and the resultant plumes — and to potentially help alleviate water shortages in areas where power plants put pressure on local water systems.

    The technology is surprisingly simple in principle, but developing it to the point where it can now be tested at full scale on industrial plants was a more complex proposition. That required the real-world experience that the company’s founders gained from installing prototype systems, first on MIT’s natural-gas-powered cogeneration plant and then on MIT’s nuclear research reactor.

    In these demanding tests, which involved exposure to not only the heat and vibrations of a working industrial plant but also the rigors of New England winters, the system proved its effectiveness at both eliminating the vapor plume and recapturing water. And it purified the water in the process, so that it was 100 times cleaner than the incoming cooling water. The system is now being prepared for full-scale tests in a commercial power plant and in a chemical processing plant.

    “Campus as a living laboratory”

    The technology was originally envisioned by professor of mechanical engineering Kripa Varanasi to develop efficient water-recovery systems by capturing water droplets from both natural fog and plumes from power plant cooling towers. The project began as part of the doctoral thesis research of Maher Damak PhD ’18, with funding from the MIT Tata Center for Technology and Design, to improve the efficiency of fog-harvesting systems like the ones used in some arid coastal regions as a source of potable water. Those systems, which generally consist of plastic or metal mesh hung vertically in the path of fogbanks, are extremely inefficient, capturing only about 1 to 3 percent of the water droplets that pass through them.

    Varanasi and Damak found that vapor collection could be made much more efficient by first zapping the tiny droplets of water with a beam of electrically charged particles, or ions, to give each droplet a slight electric charge. Then, the stream of droplets passes through a wire mesh, like a window screen, that has an opposite electrical charge. This causes the droplets to be strongly attracted to the mesh, where they fall away due to gravity and can be collected in trays placed below the mesh.
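
    The advantage of charging can be made concrete with the standard force balance for a small droplet: the electrostatic pull toward the mesh, qE, is resisted by Stokes drag, giving a sideways drift velocity v = qE / (6πμr). A minimal sketch of that balance; every numerical value below is an illustrative assumption, not a figure from the study:

    ```python
    import math

    MU_AIR = 1.8e-5      # dynamic viscosity of air, Pa*s (standard value)
    E_FIELD = 1e5        # assumed electric field near the mesh, V/m
    DROP_RADIUS = 5e-6   # assumed fog droplet radius, m
    DROP_CHARGE = 1e-15  # assumed charge acquired from the ion beam, C

    def drift_velocity(q: float, e_field: float, radius: float) -> float:
        """Drift speed (m/s) of a charged droplet toward the mesh, from the
        balance of Coulomb force q*E against Stokes drag 6*pi*mu*r*v."""
        return q * e_field / (6 * math.pi * MU_AIR * radius)

    print(f"{drift_velocity(DROP_CHARGE, E_FIELD, DROP_RADIUS):.2f} m/s")  # ~0.06 m/s
    ```

    Even a drift of a few centimeters per second, sustained as a droplet passes the mesh, is enough sideways motion to steer it onto a wire rather than through an opening.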

    Lab tests showed the concept worked, and the researchers, joined by Karim Khalil PhD ’18, won the MIT $100K Entrepreneurship Competition in 2018 for the basic concept. The nascent company, which they called Infinite Cooling, with Damak as CEO, Khalil as CTO, and Varanasi as chairperson, immediately went to work setting up a test installation on one of the cooling towers of MIT’s natural-gas-powered Central Utility Plant, with funding from the MIT Office of Sustainability. After experimenting with various configurations, they were able to show that the system could indeed eliminate the plume and produce water of high purity.

    Professor Jacopo Buongiorno in the Department of Nuclear Science and Engineering immediately spotted a good opportunity for collaboration, offering the use of MIT’s Nuclear Reactor Laboratory research facility for further testing of the system with the help of NRL engineer Ed Block. With its 24/7 operation and its higher-temperature vapor emissions, the plant would provide a more stringent real-world test of the system, as well as proving its effectiveness in an actual operating reactor licensed by the Nuclear Regulatory Commission, an important step in “de-risking” the technology so that electric utilities could feel confident in adopting the system.

    After the system was installed above one of the plant’s four cooling towers, testing showed that the water being collected was more than 100 times cleaner than the feedwater coming into the cooling system. It also proved that the installation — which, unlike the earlier version, had its mesh screens mounted vertically, parallel to the vapor stream — had no effect at all on the operation of the plant. Video of the tests dramatically illustrates how, as soon as power is switched on to the collecting mesh, the white plume of vapor disappears completely.

    The high temperature and volume of the vapor plume from the reactor’s cooling towers represented “kind of a worst-case scenario in terms of plumes,” Damak says, “so if we can capture that, we can basically capture anything.”

    Working with MIT’s Nuclear Reactor Laboratory, Varanasi says, “has been quite an important step because it helped us to test it at scale. … It really both validated the water quality and the performance of the system.” The process, he says, “shows the importance of using the campus as a living laboratory. It allows us to do these kinds of experiments at scale, and also showed the ability to sustainably reduce the water footprint of the campus.”

    Far-reaching benefits

    Power plant plumes are often considered an eyesore and can lead to local opposition to new power plants because of obscured views, and even traffic hazards when the plumes blow across roadways. “The ability to eliminate the plumes could be an important benefit, allowing plants to be sited in locations that might otherwise be restricted,” Buongiorno says. At the same time, the system could eliminate a significant amount of water used by the plants and then lost to the sky, potentially alleviating pressure on local water systems, which could be especially helpful in arid regions.

    The system is essentially a distillation process, and the pure water it produces could go into power plant boilers — which are separate from the cooling system — that require high-purity water. That might reduce the need for both fresh water and purification systems for the boilers.

    What’s more, in many arid coastal areas power plants are cooled directly with seawater. This system would essentially add a water desalination capability to the plant, at a fraction of the cost of building a new standalone desalination plant, and at an even smaller fraction of its operating costs since the heat would essentially be provided for free.

    Contamination of water is typically measured by testing its electrical conductivity, which increases with the amount of salts and other contaminants it contains. Water used in power plant cooling systems typically measures 3,000 microsiemens per centimeter, Khalil explains, while the water supply in the City of Cambridge is typically around 500 or 600 microsiemens per centimeter. The water captured by this system, he says, typically measures below 50 microsiemens per centimeter.
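
    Those figures can be read directly as purity ratios. A small worked check using only the conductivities quoted above (conductivity is a proxy for total dissolved contaminants, so the ratios are indicative rather than exact):

    ```python
    # Purity ratios implied by the conductivity figures quoted above (uS/cm).
    cooling_water = 3000   # typical power plant cooling-system water
    cambridge_tap = 550    # midpoint of the quoted 500-600 range
    captured = 50          # upper bound for water captured by the system

    print(f"vs. cooling water: >{cooling_water / captured:.0f}x lower conductivity")  # >60x
    print(f"vs. Cambridge tap water: >{cambridge_tap / captured:.0f}x lower")         # >11x
    ```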

    Thanks to the validation provided by the testing on MIT’s plants, the company has now been able to secure arrangements for its first two installations on operating commercial plants, which should begin later this year. One is a 900-megawatt power plant where the system’s clean water production will be a major advantage, and the other is at a chemical manufacturing plant in the Midwest.

    In many locations power plants have to pay for the water they use for cooling, Varanasi says, and the new system is expected to reduce the need for water by up to 20 percent. For a typical power plant, that alone could account for about a million dollars saved in water costs per year, he says.

    “Innovation has been a hallmark of the U.S. commercial industry for more than six decades,” says Maria G. Korsnick, president and CEO of the Nuclear Energy Institute, who was not involved in the research. “As the changing climate impacts every aspect of life, including global water supplies, companies across the supply chain are innovating for solutions. The testing of this innovative technology at MIT provides a valuable basis for its consideration in commercial applications.”

  • A new way to detect the SARS-CoV-2 Alpha variant in wastewater

    Researchers from the Antimicrobial Resistance (AMR) interdisciplinary research group at the Singapore-MIT Alliance for Research and Technology (SMART), MIT’s research enterprise in Singapore, alongside collaborators from Biobot Analytics, Nanyang Technological University (NTU), and MIT, have successfully developed an innovative, open-source molecular detection method that is able to detect and quantify the B.1.1.7 (Alpha) variant of SARS-CoV-2. The breakthrough paves the way for rapid, inexpensive surveillance of other SARS-CoV-2 variants in wastewater.

    As the world continues to battle and contain Covid-19, the recent identification of SARS-CoV-2 variants with higher transmissibility and increased severity has made developing convenient variant tracking methods essential. Currently identified variants include the B.1.1.7 (Alpha) variant first identified in the United Kingdom and the B.1.617.2 (Delta) variant first detected in India.

    Wastewater surveillance has emerged as a critical public health tool to safely and efficiently track the SARS-CoV-2 pandemic in a non-intrusive manner, providing complementary information that enables health authorities to acquire actionable community-level information. Most recently, viral fragments of SARS-CoV-2 were detected in housing estates in Singapore through a proactive wastewater surveillance program. This information, alongside surveillance testing, allowed Singapore’s Ministry of Health to swiftly respond, isolate, and conduct swab tests as part of precautionary measures.

    However, detecting variants through wastewater surveillance is less commonplace due to challenges in existing technology. Next-generation sequencing for wastewater surveillance is time-consuming and expensive. Tests also lack the sensitivity required to detect low variant abundances in dilute and mixed wastewater samples due to inconsistent and/or low sequencing coverage.

    The method developed by the researchers is uniquely tailored to address these challenges and expands the utility of wastewater surveillance beyond testing for SARS-CoV-2, toward tracking the spread of SARS-CoV-2 variants of concern.

    Wei Lin Lee, research scientist at SMART AMR and first author on the paper, adds: “This is especially important in countries battling SARS-CoV-2 variants. Wastewater surveillance will help find out the true proportion and spread of the variants in the local communities. Our method is sensitive enough to detect variants in highly diluted SARS-CoV-2 concentrations typically seen in wastewater samples, and produces reliable results even for samples which contain multiple SARS-CoV-2 lineages.”

    Led by Janelle Thompson, NTU associate professor, and Eric Alm, MIT professor and SMART AMR principal investigator, the team’s study, “Quantitative SARS-CoV-2 Alpha variant B.1.1.7 Tracking in Wastewater by Allele-Specific RT-qPCR” has been published in Environmental Science & Technology Letters. The research explains the innovative, open-source molecular detection method based on allele-specific RT-qPCR that detects and quantifies the B.1.1.7 (Alpha) variant. The developed assay, tested and validated in wastewater samples across 19 communities in the United States, is able to reliably detect and quantify low levels of the B.1.1.7 (Alpha) variant with low cross-reactivity, and at variant proportions down to 1 percent in a background of mixed SARS-CoV-2 viruses.

    Targeting spike protein mutations that are highly predictive of the B.1.1.7 (Alpha) variant, the method can be implemented using commercially available RT-qPCR protocols. Unlike commercially available products that use proprietary primers and probes for wastewater surveillance, the paper details the open-source method and its development that can be freely used by other organizations and research institutes for their work on wastewater surveillance of SARS-CoV-2 and its variants.
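
    For readers unfamiliar with allele-specific RT-qPCR, the quantification logic can be sketched simply: each PCR cycle roughly doubles the target, so the difference in cycle thresholds (Ct) between an allele-specific assay and a pan-SARS-CoV-2 assay encodes the variant’s share of the total. A minimal sketch of that generic delta-Ct reasoning, assuming ideal two-fold amplification; this is not the paper’s exact calibration:

    ```python
    def variant_fraction(ct_variant: float, ct_total: float,
                         efficiency: float = 2.0) -> float:
        """Estimate a variant's share of total SARS-CoV-2 RNA from the cycle
        thresholds of an allele-specific assay and a pan-SARS-CoV-2 assay,
        assuming quantity ~ efficiency ** (-Ct)."""
        return efficiency ** (ct_total - ct_variant)

    # Illustrative: the variant assay crosses threshold ~6.6 cycles after the
    # pan-SARS-CoV-2 assay, i.e., 2**-6.6 ~ 0.01 -- the ~1 percent detection
    # floor reported in the study.
    print(f"{variant_fraction(ct_variant=33.6, ct_total=27.0):.3f}")  # 0.010
    ```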

    The breakthrough by the research team in Singapore is currently used by Biobot Analytics, an MIT startup and global leader in wastewater epidemiology headquartered in Cambridge, Massachusetts, serving states and localities throughout the United States. Using the method, Biobot Analytics is able to accept and analyze wastewater samples for the B.1.1.7 (Alpha) variant and plans to add additional variants to its analysis as methods are developed. For example, the SMART AMR team is currently developing specific assays that will be able to detect and quantify the B.1.617.2 (Delta) variant, which has recently been identified as a variant of concern by the World Health Organization.

    “Using the team’s innovative method, we have been able to monitor the B.1.1.7 (Alpha) variant in local populations in the U.S. — empowering leaders with information about Covid-19 trends in their communities and allowing them to make considered recommendations and changes to control measures,” says Mariana Matus PhD ’18, Biobot Analytics CEO and co-founder.

    “This method can be rapidly adapted to detect new variants of concern beyond B.1.1.7,” adds MIT’s Alm. “Our partnership with Biobot Analytics has translated our research into real-world impact beyond the shores of Singapore, aiding in the detection of Covid-19 and its variants and serving as an early warning system and guidance for policymakers as they trace infection clusters and consider suitable public health measures.”

    The research is carried out by SMART and supported by the National Research Foundation (NRF) Singapore under its Campus for Research Excellence And Technological Enterprise (CREATE) program.

    SMART was established by MIT in partnership with the National Research Foundation of Singapore (NRF) in 2007. SMART is the first entity in CREATE developed by NRF. SMART serves as an intellectual and innovation hub for research interactions between MIT and Singapore, undertaking cutting-edge research projects in areas of interest to both Singapore and MIT. SMART currently comprises an Innovation Center and five IRGs: AMR, Critical Analytics for Manufacturing Personalized-Medicine, Disruptive and Sustainable Technologies for Agricultural Precision, Future Urban Mobility, and Low Energy Electronic Systems.

    The AMR interdisciplinary research group is a translational research and entrepreneurship program that tackles the growing threat of antimicrobial resistance. By leveraging talent and convergent technologies across Singapore and MIT, AMR aims to develop multiple innovative and disruptive approaches to identify, respond to, and treat drug-resistant microbial infections. Through strong scientific and clinical collaborations, its goal is to provide transformative, holistic solutions for Singapore and the world.

  • A new approach to preventing human-induced earthquakes

    When humans pump large volumes of fluid into the ground, they can set off potentially damaging earthquakes, depending on the underlying geology. This has been the case in certain oil- and gas-producing regions, where wastewater, often mixed with oil, is disposed of by injecting it back into the ground — a process that has triggered sizable seismic events in recent years.

    Now MIT researchers, working with an interdisciplinary team of scientists from industry and academia, have developed a method to manage such human-induced seismicity, and have demonstrated that the technique successfully reduced the number of earthquakes occurring in an active oil field.

    Their results, appearing today in Nature, could help mitigate earthquakes caused by the oil and gas industry, not just from the injection of wastewater produced with oil, but also of wastewater produced by hydraulic fracturing, or “fracking.” The team’s approach could also help prevent quakes from other human activities, such as the filling of water reservoirs and aquifers, and the sequestration of carbon dioxide in deep geologic formations.

    “Triggered seismicity is a problem that goes way beyond producing oil,” says study lead author Bradford Hager, the Cecil and Ida Green Professor of Earth Sciences in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “This is a huge problem for society that will have to be confronted if we are to safely inject carbon dioxide into the subsurface. We demonstrated the kind of study that will be necessary for doing this.”

    The study’s co-authors include Ruben Juanes, professor of civil and environmental engineering at MIT, and collaborators from the University of California at Riverside, the University of Texas at Austin, Harvard University, and Eni, a multinational oil and gas company based in Italy.

    Safe injections

    Both natural and human-induced earthquakes occur along geologic faults, or fractures between two blocks of rock in the Earth’s crust. In stable periods, the rocks on either side of a fault are held in place by the pressures generated by surrounding rocks. But when a large volume of fluid is suddenly injected at high rates, it can upset a fault’s fluid stress balance. In some cases, this sudden injection can lubricate a fault and cause rocks on either side to slip and trigger an earthquake.

    The most common source of such fluid injections is from the oil and gas industry’s disposal of wastewater that is brought up along with oil. Field operators dispose of this water through injection wells that continuously pump the water back into the ground at high pressures.

    “There’s a lot of water produced with the oil, and that water is injected into the ground, which has caused a large number of quakes,” Hager notes. “So, for a while, oil-producing regions in Oklahoma had more magnitude 3 quakes than California, because of all this wastewater that was being injected.”

    In recent years, a similar problem arose in southern Italy, where injection wells on oil fields operated by Eni triggered microseisms in an area where large naturally occurring earthquakes had previously occurred. The company, looking for ways to address the problem, sought consultation from Hager and Juanes, both leading experts in seismicity and subsurface flows.

    “This was an opportunity for us to get access to high-quality seismic data about the subsurface, and learn how to do these injections safely,” Juanes says.

    Seismic blueprint

    The team made use of detailed information, accumulated by the oil company over years of operation in the Val d’Agri oil field, a region of southern Italy that lies in a tectonically active basin. The data included information about the region’s earthquake record, dating back to the 1600s, as well as the structure of rocks and faults, and the state of the subsurface corresponding to the various injection rates of each well.

    This video shows the change in stress on the geologic faults of the Val d’Agri field from 2001 to 2019, as predicted by a new MIT-derived model. Video credit: A. Plesch (Harvard University)

    This video shows small earthquakes occurring on the Costa Molina fault within the Val d’Agri field from 2004 to 2016. Each event is shown for two years fading from an initial bright color to the final dark color. Video credit: A. Plesch (Harvard University)

    The researchers integrated these data into a coupled subsurface flow and geomechanical model, which predicts how the stresses and strains of underground structures evolve as the volume of pore fluid, such as from the injection of water, changes. They connected this model to an earthquake mechanics model in order to translate the changes in underground stress and fluid pressure into a likelihood of triggering earthquakes. They then quantified the rate of earthquakes associated with various rates of water injection, and identified scenarios that were unlikely to trigger large quakes.
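
    A standard way to express that last translation step is the change in Coulomb failure stress on a fault, in which rising pore pressure reduces the effective normal stress clamping the fault shut. The team’s coupled model is far more detailed, but a minimal sketch of the criterion, with illustrative parameter values, captures why injection is destabilizing:

    ```python
    def delta_cfs(d_shear: float, d_normal: float, d_pore: float,
                  friction: float = 0.6) -> float:
        """Change in Coulomb failure stress (MPa) on a fault.

        d_shear  : change in shear stress along the slip direction (MPa)
        d_normal : change in fault-normal compressive stress (MPa)
        d_pore   : change in pore fluid pressure (MPa)

        Positive values push the fault toward failure; raising pore pressure
        alone is destabilizing, which is how fluid injection triggers slip.
        """
        return d_shear - friction * (d_normal - d_pore)

    # Illustrative: injection raises pore pressure by 0.5 MPa with no change
    # in tectonic loading.
    print(f"dCFS = {delta_cfs(0.0, 0.0, 0.5):+.2f} MPa")  # +0.30 MPa
    ```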

    When they ran the models using data from 1993 through 2016, the predictions of seismic activity matched with the earthquake record during this period, validating their approach. They then ran the models forward in time, through the year 2025, to predict the region’s seismic response to three different injection rates: 2,000, 2,500, and 3,000 cubic meters per day. The simulations showed that large earthquakes could be avoided if operators kept injection rates at 2,000 cubic meters per day — a flow rate comparable to a small public fire hydrant.

    Eni field operators implemented the team’s recommended rate at the oil field’s single water injection well over a 30-month period between January 2017 and June 2019. In this time, the team observed only a few tiny seismic events, which coincided with brief periods when operators went above the recommended injection rate.

    “The seismicity in the region has been very low in these two-and-a-half years, with around four quakes of magnitude 0.5, as opposed to the hundreds of quakes, of up to magnitude 3, that were happening between 2006 and 2016,” Hager says.

    The results demonstrate that operators can successfully manage earthquakes by adjusting injection rates, based on the underlying geology. Juanes says the team’s modeling approach may help to prevent earthquakes related to other processes, such as the building of water reservoirs and the sequestration of carbon dioxide — as long as there is detailed information about a region’s subsurface.

    “A lot of effort needs to go into understanding the geologic setting,” says Juanes, who notes that, if carbon sequestration were carried out on depleted oil fields, “such reservoirs could have this type of history, seismic information, and geologic interpretation that you could use to build similar models for carbon sequestration. We show it’s at least possible to manage seismicity in an operational setting. And we offer a blueprint for how to do it.”

    This research was supported, in part, by Eni.

  • What will happen to sediment plumes associated with deep-sea mining?

    In certain parts of the deep ocean, scattered across the seafloor, lie baseball-sized rocks layered with minerals accumulated over millions of years. A region of the central Pacific, called the Clarion Clipperton Fracture Zone (CCFZ), is estimated to contain vast reserves of these rocks, known as “polymetallic nodules,” that are rich in nickel and cobalt — minerals that are commonly mined on land for the production of lithium-ion batteries in electric vehicles, laptops, and mobile phones.

    As demand for these batteries rises, efforts are moving forward to mine the ocean for these mineral-rich nodules. Such deep-sea-mining schemes propose sending down tractor-sized vehicles to vacuum up nodules and send them to the surface, where a ship would clean them and discharge any unwanted sediment back into the ocean. But the impacts of deep-sea mining — such as the effect of discharged sediment on marine ecosystems and how these impacts compare to traditional land-based mining — are currently unknown.

    Now oceanographers at MIT, the Scripps Institution of Oceanography, and elsewhere have carried out an experiment at sea for the first time to study the turbulent sediment plume that mining vessels would potentially release back into the ocean. Based on their observations, they developed a model that makes realistic predictions of how a sediment plume generated by mining operations would be transported through the ocean.

    The model predicts the size, concentration, and evolution of sediment plumes under various marine and mining conditions. These predictions, the researchers say, can now be used by biologists and environmental regulators to gauge whether and to what extent such plumes would impact surrounding sea life.

    “There is a lot of speculation about [deep-sea-mining’s] environmental impact,” says Thomas Peacock, professor of mechanical engineering at MIT. “Our study is the first of its kind on these midwater plumes, and can be a major contributor to international discussion and the development of regulations over the next two years.”

    The team’s study appears today in Communications Earth & Environment.

    Peacock’s co-authors at MIT include lead author Carlos Muñoz-Royo, Raphael Ouillon, Chinmay Kulkarni, Patrick Haley, Chris Mirabito, Rohit Supekar, Andrew Rzeznik, Eric Adams, Cindy Wang, and Pierre Lermusiaux, along with collaborators at Scripps, the U.S. Geological Survey, and researchers in Belgium and South Korea.

    Out to sea

    Current deep-sea-mining proposals are expected to generate two types of sediment plumes in the ocean: “collector plumes” that vehicles generate on the seafloor as they drive around collecting nodules 4,500 meters below the surface; and possibly “midwater plumes” that are discharged through pipes that descend 1,000 meters or more into the ocean’s aphotic zone, where sunlight rarely penetrates.

    In their new study, Peacock and his colleagues focused on the midwater plume and how the sediment would disperse once discharged from a pipe.

    “The science of the plume dynamics for this scenario is well-founded, and our goal was to clearly establish the dynamic regime for such plumes to properly inform discussions,” says Peacock, who is the director of MIT’s Environmental Dynamics Laboratory.

    To pin down these dynamics, the team went out to sea. In 2018, the researchers boarded the research vessel Sally Ride and set sail 50 kilometers off the coast of Southern California. They brought with them equipment designed to discharge sediment 60 meters below the ocean’s surface.  

    “Using foundational scientific principles from fluid dynamics, we designed the system so that it fully reproduced a commercial-scale plume, without having to go down to 1,000 meters or sail out several days to the middle of the CCFZ,” Peacock says.

    Over one week, the team ran a total of six plume experiments, using novel sensor systems such as a Phased Array Doppler Sonar (PADS) and an epsilometer developed by Scripps scientists to monitor where the plumes traveled and how they evolved in shape and concentration. The collected data revealed that the sediment, when initially pumped out of a pipe, was a highly turbulent cloud of suspended particles that mixed rapidly with the surrounding ocean water.

    “There was speculation this sediment would form large aggregates in the plume that would settle relatively quickly to the deep ocean,” Peacock says. “But we found the discharge is so turbulent that it breaks the sediment up into its finest constituent pieces, and thereafter it becomes dilute so quickly that the sediment then doesn’t have a chance to stick together.”

    Dilution

    The team had previously developed a model to predict the dynamics of a plume that would be discharged into the ocean. When they fed the experiment’s initial conditions into the model, it produced the same behavior that the team observed at sea, proving the model could accurately predict plume dynamics within the vicinity of the discharge.

    The researchers used these results to provide the correct input for simulations of ocean dynamics to see how far currents would carry the initially released plume.

    “In a commercial operation, the ship is always discharging new sediment. But at the same time the background turbulence of the ocean is always mixing things. So you reach a balance. There’s a natural dilution process that occurs in the ocean that sets the scale of these plumes,” Peacock says. “What is key to determining the extent of the plumes is the strength of the ocean turbulence, the amount of sediment that gets discharged, and the environmental threshold level at which there is impact.”

    Based on their findings, the researchers have developed formulae to calculate the scale of a plume depending on a given environmental threshold. For instance, if regulators determine that a certain concentration of sediments could be detrimental to surrounding sea life, the formula can be used to calculate how far a plume above that concentration would extend, and what volume of ocean water would be impacted over the course of a 20-year nodule mining operation.
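
    The team’s formulae are calibrated to their field measurements, but the flavor of such a calculation can be conveyed with the classical steady-state plume solution, in which a continuous discharge is diluted by turbulent mixing as the current carries it downstream. A generic textbook sketch, not the study’s formulae, with all parameter values as illustrative assumptions:

    ```python
    import math

    def centerline_concentration(q_kg_s: float, x_m: float,
                                 ky: float = 1.0, kz: float = 1e-3) -> float:
        """Centerline concentration (kg/m^3) a distance x_m downstream of a
        continuous point release of q_kg_s, for the classical advection-
        diffusion plume with horizontal/vertical eddy diffusivities ky, kz
        (m^2/s): C = Q / (4 * pi * x * sqrt(ky * kz))."""
        return q_kg_s / (4 * math.pi * x_m * math.sqrt(ky * kz))

    # Illustrative: 10 kg/s of discharged sediment, sampled 1 km and 10 km
    # downstream (1 kg/m^3 = 1000 mg/L).
    for x in (1_000, 10_000):
        print(f"x = {x:>6} m : {centerline_concentration(10, x) * 1000:.1f} mg/L")
    ```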

    “At the heart of the environmental question surrounding deep-sea mining is the extent of sediment plumes,” Peacock says. “It’s a multiscale problem, from micron-scale sediments, to turbulent flows, to ocean currents over thousands of kilometers. It’s a big jigsaw puzzle, and we are uniquely equipped to work on that problem and provide answers founded in science and data.”

    The team is now working on collector plumes, having recently returned from several weeks at sea to perform the first environmental monitoring of a nodule collector vehicle in the deep ocean in over 40 years.

    This research was supported in part by the MIT Environmental Solutions Initiative, the UC Ship Time Program, the MIT Policy Lab, the 11th Hour Project of the Schmidt Family Foundation, the Benioff Ocean Initiative, and Fundación Bancaria “la Caixa.”

  • Reducing emissions by decarbonizing industry

    A critical challenge in meeting the Paris Agreement’s long-term goal of keeping global warming well below 2 degrees Celsius is to vastly reduce carbon dioxide (CO2) and other greenhouse gas emissions generated by the most energy-intensive industries. According to a recent report by the International Energy Agency, these industries — cement, iron and steel, chemicals — account for about 20 percent of global CO2 emissions. Emissions from these industries are notoriously difficult to abate because, in addition to emissions associated with energy use, a significant portion of industrial emissions come from the process itself.

    For example, in the cement industry, about half the emissions come from the decomposition of limestone into lime and CO2. While a shift to zero-carbon energy sources such as solar or wind-powered electricity could lower CO2 emissions in the power sector, there are no easy substitutes for emissions-intensive industrial processes.

    Enter industrial carbon capture and storage (CCS). This technology, which extracts point-source carbon emissions and sequesters them underground, has the potential to remove 90 to 99 percent of CO2 emissions from an industrial facility, including both energy-related and process emissions. And that raises the question: Might CCS alone enable hard-to-abate industries to continue to grow while eliminating nearly all of their CO2 emissions?
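
    The arithmetic behind that question is simple but stark. Using the article’s own figures (about half of cement emissions are process CO2 from limestone, and CCS can capture 90 to 99 percent of a facility’s total emissions), a minimal sketch of the residual emissions under each strategy:

    ```python
    # Residual CO2 from a cement plant under two strategies, with the plant's
    # total emissions normalized to 1.0. Shares are the article's figures.
    PROCESS_SHARE = 0.5   # limestone -> lime + CO2; not removed by clean energy
    ENERGY_SHARE = 0.5    # combustion/energy-related emissions

    def residual_with_clean_energy() -> float:
        """Zero-carbon energy eliminates energy emissions, not process CO2."""
        return PROCESS_SHARE

    def residual_with_ccs(capture_rate: float) -> float:
        """CCS captures both energy and process emissions at the stack."""
        return (PROCESS_SHARE + ENERGY_SHARE) * (1 - capture_rate)

    print(f"{residual_with_clean_energy():.2f}")  # 0.50 -- half still emitted
    print(f"{residual_with_ccs(0.90):.2f}")       # 0.10
    print(f"{residual_with_ccs(0.99):.2f}")       # 0.01
    ```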

    According to a new study in the journal Applied Energy, co-authored by researchers at the MIT Joint Program on the Science and Policy of Global Change, the MIT Energy Initiative, and ExxonMobil, the answer is an unequivocal yes.

    Using an enhanced version of the MIT Economic Projection and Policy Analysis (EPPA) model that represents different industrial CCS technology choices — and assuming that CCS is the only greenhouse gas emissions mitigation option available to hard-to-abate industries — the study assesses the long-term economic and environmental impacts of CCS deployment under a climate policy aimed at capping the rise in average global surface temperature at 2 C above preindustrial levels.

    The researchers find that absent industrial CCS deployment, the global costs of implementing the 2 C policy are higher by 12 percent in 2075 and 71 percent in 2100, relative to policy costs with CCS. They conclude that industrial CCS enables continued growth in the production and consumption of energy-intensive goods from hard-to-abate industries, along with dramatic reductions in the CO2 emissions they generate. Their projections show that as industrial CCS gains traction mid-century, this growth occurs globally as well as within geographical regions (primarily in China, Europe, and the United States) and the cement, iron and steel, and chemical sectors.

    “Because it can enable deep reductions in industrial emissions, industrial CCS is an essential mitigation option in the successful implementation of policies aligned with the Paris Agreement’s long-term climate targets,” says Sergey Paltsev, the study’s lead author and a deputy director of the MIT Joint Program and senior research scientist at the MIT Energy Initiative. “As the technology advances, our modeling approach offers decision-makers a pathway for projecting the deployment of industrial CCS across industries and regions.”

    But such advances will not take place without substantial, ongoing funding.

    “Sustained government policy support across decades will be needed if CCS is to realize its potential to promote the growth of energy-intensive industries and a stable climate,” says Howard Herzog, a co-author of the study and senior research engineer at the MIT Energy Initiative.

    The researchers also find that advanced CCS options such as cryogenic carbon capture (CCC), in which extracted CO2 is cooled to solid form using far less power than conventional CCS technologies for coal- and gas-fired plants, could help expand the use of CCS in industrial settings through further reductions in production costs and emissions.

    The study was supported by sponsors of the MIT Joint Program and by ExxonMobil through its membership in the MIT Energy Initiative.

  • Manipulating magnets in the quest for fusion

    “You get the high field, you get the performance.”

    Senior Research Scientist Brian LaBombard is summarizing what might be considered a guiding philosophy behind designing and engineering fusion devices at MIT’s Plasma Science and Fusion Center (PSFC). Beginning in 1972 with the Alcator A tokamak, through Alcator C (1978) and Alcator C-Mod (1991), the PSFC has used magnets with high fields to confine the hot plasma in compact, high-performance tokamaks. Joining what was then the Plasma Fusion Center as a graduate student in 1978, just as Alcator A was finishing its run, LaBombard is one of the few who has worked with each iteration of the high-field concept. Now he has turned his attention to the PSFC’s latest fusion venture, a fusion energy project called SPARC.
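
    The physics behind that philosophy can be stated compactly. Fusion power density scales with the square of the plasma pressure, and at a fixed normalized pressure (beta) the achievable pressure scales with the square of the magnetic field, so performance rises steeply with field strength. This is the standard tokamak scaling argument, not a SPARC-specific result:

    $$P_{\text{fus}} \propto p^2 \propto \left(\beta B^2\right)^2 = \beta^2 B^4$$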

    Designed in collaboration with MIT spinoff Commonwealth Fusion Systems (CFS), SPARC employs novel high-temperature superconducting (HTS) magnets at high field to achieve fusion that will produce net energy gain. Some of these magnets will wrap toroidally around the tokamak’s doughnut-shaped vacuum chamber, confining the fusion plasma and preventing damage to the walls of the device.

    The PSFC has spent three years researching, developing, and manufacturing a scaled version of these toroidal field (TF) coils — the toroidal field model coil, or TFMC. Before the TF coils can be built for SPARC, LaBombard and his team need to test the model coil under the conditions that it will experience in this tokamak.

    HTS magnets need to be cooled in order to remain superconducting, and to be protected from the heat generated by current. For testing, the TFMC will be enclosed in a cryostat, cooled to the low temperatures needed for eventual tokamak operation, and charged with current to produce magnetic field. How the magnet responds as the current is provided to the coil will determine if the technology is in hand to construct the 18 TF coils for SPARC.

    A history of achievement

    That LaBombard is part of the PSFC’s next fusion project is not unusual; that he is involved in designing, engineering, and testing the magnets is. Until 2018, when he led the R&D research team for one of the magnet designs being considered for SPARC, LaBombard’s 30-plus years of celebrated research had focused on other areas of the fusion question.

    As a graduate student, he gained early acclaim for the research he reported in his PhD thesis. Working on Alcator C, he made groundbreaking discoveries about the plasma physics in the “boundary” region of the tokamak, between the edge of the fusing core and the wall of the machine. With typical modesty, LaBombard credits some of his success to the fact that the topic was not well-studied, and that Alcator C provided measurements not possible on other machines.

    “People knew about the boundary, but nobody was really studying it in detail. On Alcator C, there were interesting phenomena, such as marfes [multifaceted asymmetric radiation from the edge], being detected for the first time. This pushed me to make boundary layer measurements in great detail that no one had ever seen before. It was all new territory, so I made a big splash.”

    That splash established him as a leading researcher in the field of boundary plasmas. After a two-year turn at the University of California at Los Angeles working on a plasma-wall test facility called PISCES, LaBombard, who grew up in New England, was happy to return to MIT to join the PSFC’s new Alcator C-Mod project.

    Over the next 28 years of C-Mod’s construction phase and operation, LaBombard continued to make groundbreaking contributions to understanding tokamak edge and divertor plasmas, and to design internal components that can survive the harsh conditions and provide plasma control — including C-Mod’s vertical target plate divertor and a unique divertor cryopump system. That experience led him to conceive of the “X-point target divertor” for handling extreme fusion power exhaust and to propose a national Advanced Divertor tokamak eXperiment (ADX) to test such ideas.

    All along, LaBombard’s true passion was in creating revolutionary diagnostics to unfold boundary layer physics, and in guiding graduate students to do the same: an Omegatron to measure impurity concentrations directly in the boundary plasma, resolved by charge-to-mass ratio; fast-scanning Langmuir-Mach probes to measure plasma flows; a Shoelace Antenna to provide insight into plasma fluctuations at the edge; and a Mirror Langmuir Probe for real-time, high-bandwidth measurements of plasma turbulence.

    Switching sides

    His expertise established, he could have continued this focus on the edge of the plasma through collaborations with other laboratories and at the PSFC. Instead, he finds himself on the other side of the vacuum chamber, immersed in magnet design and technology. Challenged with finding an effective HTS magnet design for SPARC, he and his team were able to propose a winning strategy, one that seemed most likely to achieve the compact high field and high performance that PSFC tokamaks have been known for.

    LaBombard is stimulated by his new direction and excited about the upcoming test of the TFMC. His new role takes advantage of his physics background in electricity and magnetism. It also supports his passion for designing and building things, which he honed as a high school apprentice to his machinist father and explored professionally building systems for Alcator C-Mod.

    “I view my principal role as making sure the TF coil works electrically, the way it’s supposed to,” he says. “So it produces the magnetic field without damaging the coil.”

    A successful test would validate the understanding of how the new magnet technology works, and would prepare the team to build magnets for SPARC.

    Among those overseeing the hours of TFMC testing will be graduate students, current and former, reminding LaBombard of his own student days working on Alcator C, and of his years supervising students on Alcator C-Mod.

    “Those students were directly involved with Alcator C-Mod. They would jump in, make things happen — and as a team. This team spirit really enabled everyone to excel.

    “And looking to when SPARC was taking shape, you could see that across the board, from the new folks to the younger folks, they really got engaged by the spirit of Alcator — by recognition of the plasma performance that can be made possible by high magnetic fields.”

    He laughs as he looks to the past and to the future.

    “And they are taking it to SPARC.”