More stories

  •

    Cleaning up industrial filtration

    If you wanted to get pasta out of a pot of water, would you boil off the water, or use a strainer? While home cooks would choose the strainer, many industries continue to use energy-intensive thermal methods of separating out liquids. In some cases, that’s because it’s difficult to make a filtration system for chemical separation, which requires pores small enough to separate molecules.

    In other cases, membranes exist to separate liquids, but they are made of fragile polymers, which can break down or gum up in industrial use.

    Via Separations, a startup that emerged from MIT in 2017, has set out to address these challenges with a membrane that is cost-effective and robust. Made of graphene oxide (a “cousin” of pencil lead), the membrane can reduce the amount of energy used in industrial separations by 90 percent, according to Shreya Dave PhD ’16, company co-founder and CEO.

    This is valuable because separation processes account for about 22 percent of all in-plant energy use in the United States, according to Oak Ridge National Laboratory. By making such processes significantly more efficient, Via Separations plans to both save energy and address the significant emissions produced by thermal processes. “Our goal is eliminating 500 megatons of carbon dioxide emissions by 2050,” Dave says.


    What do our passions for pasta and decarbonizing the Earth have in common? MIT alumna Shreya Dave PhD ’16 explains how she and her team at Via Separations are building the equivalent of a pasta strainer to separate chemical compounds for industry.

    Via Separations began piloting its technology this year at a U.S. paper company and expects to deploy a full commercial system there in the spring of 2022. “Our vision is to help manufacturers slow carbon dioxide emissions next year,” Dave says.

    MITEI Seed Grant

    The story of Via Separations begins in 2012, when the MIT Energy Initiative (MITEI) awarded a Seed Fund grant to Professor Jeffrey Grossman, who is now the Morton and Claire Goulder and Family Professor in Environmental Systems and head of MIT’s Department of Materials Science and Engineering. Grossman was pursuing research into nanoporous membranes for water desalination. “We thought we could bring down the cost of desalination and improve access to clean water,” says Dave, who worked on the project as a graduate student in Grossman’s lab.

    There, she teamed up with Brent Keller PhD ’16, another Grossman graduate student and a 2016-17 ExxonMobil-MIT Energy Fellow, who was developing lab experiments to fabricate and test new materials. “We were early comrades in figuring out how to debug experiments or fix equipment,” says Keller, Via Separations’ co-founder and chief technology officer. “We were fast friends who spent a lot of time talking about science over burritos.”

    Dave went on to write her doctoral thesis on using graphene oxide for water desalination, but that turned out to be the wrong application of the technology from a business perspective, she says. “The cost of desalination doesn’t lie in the membrane materials,” she explains.

    So, after Dave and Keller graduated from MIT in 2016, they spent a lot of time talking to customers to learn more about the needs and opportunities for their new separation technology. This research led them to target the paper industry, because the environmental benefits of improving paper processing are enormous, Dave says. “The paper industry is particularly exciting because separation processes just in that industry account for more than 2 percent of U.S. energy consumption,” she says. “It’s a very concentrated, high-energy-use industry.”

    Most paper today is made by breaking down the chemical bonds in wood to create wood pulp, the primary ingredient of paper. This process generates a byproduct called black liquor, a toxic solution that was once simply dumped into waterways. To clean up this process, paper mills turned to boiling off the water from black liquor and recovering both water and chemicals for reuse in the pulping process. (Today, the most valuable way to use the liquor is as biomass feedstock to generate energy.) Via Separations plans to accomplish this same separation work by filtering black liquor through its graphene oxide membrane.

    “The advantage of graphene oxide is that it’s very robust,” Dave says. “It’s got carbon double bonds that hold together in a lot of environments, including at different pH levels and temperatures that are typically unfriendly to materials.”

    Such properties should also make the company’s membranes attractive to other industries that use membrane separation, Keller says, because today’s polymer membranes have drawbacks. “For most of the things we make — from plastics to paper and gasoline — those polymers will swell or react or degrade,” he says.

    Graphene oxide is significantly more durable, and Via Separations can customize the pores in the material to suit each industry’s application. “That’s our secret sauce,” Dave says, “modulating pore size while retaining robustness to operate in challenging environments.”

    “We’re building a catalog of products to serve different applications,” Keller says, noting that the next target market could be the food and beverage industry. “In that industry, instead of separating different corrosive paper chemicals from water, we’re trying to separate particular sugars and food ingredients from other things.”

    Future target customers include pharmaceutical companies, oil refineries, semiconductor manufacturers, and even carbon capture businesses.

    Scaling up

    Dave, Keller, and Grossman launched Via Separations in 2017 — with a lot of help from MIT. After the seed grant, in 2015, the founders received a year of funding and support from the J-WAFS Solutions program to explore markets and to develop their business plans. The company’s first capital investment came from The Engine, a venture firm founded by MIT to support “tough tech” companies (tech businesses with transformative potential but long and challenging paths to success). They also received advice and support from MIT’s Deshpande Center for Technological Innovation, Venture Mentoring Service, and Technology Licensing Office. In addition, Grossman continues to serve the company as chief scientist.

    “We were incredibly fortunate to be starting a company in the MIT entrepreneurial ecosystem,” Keller says, noting that The Engine support alone “probably shaved years off our progress.”

    Already, Via Separations has grown to employ 17 people, while significantly scaling up its product. “Our customers are producing thousands of gallons per minute,” Keller explains. “To process that much liquid, we need huge areas of membrane.”

    Via Separations’ manufacturing process, which is now capable of making more than 10,000 square feet of membrane in one production run, is a key competitive advantage, Dave says. The company rolls 300-400 square feet of membrane into a module, and modules can be combined as needed to increase filtration capacity.

    The goal, Dave says, is to contribute to a more sustainable world by making an environmentally beneficial product that makes good business sense. “What we do is make manufacturing things more energy-efficient,” she says. “We allow a paper mill or chemical facility to make more product using less energy and with lower costs. So, there is a bottom-line benefit that’s significant on an industrial scale.”

    Keller says he shares Dave’s goal of building a more sustainable future. “Climate change and energy are central challenges of our time,” he says. “Working on something that has a chance to make a meaningful impact on something so important to everyone is really fulfilling.”

    This article appears in the Spring 2021 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  •

    Amy Watterson: Model engineer

    “I love that we are doing something that no one else is doing.”

    Amy Watterson is excited when she talks about SPARC, the pilot fusion plant being developed by MIT spinoff Commonwealth Fusion Systems (CFS). Since being hired as a mechanical engineer at the Plasma Science and Fusion Center (PSFC) two years ago, Watterson has found her skills stretching to accommodate the multiple needs of the project.

    Fusion, which fuels the sun and stars, has long been sought as a carbon-free energy source for the world. For decades researchers have pursued the “tokamak,” a doughnut-shaped vacuum chamber where hot plasma can be contained by magnetic fields and heated to the point where fusion occurs. Sustaining the fusion reactions long enough to draw energy from them has been a challenge.

    Watterson is intimately aware of this difficulty. Much of her life she has heard the quip, “Fusion is 50 years away and always will be.” The daughter of PSFC research scientist Catherine Fiore, who headed the PSFC’s Office of Environment, Safety and Health, and Reich Watterson, an optical engineer working at the center, she had watched her parents devote years to making fusion a reality. Before entering Rensselaer Polytechnic Institute, she decided not to follow her parents into a field that might not produce results during her career.

    Working on SPARC has changed her mindset. Taking advantage of a novel high-temperature superconducting (HTS) tape, SPARC’s magnets will be compact while generating magnetic fields stronger, and producing more fusion power, than would be possible from other mid-sized tokamaks. It suggests that a high-field device producing net fusion gain is not 50 years away. SPARC is scheduled to begin operation in 2025.

    An education in modeling

    Watterson’s current excitement, and focus, is due to an approaching milestone for SPARC: a test of the Toroidal Field Magnet Coil (TFMC), a scaled prototype for the HTS magnets that will surround SPARC’s toroidal vacuum chamber. Its design and manufacture have been shaped by computer models and simulations. As part of a large research team, Watterson has received an education in modeling over the past two years.

    Computer models move scientific experiments forward by allowing researchers to predict what will happen to an experiment — or its materials — if a parameter is changed. Modeling a component of the TFMC, for example, researchers can test how it is affected by varying amounts of current, different temperatures or different materials. With this information they can make choices that will improve the success of the experiment.

    In preparation for the magnet testing, Watterson has modeled aspects of the cryogenic system that will circulate helium gas around the TFMC to keep it cold enough to remain superconducting. Taking into consideration the amount of cooling entering the system, the flow rate of the helium, the resistance created by valves and transfer lines, and other parameters, she can model how much helium flow will be necessary to guarantee the magnet stays cold enough. Adjusting a parameter can make the difference between a magnet remaining superconducting and becoming overheated or even damaged.
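    The core energy balance behind this kind of sizing question can be sketched in a few lines. This is a minimal, hypothetical illustration: the heat load, helium heat capacity, and temperatures below are placeholder values, not figures from the TFMC test.

```python
# Rough steady-state estimate of the helium mass flow needed to hold a
# superconducting coil below its operating temperature. All numbers are
# illustrative placeholders, not values from the actual TFMC cryo design.

def required_mass_flow(heat_load_w, cp_j_per_kg_k, t_in_k, t_max_k):
    """Mass flow (kg/s) such that the coolant absorbs heat_load_w while
    warming only from t_in_k at the inlet to t_max_k at the outlet."""
    delta_t = t_max_k - t_in_k
    if delta_t <= 0:
        raise ValueError("outlet temperature must exceed inlet temperature")
    return heat_load_w / (cp_j_per_kg_k * delta_t)

# Hypothetical numbers: 50 W heat leak, helium gas cp ~ 5,200 J/(kg*K),
# inlet at 15 K, magnet must stay below 20 K.
mdot = required_mass_flow(50.0, 5200.0, 15.0, 20.0)
print(f"{mdot * 1000:.2f} g/s")
```

    A real cryogenic model is far more involved, because (as noted below) helium's heat capacity, viscosity, and density all vary with temperature and pressure along the loop, coupling the equations together.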

    Watterson and her teammates have also modeled pressures and stress on the inside of the TFMC. Pumping helium through the coil to cool it down will add 20 atmospheres of pressure, which could create a degree of flex in elements of the magnet that are welded down. Modeling can help determine how much pressure a weld can sustain.

    “How thick does a weld need to be, and where should you put the weld so that it doesn’t break — that’s something you don’t want to leave until you’re finally assembling it,” says Watterson.

    Modeling the behavior of helium is particularly challenging because its properties change significantly as the pressure and temperature change.

    “A few degrees or a little pressure will affect the fluid’s viscosity, density, thermal conductivity, and heat capacity,” says Watterson. “The flow has different pressures and temperatures at different places in the cryogenic loop. You end up with a set of equations that are very dependent on each other, which makes it a challenge to solve.”

    Role model

    Watterson notes that her modeling depends on the contributions of colleagues at the PSFC, and praises the collaborative spirit among researchers and engineers, a community that now feels like family. Her teammates have been her mentors. “I’ve learned so much more on the job in two years than I did in four years at school,” she says.

    She realizes that having her mother as a role model in her own family has always made it easier for her to imagine becoming a scientist or engineer. Tracing her early passion for engineering to a middle school Lego robotics tournament, her eyes widen as she talks about the need for more female engineers, and the importance of encouraging girls to believe they are equal to the challenge.

    “I want to be a role model and tell them ‘I’m a successful engineer, you can be too.’ Something I run into a lot is that little girls will say, ‘I can’t be an engineer, I’m not cut out for that.’ And I say, ‘Well that’s not true. Let me show you. If you can make this Lego robot, then you can be an engineer.’ And it turns out they usually can.”

    Then, as if making an adjustment to one of her computer models, she continues.

    “Actually, they always can.”

  •

    A new way to detect the SARS-CoV-2 Alpha variant in wastewater

    Researchers from the Antimicrobial Resistance (AMR) interdisciplinary research group at the Singapore-MIT Alliance for Research and Technology (SMART), MIT’s research enterprise in Singapore, alongside collaborators from Biobot Analytics, Nanyang Technological University (NTU), and MIT, have successfully developed an innovative, open-source molecular detection method that is able to detect and quantify the B.1.1.7 (Alpha) variant of SARS-CoV-2. The breakthrough paves the way for rapid, inexpensive surveillance of other SARS-CoV-2 variants in wastewater.

    As the world continues to battle and contain Covid-19, the recent identification of SARS-CoV-2 variants with higher transmissibility and increased severity has made developing convenient variant tracking methods essential. Currently identified variants include the B.1.1.7 (Alpha) variant first identified in the United Kingdom and the B.1.617.2 (Delta) variant first detected in India.

    Wastewater surveillance has emerged as a critical public health tool to safely and efficiently track the SARS-CoV-2 pandemic in a non-intrusive manner, providing complementary information that enables health authorities to acquire actionable community-level information. Most recently, viral fragments of SARS-CoV-2 were detected in housing estates in Singapore through a proactive wastewater surveillance program. This information, alongside surveillance testing, allowed Singapore’s Ministry of Health to swiftly respond, isolate, and conduct swab tests as part of precautionary measures.

    However, detecting variants through wastewater surveillance is less commonplace due to challenges in existing technology. Next-generation sequencing for wastewater surveillance is time-consuming and expensive. Tests also lack the sensitivity required to detect low variant abundances in dilute and mixed wastewater samples due to inconsistent and/or low sequencing coverage.

    The method developed by the researchers is uniquely tailored to address these challenges and expands the utility of wastewater surveillance beyond testing for SARS-CoV-2, toward tracking the spread of SARS-CoV-2 variants of concern.

    Wei Lin Lee, research scientist at SMART AMR and first author on the paper adds, “This is especially important in countries battling SARS-CoV-2 variants. Wastewater surveillance will help find out the true proportion and spread of the variants in the local communities. Our method is sensitive enough to detect variants in highly diluted SARS-CoV-2 concentrations typically seen in wastewater samples, and produces reliable results even for samples which contain multiple SARS-CoV-2 lineages.”

    Led by Janelle Thompson, NTU associate professor, and Eric Alm, MIT professor and SMART AMR principal investigator, the team’s study, “Quantitative SARS-CoV-2 Alpha variant B.1.1.7 Tracking in Wastewater by Allele-Specific RT-qPCR” has been published in Environmental Science & Technology Letters. The research explains the innovative, open-source molecular detection method based on allele-specific RT-qPCR that detects and quantifies the B.1.1.7 (Alpha) variant. The developed assay, tested and validated in wastewater samples across 19 communities in the United States, is able to reliably detect and quantify low levels of the B.1.1.7 (Alpha) variant with low cross-reactivity, and at variant proportions down to 1 percent in a background of mixed SARS-CoV-2 viruses.

    Targeting spike protein mutations that are highly predictive of the B.1.1.7 (Alpha) variant, the method can be implemented using commercially available RT-qPCR protocols. Unlike commercially available products that use proprietary primers and probes for wastewater surveillance, the paper details the open-source method and its development that can be freely used by other organizations and research institutes for their work on wastewater surveillance of SARS-CoV-2 and its variants.
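    The quantification step in an allele-specific RT-qPCR workflow reduces to comparing the copy numbers recovered by the variant-specific and wild-type assays. The sketch below is a generic illustration of that arithmetic, not the paper's calibrated assay: the standard-curve slope and intercept, and the Ct values, are placeholder numbers.

```python
# Illustrative allele-specific qPCR quantification. Standard-curve
# parameters and Ct values are hypothetical, not from the published assay.

def ct_to_copies(ct, slope=-3.32, intercept=38.0):
    """Convert a qPCR cycle-threshold (Ct) value to a copy number using
    a standard curve of the form Ct = slope * log10(copies) + intercept.
    A slope of about -3.32 corresponds to ~100% amplification efficiency."""
    return 10 ** ((ct - intercept) / slope)

def variant_proportion(ct_variant, ct_wildtype):
    """Estimate the fraction of variant genomes in a mixed sample from
    the allele-specific and wild-type Ct values."""
    v = ct_to_copies(ct_variant)
    w = ct_to_copies(ct_wildtype)
    return v / (v + w)

# Hypothetical sample: the variant assay crosses threshold ~6.6 cycles
# after the wild-type assay, i.e. roughly 100x fewer variant copies,
# so the variant makes up about 1 percent of the mixture.
p = variant_proportion(ct_variant=31.6, ct_wildtype=25.0)
print(f"{p:.1%}")
```

    Detecting a variant at a 1 percent proportion, as the validated assay does, corresponds to resolving this kind of several-cycle Ct gap reliably in dilute wastewater extracts.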

    The breakthrough by the research team in Singapore is currently used by Biobot Analytics, an MIT startup and global leader in wastewater epidemiology headquartered in Cambridge, Massachusetts, serving states and localities throughout the United States. Using the method, Biobot Analytics is able to accept and analyze wastewater samples for the B.1.1.7 (Alpha) variant and plans to add additional variants to its analysis as methods are developed. For example, the SMART AMR team is currently developing specific assays that will be able to detect and quantify the B.1.617.2 (Delta) variant, which has recently been identified as a variant of concern by the World Health Organization.

    “Using the team’s innovative method, we have been able to monitor the B.1.1.7 (Alpha) variant in local populations in the U.S. — empowering leaders with information about Covid-19 trends in their communities and allowing them to make considered recommendations and changes to control measures,” says Mariana Matus PhD ’18, Biobot Analytics CEO and co-founder.

    “This method can be rapidly adapted to detect new variants of concern beyond B.1.1.7,” adds MIT’s Alm. “Our partnership with Biobot Analytics has translated our research into real-world impact beyond the shores of Singapore, aiding in the detection of Covid-19 and its variants and serving as an early warning system and guidance for policymakers as they trace infection clusters and consider suitable public health measures.”

    The research is carried out by SMART and supported by the National Research Foundation (NRF) Singapore under its Campus for Research Excellence And Technological Enterprise (CREATE) program.

    SMART was established by MIT in partnership with the National Research Foundation of Singapore (NRF) in 2007. SMART is the first entity in CREATE developed by NRF. SMART serves as an intellectual and innovation hub for research interactions between MIT and Singapore, undertaking cutting-edge research projects in areas of interest to both Singapore and MIT. SMART currently comprises an Innovation Center and five IRGs: AMR, Critical Analytics for Manufacturing Personalized-Medicine, Disruptive and Sustainable Technologies for Agricultural Precision, Future Urban Mobility, and Low Energy Electronic Systems.

    The AMR interdisciplinary research group is a translational research and entrepreneurship program that tackles the growing threat of antimicrobial resistance. By leveraging talent and convergent technologies across Singapore and MIT, AMR aims to develop multiple innovative and disruptive approaches to identify, respond to, and treat drug-resistant microbial infections. Through strong scientific and clinical collaborations, its goal is to provide transformative, holistic solutions for Singapore and the world.

  •

    A new approach to preventing human-induced earthquakes

    When humans pump large volumes of fluid into the ground, they can set off potentially damaging earthquakes, depending on the underlying geology. This has been the case in certain oil- and gas-producing regions, where wastewater, often mixed with oil, is disposed of by injecting it back into the ground — a process that has triggered sizable seismic events in recent years.

    Now MIT researchers, working with an interdisciplinary team of scientists from industry and academia, have developed a method to manage such human-induced seismicity, and have demonstrated that the technique successfully reduced the number of earthquakes occurring in an active oil field.

    Their results, appearing today in Nature, could help mitigate earthquakes caused by the oil and gas industry, not just from the injection of wastewater produced with oil, but also that produced from hydraulic fracturing, or “fracking.” The team’s approach could also help prevent quakes from other human activities, such as the filling of water reservoirs and aquifers, and the sequestration of carbon dioxide in deep geologic formations.

    “Triggered seismicity is a problem that goes way beyond producing oil,” says study lead author Bradford Hager, the Cecil and Ida Green Professor of Earth Sciences in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “This is a huge problem for society that will have to be confronted if we are to safely inject carbon dioxide into the subsurface. We demonstrated the kind of study that will be necessary for doing this.”

    The study’s co-authors include Ruben Juanes, professor of civil and environmental engineering at MIT, and collaborators from the University of California at Riverside, the University of Texas at Austin, Harvard University, and Eni, a multinational oil and gas company based in Italy.

    Safe injections

    Both natural and human-induced earthquakes occur along geologic faults, or fractures between two blocks of rock in the Earth’s crust. In stable periods, the rocks on either side of a fault are held in place by the pressures generated by surrounding rocks. But when a large volume of fluid is suddenly injected at high rates, it can upset a fault’s fluid stress balance. In some cases, this sudden injection can lubricate a fault and cause rocks on either side to slip and trigger an earthquake.

    The most common source of such fluid injections is from the oil and gas industry’s disposal of wastewater that is brought up along with oil. Field operators dispose of this water through injection wells that continuously pump the water back into the ground at high pressures.

    “There’s a lot of water produced with the oil, and that water is injected into the ground, which has caused a large number of quakes,” Hager notes. “So, for a while, oil-producing regions in Oklahoma had more magnitude 3 quakes than California, because of all this wastewater that was being injected.”

    In recent years, a similar problem arose in southern Italy, where injection wells on oil fields operated by Eni triggered microseisms in an area where large naturally occurring earthquakes had previously occurred. The company, looking for ways to address the problem, sought consultation from Hager and Juanes, both leading experts in seismicity and subsurface flows.

    “This was an opportunity for us to get access to high-quality seismic data about the subsurface, and learn how to do these injections safely,” Juanes says.

    Seismic blueprint

    The team made use of detailed information, accumulated by the oil company over years of operation in the Val d’Agri oil field, a region of southern Italy that lies in a tectonically active basin. The data included information about the region’s earthquake record, dating back to the 1600s, as well as the structure of rocks and faults, and the state of the subsurface corresponding to the various injection rates of each well.

    This video shows the change in stress on the geologic faults of the Val d’Agri field from 2001 to 2019, as predicted by a new MIT-derived model. Video credit: A. Plesch (Harvard University)

    This video shows small earthquakes occurring on the Costa Molina fault within the Val d’Agri field from 2004 to 2016. Each event is shown for two years fading from an initial bright color to the final dark color. Video credit: A. Plesch (Harvard University)

    The researchers integrated these data into a coupled subsurface flow and geomechanical model, which predicts how the stresses and strains of underground structures evolve as the volume of pore fluid, such as from the injection of water, changes. They connected this model to an earthquake mechanics model in order to translate the changes in underground stress and fluid pressure into a likelihood of triggering earthquakes. They then quantified the rate of earthquakes associated with various rates of water injection, and identified scenarios that were unlikely to trigger large quakes.

    When they ran the models using data from 1993 through 2016, the predictions of seismic activity matched with the earthquake record during this period, validating their approach. They then ran the models forward in time, through the year 2025, to predict the region’s seismic response to three different injection rates: 2,000, 2,500, and 3,000 cubic meters per day. The simulations showed that large earthquakes could be avoided if operators kept injection rates at 2,000 cubic meters per day — a flow rate comparable to a small public fire hydrant.
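    The physical criterion at the heart of such scenario comparisons is Mohr-Coulomb failure: injected fluid raises pore pressure, which reduces the effective normal stress clamping a fault shut. The toy sketch below illustrates the idea only; the friction coefficient, stresses, and the assumed linear link between injection rate and pore pressure are invented numbers, not the team's calibrated Val d'Agri model.

```python
# Toy Coulomb-failure check: injection raises pore pressure P, which
# lowers the effective normal stress on a fault. All constants are
# illustrative, not values from the actual Val d'Agri study.

MU = 0.6          # friction coefficient on the fault
TAU = 10.0e6      # shear stress resolved on the fault, Pa
SIGMA_N = 25.0e6  # total normal stress on the fault, Pa

def fault_slips(pore_pressure_pa):
    """Mohr-Coulomb criterion: the fault slips when shear stress
    exceeds the frictional resistance mu * (sigma_n - P)."""
    return TAU > MU * (SIGMA_N - pore_pressure_pa)

# Suppose steady pore pressure scales linearly with injection rate
# (a purely illustrative relation standing in for the coupled model).
for rate in (2000, 2500, 3000):           # m^3/day
    p = 4.0e6 + rate * 1800.0             # Pa
    print(rate, "m3/day ->", "slip risk" if fault_slips(p) else "stable")
```

    In the real study this pressure-to-stress link comes from the coupled flow and geomechanical model rather than a one-line formula, but the decision logic, comparing candidate injection rates against a failure threshold, is the same.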

    Eni field operators implemented the team’s recommended rate at the oil field’s single water injection well over a 30-month period between January 2017 and June 2019. In this time, the team observed only a few tiny seismic events, which coincided with brief periods when operators went above the recommended injection rate.

    “The seismicity in the region has been very low in these two-and-a-half years, with around four quakes of 0.5 magnitude, as opposed to hundreds of quakes, of up to 3 magnitude, that were happening between 2006 and 2016,” Hager says. 

    The results demonstrate that operators can successfully manage earthquakes by adjusting injection rates, based on the underlying geology. Juanes says the team’s modeling approach may help to prevent earthquakes related to other processes, such as the building of water reservoirs and the sequestration of carbon dioxide — as long as there is detailed information about a region’s subsurface.

    “A lot of effort needs to go into understanding the geologic setting,” says Juanes, who notes that, if carbon sequestration were carried out on depleted oil fields, “such reservoirs could have this type of history, seismic information, and geologic interpretation that you could use to build similar models for carbon sequestration. We show it’s at least possible to manage seismicity in an operational setting. And we offer a blueprint for how to do it.”

    This research was supported, in part, by Eni.

  •

    What will happen to sediment plumes associated with deep-sea mining?

    In certain parts of the deep ocean, scattered across the seafloor, lie baseball-sized rocks layered with minerals accumulated over millions of years. A region of the central Pacific, called the Clarion Clipperton Fracture Zone (CCFZ), is estimated to contain vast reserves of these rocks, known as “polymetallic nodules,” that are rich in nickel and cobalt  — minerals that are commonly mined on land for the production of lithium-ion batteries in electric vehicles, laptops, and mobile phones.

    As demand for these batteries rises, efforts are moving forward to mine the ocean for these mineral-rich nodules. Such deep-sea-mining schemes propose sending down tractor-sized vehicles to vacuum up nodules and send them to the surface, where a ship would clean them and discharge any unwanted sediment back into the ocean. But the impacts of deep-sea mining — such as the effect of discharged sediment on marine ecosystems and how these impacts compare to traditional land-based mining — are currently unknown.

    Now oceanographers at MIT, the Scripps Institution of Oceanography, and elsewhere have carried out an experiment at sea for the first time to study the turbulent sediment plume that mining vessels would potentially release back into the ocean. Based on their observations, they developed a model that makes realistic predictions of how a sediment plume generated by mining operations would be transported through the ocean.

    The model predicts the size, concentration, and evolution of sediment plumes under various marine and mining conditions. These predictions, the researchers say, can now be used by biologists and environmental regulators to gauge whether and to what extent such plumes would impact surrounding sea life.

    “There is a lot of speculation about [deep-sea-mining’s] environmental impact,” says Thomas Peacock, professor of mechanical engineering at MIT. “Our study is the first of its kind on these midwater plumes, and can be a major contributor to international discussion and the development of regulations over the next two years.”

    The team’s study appears today in Nature Communications: Earth and Environment.

    Peacock’s co-authors at MIT include lead author Carlos Muñoz-Royo, Raphael Ouillon, Chinmay Kulkarni, Patrick Haley, Chris Mirabito, Rohit Supekar, Andrew Rzeznik, Eric Adams, Cindy Wang, and Pierre Lermusiaux, along with collaborators at Scripps, the U.S. Geological Survey, and researchers in Belgium and South Korea.


    Out to sea

    Current deep-sea-mining proposals are expected to generate two types of sediment plumes in the ocean: “collector plumes” that vehicles generate on the seafloor as they drive around collecting nodules 4,500 meters below the surface; and possibly “midwater plumes” that are discharged through pipes that descend 1,000 meters or more into the ocean’s aphotic zone, where sunlight rarely penetrates.

    In their new study, Peacock and his colleagues focused on the midwater plume and how the sediment would disperse once discharged from a pipe.

    “The science of the plume dynamics for this scenario is well-founded, and our goal was to clearly establish the dynamic regime for such plumes to properly inform discussions,” says Peacock, who is the director of MIT’s Environmental Dynamics Laboratory.

    To pin down these dynamics, the team went out to sea. In 2018, the researchers boarded the research vessel Sally Ride and set sail 50 kilometers off the coast of Southern California. They brought with them equipment designed to discharge sediment 60 meters below the ocean’s surface.  

    “Using foundational scientific principles from fluid dynamics, we designed the system so that it fully reproduced a commercial-scale plume, without having to go down to 1,000 meters or sail out several days to the middle of the CCFZ,” Peacock says.

    Over one week the team ran a total of six plume experiments, using novel sensor systems such as a Phased Array Doppler Sonar (PADS) and an epsilometer developed by Scripps scientists to monitor where the plumes traveled and how they evolved in shape and concentration. The collected data revealed that the sediment, when initially pumped out of a pipe, was a highly turbulent cloud of suspended particles that mixed rapidly with the surrounding ocean water.

    “There was speculation this sediment would form large aggregates in the plume that would settle relatively quickly to the deep ocean,” Peacock says. “But we found the discharge is so turbulent that it breaks the sediment up into its finest constituent pieces, and thereafter it becomes dilute so quickly that the sediment then doesn’t have a chance to stick together.”

    Dilution

    The team had previously developed a model to predict the dynamics of a plume that would be discharged into the ocean. When they fed the experiment’s initial conditions into the model, it produced the same behavior that the team observed at sea, proving the model could accurately predict plume dynamics within the vicinity of the discharge.

    The researchers used these results to provide the correct input for simulations of ocean dynamics to see how far currents would carry the initially released plume.

    “In a commercial operation, the ship is always discharging new sediment. But at the same time the background turbulence of the ocean is always mixing things. So you reach a balance. There’s a natural dilution process that occurs in the ocean that sets the scale of these plumes,” Peacock says. “What is key to determining the extent of the plumes is the strength of the ocean turbulence, the amount of sediment that gets discharged, and the environmental threshold level at which there is impact.”

    Based on their findings, the researchers have developed formulae to calculate the scale of a plume depending on a given environmental threshold. For instance, if regulators determine that a certain concentration of sediments could be detrimental to surrounding sea life, the formula can be used to calculate how far a plume above that concentration would extend, and what volume of ocean water would be impacted over the course of a 20-year nodule mining operation.
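The balance Peacock describes — continuous discharge diluted by background ocean turbulence until concentration drops below some regulatory threshold — can be illustrated with a toy steady-state model. The sketch below uses the textbook advection-diffusion result for a continuous point release in a uniform current; the discharge rate, turbulent diffusivity, and threshold values are illustrative assumptions, not figures or formulae from the study itself.

```python
import math

def centerline_concentration(Q, K, x):
    """Steady-state centerline concentration (kg/m^3) at downstream
    distance x (m) for a continuous point release of Q kg/s in a
    uniform current with isotropic turbulent diffusivity K (m^2/s).
    Classic advection-diffusion result: C = Q / (4*pi*K*x)."""
    return Q / (4.0 * math.pi * K * x)

def plume_extent(Q, K, c_threshold):
    """Downstream distance (m) at which the centerline concentration
    falls below an environmental threshold c_threshold (kg/m^3)."""
    return Q / (4.0 * math.pi * K * c_threshold)

# Illustrative (hypothetical) parameter values:
Q = 5.0      # sediment discharge rate, kg/s
K = 0.1      # background turbulent diffusivity, m^2/s
c_th = 1e-4  # regulatory threshold, kg/m^3 (0.1 mg/L)

x_star = plume_extent(Q, K, c_th)
print(f"Plume exceeds threshold out to ~{x_star / 1000:.0f} km downstream")
```

The point of the toy model is the structure of the answer: the extent of the above-threshold region scales directly with the discharge rate and inversely with both the turbulence strength and the chosen threshold, which is exactly the set of controlling factors Peacock identifies.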

    “At the heart of the environmental question surrounding deep-sea mining is the extent of sediment plumes,” Peacock says. “It’s a multiscale problem, from micron-scale sediments, to turbulent flows, to ocean currents over thousands of kilometers. It’s a big jigsaw puzzle, and we are uniquely equipped to work on that problem and provide answers founded in science and data.”

    The team is now working on collector plumes, having recently returned from several weeks at sea to perform the first environmental monitoring of a nodule collector vehicle in the deep ocean in over 40 years.

    This research was supported in part by the MIT Environmental Solutions Initiative, the UC Ship Time Program, the MIT Policy Lab, the 11th Hour Project of the Schmidt Family Foundation, the Benioff Ocean Initiative, and Fundación Bancaria “la Caixa.” More

  • in

    Investigating materials for safe, secure nuclear power

    Michael Short came to MIT in the fall of 2001 as an 18-year-old first-year who grew up on Boston’s North Shore. He immediately felt at home, so much so that he’s never really left. It’s not that Short has no interest in exploring the world beyond the confines of the Institute, as he is an energetic and venturesome fellow. It’s just that almost everything he hopes to achieve in his scientific career can, in his opinion, be best pursued at this university.

    Last year — after collecting four MIT degrees and joining the faculty of the Department of Nuclear Science and Engineering (NSE) in 2013 — he was promoted to the status of tenured associate professor.

    Short’s enthusiasm for MIT began early in high school when he attended weekend programs that were mainly taught by undergraduates. “It was a program filled with my kind of people,” he recalls. “My high school was very good, but this was at a different level — at the level I was seeking and hoping to achieve. I felt more at home here than I did in my hometown, and the Saturdays at MIT were the highlight of my week.” He loved his four-year experience as an MIT undergraduate, including the research he carried out in the Uhlig Corrosion Laboratory, and he wasn’t ready for it to end.

    After graduating in 2005 with two BS degrees (one in NSE and another in materials science and engineering), he took on some computer programming jobs and worked half time in the Uhlig lab under the supervision of Ronald Ballinger, a professor in both NSE and the Department of Materials Science and Engineering. Short soon realized that computer programming was not for him, and he started graduate studies with Ballinger as his advisor, earning a master’s and a PhD in nuclear science and engineering in 2010.

    Even as an undergraduate, Short was convinced that nuclear power was essential to our nation’s (and the world’s) energy future, especially in light of the urgent need to move toward carbon-free sources of power. During his first year, he was told by Ballinger that the main challenge confronting nuclear power was to find materials, and metals in particular, that could last long enough in the face of radiation and the chemically destructive effects of corrosion.

    Those words, persuasively stated, led him to his double major.  “Materials and radiation damage have been at the core of my research ever since,” Short says. “Remarkably, the stuff I started studying in my first year of college is what I do today, though I’ve extended this work in many directions.”

    Corrosion has proven to be an unexpectedly rich subject. “The traditional view is to expose metals to various things and see what happens — ‘cook and look,’ as it’s called,” he says. “A lot of folks view it that way, but it’s actually much more complex. In fact, some members of our own faculty don’t want to touch corrosion because it’s too complicated, too dirty. But that’s what I like about it.”

    In a 2020 paper published in Nature Communications, Short, his student Weiyue Zhou, and other colleagues made a surprising discovery. “Most people think radiation is bad and makes everything worse, but that’s not always the case,” Short maintains. His team found a specific set of conditions under which a metal (a nickel-chromium alloy) performs better when it is irradiated while undergoing corrosion in a molten salt mixture. Their finding is relevant, he adds, “because these are the conditions under which people are hoping to run the next generation of nuclear reactors.” Leading candidates for alternatives to today’s water-cooled reactors are molten salt and liquid metal (specifically liquid lead and sodium) cooled reactors. To this end, Short and his colleagues are currently carrying out similar experiments involving the irradiation of metal alloys immersed in liquid lead.

    Meanwhile, Short has pursued another multiyear project, trying to devise a new standard to serve as “a measurable unit of radiation damage.” In fact, these were the very words he wrote on his research statement when applying for his first faculty position at MIT, although he admits that he didn’t know then how to realize that goal. But the effort is finally paying off, as Short and his collaborators are about to submit their first big paper on the topic. He’s found that you can’t reduce radiation damage to a single number, which is what people have tried to do in the past, because that’s too simple. Instead, their new standard relates to the density of defects — the number of radiation-induced defects (or unintentional changes to the lattice structure) per unit volume for a given material.

    “Our approach is based on a theory that everyone agrees on — that defects have energy,” Short explains. However, many people told him and his team that the amount of energy stored within those defects would be too small to measure. But that just spurred them to try harder, making measurements at the microjoule level, at the very limits of detection.
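The scale of that measurement challenge can be seen with a back-of-envelope calculation: total stored energy is just defect density times energy per defect times sample volume. Every number in the sketch below (defect density, energy per defect, sample size) is an illustrative assumption, not a measurement from Short's work, but plausible values do land in the microjoule range he describes.

```python
# Back-of-envelope estimate of energy stored in radiation-induced defects.
# All parameter values are illustrative assumptions.
EV_TO_J = 1.602176634e-19  # joules per electronvolt (exact, by SI definition)

def stored_energy_joules(defect_density_per_m3, energy_per_defect_eV, volume_m3):
    """Total energy (J) stored in lattice defects of a sample:
    density x energy-per-defect x volume."""
    return defect_density_per_m3 * energy_per_defect_eV * EV_TO_J * volume_m3

density = 1e22   # defects per cubic meter (hypothetical)
e_defect = 5.0   # eV stored per defect (hypothetical)
volume = 1e-9    # a 1 mm^3 sample

E = stored_energy_joules(density, e_defect, volume)
print(f"Stored energy: {E * 1e6:.1f} microjoules")
```

Even with ten billion trillion defects per cubic meter, a millimeter-scale sample stores only a few microjoules — hence the skepticism Short's team had to overcome about whether the signal was measurable at all.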

    Short is convinced that their new standard will become “universally useful, but it will take years of testing on many, many materials followed by more years of convincing people using the classic method: Repeat, repeat, repeat, making sure that each time you get the same result. It’s the unglamorous side of science, but that’s the side that really matters.”

    The approach has already led Short, in collaboration with NSE proliferation expert Scott Kemp, into the field of nuclear security. Equipped with new insights into the signatures left behind by radiation damage, students co-supervised by Kemp and Short have devised methods for determining how much fissionable material has passed through a uranium enrichment facility, for example, by scrutinizing the materials exposed to these radioactive substances. “I never thought my preliminary work on corrosion experiments as an undergraduate would lead to this,” Short says.

    He has also turned his attention to “microreactors” — nuclear reactors with power ratings as small as a single megawatt, as compared to the 1,000-megawatt behemoths of today. Flexibility in the size of future power plants is essential to the economic viability of nuclear power, he insists, “because nobody wants to pay $10 billion for a reactor now, and I don’t blame them.”

    But the proposed microreactors, he says, “pose new material challenges that I want to solve. It comes down to cramming more material into a smaller volume, and we don’t have a lot of knowledge about how materials perform at such high densities.” Short is currently conducting experiments with the Idaho National Laboratory, irradiating possible microreactor materials to see how they change using a laser technique, transient grating spectroscopy (TGS), which his MIT group has had a big hand in advancing.

    It’s been an exhilarating 20 years at MIT for Short, and he has even more ambitious goals for the next 20 years. “I’d like to be one of those who came up with a way to verify the Iran nuclear deal and thereby helped clamp down on nuclear proliferation worldwide,” he says. “I’d like to choose the materials for our first power-generating nuclear fusion reactors. And I’d like to have influenced perhaps 50 to 100 former students who chose to stay in science because they truly enjoy it.

    “I see my job as creating scientists, not science,” he says, “though science is, of course, a convenient byproduct.” More

  • in

    A material difference

    Eesha Khare has always seen a world of matter. The daughter of a hardware engineer and a biologist, she has an insatiable interest in what substances — both synthetic and biological — have in common. Not surprisingly, that perspective led her to the study of materials.

    “I recognized early on that everything around me is a material,” she says. “How our phones respond to touch, how trees give us both structural wood and foldable paper, or how we are able to build tall skyscrapers with steel and glass, it all comes down to the fundamentals: This is materials science and engineering.”

    As a rising fourth-year PhD student in the MIT Department of Materials Science and Engineering (DMSE), Khare now studies the metal-coordination bonds that allow mussels to bind to rocks along turbulent coastlines. But Khare’s scientific enthusiasm has also led to expansive interests from science policy to climate advocacy and entrepreneurship.

    A material world

    A Silicon Valley native, Khare recalls vividly how excited she was about science as a young girl, both at school and at myriad science fairs and high school laboratory internships. One such internship at the University of California at Santa Cruz introduced her to the study of nanomaterials, or materials that are smaller than a single human cell. The project piqued her interest in how research could lead to energy-storage applications, and she began to ponder the connections between materials, science policy, and the environment.

    As an undergraduate at Harvard University, Khare pursued a degree in engineering sciences and chemistry while also working at the Harvard Kennedy School Institute of Politics. There, she grew fascinated by environmental advocacy in the policy space, working for then-professor Gina McCarthy, who is currently serving in the Biden administration as the first-ever White House climate advisor.

    Following her academic explorations in college, Khare wanted to consider science in a new light before pursuing her doctorate in materials science and engineering. She deferred her program acceptance at MIT in order to attend Cambridge University in the U.K., where she earned a master’s degree in the history and philosophy of science. “Especially in a PhD program, it can often feel like your head is deep in the science as you push new research frontiers, but I wanted to take a step back and be inspired by how scientists in the past made their discoveries,” she says.

    Her experience at Cambridge was both challenging and informative, but Khare quickly found that her mechanistic curiosity remained persistent — a realization that came in the form of a biological material.

    “My very first master’s research project was about environmental pollution indicators in the U.K., and I was looking specifically at lichen to understand the social and political reasons why they were adopted by the public as pollution indicators,” Khare explains. “But I found myself wondering more about how lichen can act as pollution indicators. And I found that to be quite similar for most of my research projects: I was more interested in how the technology or discovery actually worked.”

    Enthusiasm for innovation

    Fittingly, these bioindicators confirmed for her that studying materials at MIT was the right course. Now Khare works on a different organism altogether, conducting research on the metal-coordination chemical interactions of a biopolymer secreted by mussels.

    “Mussels secrete this thread and can adhere to ocean walls. So, when ocean waves come, mussels don’t get dislodged that easily,” Khare says. “This is partly because of how metal ions in this material bind to different amino acids in the protein. There’s no input from the mussel itself to control anything there; all the magic is in this biological material that is not only very sticky, but also doesn’t break very readily, and if you cut it, it can re-heal that interface as well! If we could better understand and replicate this biological material in our own world, we could have materials self-heal and never break and thus eliminate so much waste.”

    To study this natural material, Khare combines computational and experimental techniques, experimentally synthesizing her own biopolymers and studying their properties with in silico molecular dynamics. Her co-advisors — Markus Buehler, the Jerry McAfee Professor of Engineering in Civil and Environmental Engineering, and Niels Holten-Andersen, professor of materials science and engineering — have embraced this dual approach to her project, as well as her abundant enthusiasm for innovation.

    Khare likes to take one exploratory course per semester, and a recent offering in the MIT Sloan School of Management inspired her to pursue entrepreneurship. These days she is spending much of her free time on a startup called Taxie, formed with fellow MIT students after taking the course 15.390 (New Enterprises). Taxie attempts to electrify the rideshare business by making electric rental cars available to rideshare drivers. Khare hopes this project will initiate some small first steps in making the ridesharing industry environmentally cleaner — and in democratizing access to electric vehicles for rideshare drivers, who often hail from lower-income or immigrant backgrounds.

    “There are a lot of goals thrown around for reducing emissions or helping our environment. But we are slowly getting physical things on the road, physical things to real people, and I like to think that we are helping to accelerate the electric transition,” Khare says. “These small steps are helpful for learning, at the very least, how we can make a transition to electric or to a cleaner industry.”

    Alongside her startup work, Khare has pursued a number of other extracurricular activities at MIT, including co-organizing her department’s Student Application Assistance Program and serving on DMSE’s Diversity, Equity, and Inclusion Council. Her varied interests also have led to a diverse group of friends, which suits her well, because she is a self-described “people-person.”

    In a year where maintaining connections has been more challenging than usual, Khare has focused on the positive, spending her spring semester with family in California and practicing Bharatanatyam, a form of Indian classical dance, over Zoom. As she looks to the future, Khare hopes to bring even more of her interests together, like materials science and climate.

    “I want to understand the energy and environmental sector at large to identify the most pressing technology gaps and how I can use my knowledge to contribute. My goal is to figure out where I can personally make a difference and where it can have a bigger impact to help our climate,” she says. “I like being outside of my comfort zone.” More

  • in

    Reducing emissions by decarbonizing industry

    A critical challenge in meeting the Paris Agreement’s long-term goal of keeping global warming well below 2 degrees Celsius is to vastly reduce carbon dioxide (CO2) and other greenhouse gas emissions generated by the most energy-intensive industries. According to a recent report by the International Energy Agency, these industries — cement, iron and steel, chemicals — account for about 20 percent of global CO2 emissions. Emissions from these industries are notoriously difficult to abate because, in addition to emissions associated with energy use, a significant portion of industrial emissions come from the process itself.

    For example, in the cement industry, about half the emissions come from the decomposition of limestone into lime and CO2. While a shift to zero-carbon energy sources such as solar or wind-powered electricity could lower CO2 emissions in the power sector, there are no easy substitutes for emissions-intensive industrial processes.

    Enter industrial carbon capture and storage (CCS). This technology, which extracts point-source carbon emissions and sequesters them underground, has the potential to remove 90 to 99 percent of CO2 emissions from an industrial facility, including both energy-related and process emissions. That raises the question: Might CCS alone enable hard-to-abate industries to continue to grow while eliminating nearly all of the CO2 emissions they generate?

    The answer is an unequivocal yes in a new study in the journal Applied Energy co-authored by researchers at the MIT Joint Program on the Science and Policy of Global Change, MIT Energy Initiative, and ExxonMobil.

    Using an enhanced version of the MIT Economic Projection and Policy Analysis (EPPA) model that represents different industrial CCS technology choices — and assuming that CCS is the only greenhouse gas emissions mitigation option available to hard-to-abate industries — the study assesses the long-term economic and environmental impacts of CCS deployment under a climate policy aimed at capping the rise in average global surface temperature at 2 C above preindustrial levels.

    The researchers find that absent industrial CCS deployment, the global costs of implementing the 2 C policy are higher by 12 percent in 2075 and 71 percent in 2100, relative to policy costs with CCS. They conclude that industrial CCS enables continued growth in the production and consumption of energy-intensive goods from hard-to-abate industries, along with dramatic reductions in the CO2 emissions they generate. Their projections show that as industrial CCS gains traction mid-century, this growth occurs globally as well as within geographical regions (primarily in China, Europe, and the United States) and the cement, iron and steel, and chemical sectors.

    “Because it can enable deep reductions in industrial emissions, industrial CCS is an essential mitigation option in the successful implementation of policies aligned with the Paris Agreement’s long-term climate targets,” says Sergey Paltsev, the study’s lead author and a deputy director of the MIT Joint Program and senior research scientist at the MIT Energy Initiative. “As the technology advances, our modeling approach offers decision-makers a pathway for projecting the deployment of industrial CCS across industries and regions.”

    But such advances will not take place without substantial, ongoing funding.

    “Sustained government policy support across decades will be needed if CCS is to realize its potential to promote the growth of energy-intensive industries and a stable climate,” says Howard Herzog, a co-author of the study and senior research engineer at the MIT Energy Initiative.

    The researchers also find that advanced CCS options such as cryogenic carbon capture (CCC), in which extracted CO2 is cooled to solid form using far less power than conventional coal- and gas-fired CCS technologies, could help expand the use of CCS in industrial settings through further production cost and emissions reductions.

    The study was supported by sponsors of the MIT Joint Program and by ExxonMobil through its membership in the MIT Energy Initiative. More