More stories

  • 3 Questions: Robert Stoner unpacks US climate and infrastructure laws

    This month, the 2022 United Nations Climate Change Conference (COP27) takes place in Sharm El Sheikh, Egypt, bringing together governments, experts, journalists, industry, and civil society to discuss climate action that would enable countries to collectively and sharply limit anthropogenic climate change. As MIT Energy Initiative Deputy Director for Science and Technology Robert Stoner attends the conference, he takes a moment to speak about the climate and infrastructure laws enacted in the last year in the United States, and about the impact these laws can have on the global energy transition.

    Q: COP27 is now underway. Can you set the scene?

    A: There’s a lot of interest among vulnerable countries about compensation for the impacts climate change has had on them, or “loss and damage,” a topic that the United States refused to address last year at COP26, for fear of opening up a floodgate and leaving U.S. taxpayers exposed to unlimited liability for our past (and future) emissions. This is a crucial issue of fairness for developed countries — and, well, of acknowledging our common humanity. But in a sense, it’s also a sideshow, and addressing it won’t prevent a climate catastrophe — we really need to focus on mitigation. With the passage of the bipartisan Infrastructure Investment and Jobs Act and the Inflation Reduction Act (IRA), the United States is now in a strong position to twist some arms. These laws are largely about subsidizing the deployment of low-carbon technologies — pretty much all of them. We’re going to do a lot in the United States in the next decade that will lead to dramatic cost reductions for these technologies and enable other countries with fewer resources to adopt them as well. It’s exactly the leadership role the United States has needed to assume. Now we have the opportunity to rally the rest of the world and get other countries to commit to more ambitious decarbonization goals, and to build practical programs that take advantage of the investable pathways we’re going to create for public and private actors.

    But that alone won’t get us there — money is still a huge problem, especially in emerging markets and developing countries. And I don’t think the institutions we rely on to help these countries fund infrastructure — energy and everything else — are adequately funded. Nor do these institutions have the right structures, incentives, and staffing to fund low-carbon development in these countries rapidly enough or on the necessary scale. I’m talking about the World Bank, for instance, but the other multilateral organizations have similar issues. I frankly don’t think the multilaterals can be reformed or sufficiently redirected on a short enough time frame. We definitely need new leadership for these organizations, and I think we probably need to quickly establish new multilaterals with new people, more money, and a clarity of purpose that is likely beyond what can be achieved incrementally. I don’t know if this is going to be an active public discussion at COP27, but I hope it takes place somewhere soon. Given the strong role our government plays in financing and selecting the leadership of these institutions, perhaps this is another opportunity for the United States to demonstrate courage and leadership.

    Q: What “investable pathways” are you talking about?

    A: Well, the pathways we’re implicitly trying to pursue with the Infrastructure Act and IRA are pretty clear, and I’ll come back to them. But first let me describe the landscape: There are three main sources of demand for energy in the economy — industry (meaning chemical production, fuel for electricity generation, cement production, materials and manufacturing, and so on), transportation (cars, trucks, ships, planes, and trains), and buildings (for heating and cooling, mostly). That’s about it, and these three sectors account for 75 percent of our total greenhouse gas emissions. So the pathways are all about how to decarbonize these three end-use sectors. There are a lot of technologies — some that exist, some that don’t — that will have to be brought to bear. And so it can be a little overwhelming to try to imagine how it will all transpire, but it’s pretty clear at a high level what our options are:

    First, generate a lot of low-carbon electricity and electrify as many industrial processes, vehicles, and building heating systems as we can.
    Second, develop and deploy at massive scale technologies that can capture carbon dioxide from smokestacks, or the air, and put it somewhere that it can never escape from — in other words, carbon capture and sequestration, or CCS.
    Third, for end uses like aviation that really need to use fuels because of their extraordinary energy density, develop low-carbon alternatives to fossil fuels.
    And fourth is energy efficiency across the board — but I don’t really count that as a separate pathway per se.
    So, by “investable pathways” I mean specific ways to pursue these options that will attract investors. What the Infrastructure Act and the IRA do is deploy carrots (in the form of subsidies) in a variety of ways to close the gap between what it costs to deploy technologies like CCS that aren’t yet at a commercial stage because they’re immature, and what energy markets will tolerate. A similar situation occurs for low-carbon production of hydrogen, one of the leading low-carbon fuel candidates. We can make it by splitting water with electricity (electrolysis), but that costs too much with present-day technology; or we can make it more cheaply by separating it from methane (which is what natural gas mainly is), but that creates CO2 that has to be transported and sequestered somewhere. And then we have to store the hydrogen until we’re ready to use it, and transport it by pipeline to the industrial facilities where it will be used. That requires infrastructure that doesn’t exist — pipelines, compression stations, big tanks! Come to think of it, the demand for all that hydrogen doesn’t exist either — at least not if industry has to pay what it actually costs.

    So, one very important thing these new acts do is subsidize production of hydrogen in various ways — and subsidize the creation of a CCS industry. The other thing they do is subsidize the deployment at enormous scale of low-carbon energy technologies. Some of them are already pretty cheap, like solar and wind, but they need to be supported by a lot of storage on the grid (which we don’t yet have) and by other sorts of grid infrastructure that, again, don’t exist. So, they now get subsidized, too, along with other carbon-free and low-carbon generation technologies — basically all of them. The idea is that by stimulating at-scale deployment of all these established and emerging technologies, and funding demonstrations of novel infrastructure — effectively lowering the cost of supply of low-carbon energy in the form of electricity and fuels — we will draw out the private sector to build out much more of the connective infrastructure and invest in new industrial processes, new home heating systems, and low-carbon transportation. This subsidized build-out will take place over a decade and then phase out as costs fall — hopefully, leaving the foundation for a thriving low-carbon energy economy in its wake, along with crucial technologies and knowledge that will benefit the whole world.

    Q: Is all of the federal investment in energy infrastructure in the United States relevant to the energy crisis in Europe right now?

    A: Not in a direct way — Europe faces a near-term catastrophe and a long-term challenge that is in many ways more difficult than ours, because Europe doesn’t have the level of primary energy resources, like oil and gas, that we have in abundance. Energy costs more in Europe, especially absent Russian pipelines. In a way, the narrowing of Europe’s options creates an impetus to invest in low-carbon technologies sooner than otherwise. The result either way will be expensive energy and quite a lot of economic suffering for years. The near-term challenge is to protect people from high energy prices. The big spikes in electricity prices we see now are driven by the natural gas market disruption, which will eventually dissipate as new sources of electricity come online (Sweden, for example, just announced a plan to develop new nuclear, and we’re seeing other countries like Germany soften their stance on nuclear) — and gas markets will sort themselves out. Meanwhile, governments are trying to shield their people with electricity price caps and other subsidies, but that’s enormously burdensome.

    The EU recently announced gas price caps for imported gas to try to eliminate price-gouging by importers and reduce the subsidy burden. That may help to lower downstream prices, or it may make matters worse by reducing the flow of gas into the EU and fueling scarcity pricing, ultimately adding to the subsidy burden. A lot of people are quite reasonably suggesting that if electricity prices are subject to crazy behavior in gas markets, then why not disconnect from the grid and self-generate? Wouldn’t that also help reduce demand for gas overall and also reduce CO2 emissions? It would. But it’s expensive to put solar panels on your roof and batteries in your basement — so for those rich enough to do this, it would lead to higher average electricity costs that would live on far into the future, even when grid prices eventually come down.

    So, an interesting idea is taking hold, with considerable encouragement from national governments — the idea of “energy communities”: basically, towns or cities that encourage local firms and homeowners to install solar and batteries, and make some sort of business arrangement with the local utility to allow the community to disconnect from the national grid at times of high prices and self-supply — in other words, use the utility’s wires to sell locally generated power locally. It’s interesting to think about — it takes less battery storage to handle the intermittency of solar when you have a lot of generators and consumers, so forming a community helps lower costs, and with a good deal from the utility for using their wires, it might not be that much more expensive. And of course, when the national grid is working well and prices are normal, the community would reconnect and buy power cheaply, while selling back its self-generated power to the grid. There are also potentially important social benefits that might accrue in these energy communities. It’s not a dumb idea, and we’ll see some interesting experimentation in this area in the coming years — as usual, the Germans are enthusiastic!
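    The storage-pooling intuition here can be checked with a quick simulation. This is an illustrative toy model, not data from any actual energy community: each home’s solar output is treated as a shared clear-sky level plus independent cloud-driven noise, and pooling across many homes shrinks the relative fluctuations by roughly the square root of the number of homes.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n_steps, n_homes = 1000, 50

    # Toy model: each home's solar output is a shared clear-sky level (1.0)
    # plus independent cloud-driven noise.
    outputs = 1.0 + rng.normal(0.0, 0.3, (n_steps, n_homes))

    single = outputs[:, 0]            # one home going it alone
    pooled = outputs.mean(axis=1)     # community-average output per home

    # Relative variability (std/mean): pooling across 50 homes cuts the
    # fluctuations by about sqrt(50), so less battery capacity is needed
    # per home to ride through the same dips.
    print(round(single.std() / single.mean(), 3))
    print(round(pooled.std() / pooled.mean(), 3))
    ```

    With independent noise the pooled profile is far smoother, which is exactly why aggregating many rooftops reduces the battery capacity needed per household.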

  • Advancing the energy transition amidst global crises

    “The past six years have been the warmest on the planet, and our track record on climate change mitigation is drastically short of what it needs to be,” said Robert C. Armstrong, MIT Energy Initiative (MITEI) director and the Chevron Professor of Chemical Engineering, introducing MITEI’s 15th Annual Research Conference.

    At the symposium, participants from academia, industry, and finance acknowledged the deepening difficulties of decarbonizing a world rocked by geopolitical conflicts and suffering from supply chain disruptions, energy insecurity, inflation, and a persistent pandemic. In spite of this grim backdrop, the conference offered evidence of significant progress in the energy transition. Researchers provided glimpses of a low-carbon future, presenting advances in such areas as long-duration energy storage, carbon capture, and renewable technologies.

    In his keynote remarks, Ernest J. Moniz, the Cecil and Ida Green Professor of Physics and Engineering Systems Emeritus, founding director of MITEI, and former U.S. secretary of energy, highlighted “four areas that have materially changed in the last year” that could shake up, and possibly accelerate, efforts to address climate change.

    Extreme weather seems to be propelling the public and policy makers of both U.S. parties toward “convergence … at least in recognition of the challenge,” Moniz said. He perceives a growing consensus that climate goals will require — in diminishing order of certainty — firm (always-on) power to complement renewable energy sources, a fuel (such as hydrogen) flowing alongside electricity, and removal of atmospheric carbon dioxide (CO2).

    Russia’s invasion of Ukraine, with its “weaponization of natural gas” and global energy impacts, underscores the idea that climate, energy security, and geopolitics “are now more or less recognized widely as one conversation.” Moniz pointed as well to new U.S. laws on climate change and infrastructure that will amplify the role of science and technology and “address the drive to technological dominance by China.”

    The rapid transformation of energy systems will require a comprehensive industrial policy, Moniz said. Government and industry must select and rapidly develop low-carbon fuels, firm power sources (possibly including nuclear power), CO2 removal systems, and long-duration energy storage technologies. “We will need to make progress on all fronts literally in this decade to come close to our goals for climate change mitigation,” he concluded.

    Global cooperation?

    Over two days, conference participants delved into many of the issues Moniz raised. In one of the first panels, scholars pondered whether the international community could forge a coordinated climate change response. The United States’ rift with China, especially over technology trade policies, loomed large.

    “Hatred of China is a bipartisan hobby and passion, but a blanket approach isn’t right, even for the sake of national security,” said Yasheng Huang, the Epoch Foundation Professor of Global Economics and Management at the MIT Sloan School of Management. “Although the United States and China working together would have huge effects for both countries, it is politically unpalatable in the short term,” said F. Taylor Fravel, the Arthur and Ruth Sloan Professor of Political Science and director of the MIT Security Studies Program. John E. Parsons, deputy director for research at the MIT Center for Energy and Environmental Policy Research, suggested that the United States should use this moment “to get our own act together … and start doing things,” such as building nuclear power plants in a cost-effective way.

    Debating carbon removal

    Several panels took up the matter of carbon emissions and the most promising technologies for contending with them. Charles Harvey, MIT professor of civil and environmental engineering, and Howard Herzog, a senior research engineer at MITEI, set the stage early, debating whether capturing carbon was essential to reaching net-zero targets.

    “I have no trouble getting to net zero without carbon capture and storage,” said David Keith, the Gordon McKay Professor of Applied Physics at Harvard University, in a subsequent roundtable. Carbon capture seems more risky to Keith than solar geoengineering, which involves injecting sulfur into the stratosphere to offset the heat-trapping impacts of CO2.

    There are new ways of moving carbon from where it’s a problem to where it’s safer. Kripa K. Varanasi, MIT professor of mechanical engineering, described a process for modulating the pH of ocean water to remove CO2. Timothy Krysiek, managing director for Equinor Ventures, talked about construction of a 900-kilometer pipeline transporting CO2 from northern Germany to a large-scale storage site located in Norwegian waters 3,000 meters below the seabed. “We can use these offshore Norwegian assets as a giant carbon sink for Europe,” he said.

    A startup showcase featured additional approaches to the carbon challenge. Mantel, which received MITEI Seed Fund money, is developing molten salt material to capture carbon for long-term storage or for use in generating electricity. Verdox has come up with an electrochemical process for capturing dilute CO2 from the atmosphere.

    But while much of the global warming discussion focuses on CO2, other greenhouse gases are menacing. Another panel discussed measuring and mitigating these pollutants. “Methane has 82 times more warming power than CO2 from the point of emission,” said Desirée L. Plata, MIT associate professor of civil and environmental engineering. “Cutting methane is the strongest lever we have to slow climate change in the next 25 years — really the only lever.”

    Steven Hamburg, chief scientist and senior vice president of the Environmental Defense Fund, cautioned that emission of hydrogen molecules into the atmosphere can cause increases in other greenhouse gases such as methane, ozone, and water vapor. As researchers and industry turn to hydrogen as a fuel or as a feedstock for commercial processes, “we will need to minimize leakage … or risk increasing warming,” he said.

    Supply chains, markets, and new energy ventures

    In panels on energy storage and the clean energy supply chain, there were interesting discussions of challenges ahead. High-density energy materials such as lithium, cobalt, nickel, copper, and vanadium, used in grid-scale energy storage, electric vehicles (EVs), and other clean energy technologies, can be difficult to source. “These often come from water-stressed regions, and we need to be super thoughtful about environmental stresses,” said Elsa Olivetti, the Esther and Harold E. Edgerton Associate Professor in Materials Science and Engineering. She also noted that in light of the explosive growth in demand for metals such as lithium, recycling EVs won’t be of much help. “The amount of material coming back from end-of-life batteries is minor,” she said, and it will remain so until EVs are much further along in their adoption cycle.

    Arvind Sanger, founder and managing partner of Geosphere Capital, said that the United States should be developing its own rare earths and minerals, although gaining the know-how will take time, and overcoming “NIMBYism” (not in my backyard-ism) is a challenge. Sanger emphasized that we must continue to use “denser sources of energy” to catalyze the energy transition over the next decade. In particular, Sanger noted that “for every transition technology, steel is needed,” and steel is made in furnaces that use coal and natural gas. “It’s completely woolly-headed to think we can just go to a zero-fossil fuel future in a hurry,” he said.

    The topic of power markets occupied another panel, which focused on ways to ensure the distribution of reliable and affordable zero-carbon energy. Integrating intermittent resources such as wind and solar into the grid requires a suite of retail markets and new digital tools, said Anuradha Annaswamy, director of MIT’s Active-Adaptive Control Laboratory. Tim Schittekatte, a postdoc at the MIT Sloan School of Management, proposed auctions as a way of insuring consumers against periods of high market costs.

    Another panel described the very different investment needs of new energy startups, such as longer research and development phases. Hooisweng Ow, technology principal at Eni Next LLC Ventures, which is developing drilling technology for geothermal energy, recommends joint development and partnerships to reduce risk. Michael Kearney SM ’11, PhD ’19, SM ’19 is a partner at The Engine, a venture firm built by MIT that invests in path-breaking technology to solve the toughest challenges in climate and other problems. The emergence of new technologies and markets will bring on “a labor transition on an order of magnitude never seen before in this country,” he said. “Workforce development is not a natural zone for startups … and this will have to change.”

    Supporting the global South

    The opportunities and challenges of the energy transition look quite different in the developing world. In conversation with Robert Armstrong, Luhut Binsar Pandjaitan, the coordinating minister for maritime affairs and investment of the Republic of Indonesia, reported that his “nation is rich with solar, wind, and energy transition minerals like nickel and copper,” but said it cannot on its own develop renewable energy, reduce carbon emissions, and improve grid infrastructure. “Education is a top priority, and we are very far behind in high technologies,” he said. “We need help and support from MIT to achieve our target.”

    Technologies that could springboard Indonesia and other nations of the global South toward their climate goals are emerging in MITEI-supported projects and at young companies MITEI helped spawn. Among the promising innovations unveiled at the conference are new materials and designs for cooling buildings in hot climates and reducing the environmental costs of construction, and a sponge-like substance that passively sucks moisture out of the air to lower the energy required for running air conditioners in humid climates.

    Other ideas on the move from lab to market have great potential for industrialized nations as well, such as a computational framework for maximizing the energy output of ocean-based wind farms; a process for using ammonia as a renewable fuel with no CO2 emissions; long-duration energy storage derived from the oxidation of iron; and a laser-based method for unlocking geothermal steam to drive power plants.

  • Machine learning facilitates “turbulence tracking” in fusion reactors

    Fusion, which promises practically unlimited, carbon-free energy using the same processes that power the sun, is at the heart of a worldwide research effort that could help mitigate climate change.

    A multidisciplinary team of researchers is now bringing tools and insights from machine learning to aid this effort. Scientists from MIT and elsewhere have used computer-vision models to identify and track turbulent structures that appear under the conditions needed to facilitate fusion reactions.

    Monitoring the formation and movements of these structures, called filaments or “blobs,” is important for understanding the heat and particle flows exiting from the reacting fuel, which ultimately determine the engineering requirements for the reactor walls. However, scientists typically study blobs using averaging techniques, which trade away details of individual structures in favor of aggregate statistics. Capturing information about individual blobs has required marking them manually in the video data, frame by frame.

    The researchers built a synthetic video dataset of plasma turbulence to make this process more effective and efficient. They used it to train four computer vision models, each of which identifies and tracks blobs. They trained the models to pinpoint blobs in the same ways that humans would.
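    To give a flavor of what such a synthetic dataset might look like, here is a minimal sketch in which Gaussian “blobs” follow random walks and their true centers are recorded as labels. Every function name and parameter below is invented for illustration; the paper’s actual simulation pipeline is far more sophisticated than this.

    ```python
    import numpy as np

    def make_blob_frame(size, centers, sigma=3.0, noise=0.05, rng=None):
        """Render one frame: Gaussian blobs at `centers` plus background noise."""
        rng = rng if rng is not None else np.random.default_rng()
        y, x = np.mgrid[0:size, 0:size]
        frame = np.zeros((size, size))
        for cy, cx in centers:
            frame += np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))
        return frame + rng.normal(0.0, noise, frame.shape)

    def make_blob_clip(n_frames=20, n_blobs=3, size=64, step=1.5, seed=0):
        """Synthetic clip: blobs follow random walks; labels are the true centers."""
        rng = np.random.default_rng(seed)
        centers = rng.uniform(10, size - 10, (n_blobs, 2))
        frames, labels = [], []
        for _ in range(n_frames):
            centers = np.clip(centers + rng.normal(0.0, step, centers.shape),
                              0, size - 1)
            frames.append(make_blob_frame(size, centers, rng=rng))
            labels.append(centers.copy())
        return np.stack(frames), np.stack(labels)

    frames, labels = make_blob_clip()
    ```

    Because the generator knows every blob’s true position, each frame comes automatically labeled, which is precisely the advantage of synthetic data for training detection models.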

    When the researchers tested the trained models using real video clips, the models could identify blobs with high accuracy — more than 80 percent in some cases. The models were also able to effectively estimate the size of blobs and the speeds at which they moved.

    Because millions of video frames are captured during just one fusion experiment, using machine-learning models to track blobs could give scientists much more detailed information.

    “Before, we could get a macroscopic picture of what these structures are doing on average. Now, we have a microscope and the computational power to analyze one event at a time. If we take a step back, what this reveals is the power available from these machine-learning techniques, and ways to use these computational resources to make progress,” says Theodore Golfinopoulos, a research scientist at the MIT Plasma Science and Fusion Center and co-author of a paper detailing these approaches.

    His fellow co-authors include lead author Woonghee “Harry” Han, a physics PhD candidate; senior author Iddo Drori, a visiting professor in the Computer Science and Artificial Intelligence Laboratory (CSAIL), faculty associate professor at Boston University, and adjunct at Columbia University; as well as others from the MIT Plasma Science and Fusion Center, the MIT Department of Civil and Environmental Engineering, and the Swiss Federal Institute of Technology at Lausanne in Switzerland. The research appears today in Nature Scientific Reports.

    Heating things up

    For more than 70 years, scientists have sought to use controlled thermonuclear fusion reactions to develop an energy source. To reach the conditions necessary for a fusion reaction, fuel must be heated to temperatures above 100 million degrees Celsius. (The core of the sun is about 15 million degrees Celsius.)

    A common method for containing this super-hot fuel, called plasma, is to use a tokamak. These devices utilize extremely powerful magnetic fields to hold the plasma in place and control the interaction between the exhaust heat from the plasma and the reactor walls.

    However, blobs appear as filaments falling out of the plasma at the very edge, between the plasma and the reactor walls. These random, turbulent structures affect how energy flows between the plasma and the reactor.

    “Knowing what the blobs are doing strongly constrains the engineering performance that your tokamak power plant needs at the edge,” adds Golfinopoulos.

    Researchers use a unique imaging technique to capture video of the plasma’s turbulent edge during experiments. An experimental campaign may last months; a typical day will produce about 30 seconds of data, corresponding to roughly 60 million video frames, with thousands of blobs appearing each second. This makes it impossible to track all blobs manually, so researchers rely on average sampling techniques that only provide broad characteristics of blob size, speed, and frequency.

    “On the other hand, machine learning provides a solution to this by blob-by-blob tracking for every frame, not just average quantities. This gives us much more knowledge about what is happening at the boundary of the plasma,” Han says.
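    In highly simplified form, the blob-by-blob tracking Han describes amounts to linking detected blob centers between consecutive frames and differencing their positions to get velocities. The functions below are illustrative stand-ins using greedy nearest-neighbor matching, not the computer-vision models used in the study.

    ```python
    import numpy as np

    def link_blobs(centers_a, centers_b, max_dist=5.0):
        """Greedily match each blob center in frame A to its nearest
        unclaimed neighbor in frame B, within a distance cutoff."""
        centers_a = np.asarray(centers_a, dtype=float)
        centers_b = np.asarray(centers_b, dtype=float)
        matches, used = [], set()
        for i, ca in enumerate(centers_a):
            d = np.linalg.norm(centers_b - ca, axis=1)
            j = int(np.argmin(d))
            if d[j] <= max_dist and j not in used:
                matches.append((i, j))
                used.add(j)
        return matches

    def blob_velocities(centers_a, centers_b, dt=1.0, max_dist=5.0):
        """Velocity of each matched blob between two consecutive frames."""
        a = np.asarray(centers_a, dtype=float)
        b = np.asarray(centers_b, dtype=float)
        return {i: (b[j] - a[i]) / dt
                for i, j in link_blobs(a, b, max_dist)}

    # Two blobs: one drifts right by 1 unit, the other up by 2 units.
    v = blob_velocities([[0, 0], [10, 10]], [[1, 0], [10, 12]])
    ```

    Applied frame by frame, this kind of linking yields per-blob trajectories and speeds rather than only the averaged statistics available from traditional sampling.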

    He and his co-authors took four well-established computer vision models, which are commonly used for applications like autonomous driving, and trained them to tackle this problem.

    Simulating blobs

    To train these models, they created a vast dataset of synthetic video clips that captured the blobs’ random and unpredictable nature.

    “Sometimes they change direction or speed, sometimes multiple blobs merge, or they split apart. These kinds of events were not considered before with traditional approaches, but we could freely simulate those behaviors in the synthetic data,” Han says.

    Creating synthetic data also allowed them to label each blob, which made the training process more effective, Drori adds.

    Using these synthetic data, they trained the models to draw boundaries around blobs, teaching them to closely mimic what a human scientist would draw.

    Then they tested the models using real video data from experiments. First, they measured how closely the boundaries the models drew matched up with actual blob contours.

    But they also wanted to see if the models predicted objects that humans would identify. They asked three human experts to pinpoint the centers of blobs in video frames and checked to see if the models predicted blobs in those same locations.

    The models were able to draw accurate blob boundaries, overlapping with the brightness contours that are considered ground truth, about 80 percent of the time. Their center predictions were similar to those of the human experts, and the models successfully predicted the theory-defined regime of the blobs, which agrees with the results from a traditional method.
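    Boundary overlap of this kind is commonly scored with intersection-over-union (IoU) between a predicted mask and the ground-truth mask. The helper below is a generic sketch of that standard metric, not the paper’s exact evaluation code.

    ```python
    import numpy as np

    def mask_iou(pred, truth):
        """Intersection-over-union between two boolean blob masks.
        Returns 1.0 when both masks are empty (perfect agreement)."""
        pred = np.asarray(pred, dtype=bool)
        truth = np.asarray(truth, dtype=bool)
        union = np.logical_or(pred, truth).sum()
        if union == 0:
            return 1.0
        return np.logical_and(pred, truth).sum() / union

    # Predicted mask covers one extra pixel: intersection 1, union 2.
    score = mask_iou([[1, 1], [0, 0]], [[1, 0], [0, 0]])
    ```

    A threshold on this score (for example, IoU above 0.5) is a typical way to count a predicted boundary as matching the ground-truth contour.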

    Now that they have shown the success of using synthetic data and computer vision models for tracking blobs, the researchers plan to apply these techniques to other problems in fusion research, such as estimating particle transport at the boundary of a plasma, Han says.

    They also made the dataset and models publicly available, and look forward to seeing how other research groups apply these tools to study the dynamics of blobs, says Drori.

    “Prior to this, there was a barrier to entry that mostly the only people working on this problem were plasma physicists, who had the datasets and were using their methods. There is a huge machine-learning and computer-vision community. One goal of this work is to encourage participation in fusion research from the broader machine-learning community toward the broader goal of helping solve the critical problem of climate change,” he adds.

    This research is supported, in part, by the U.S. Department of Energy and the Swiss National Science Foundation.

  • Doubling down on sustainability innovation in Kendall Square

    From its new headquarters in Cambridge’s Kendall Square, The Engine is investing in a number of “tough tech” startups seeking to transform the world’s energy systems. A few blocks away, the startup Inari is using gene editing to improve seeds’ resilience to climate change. On the MIT campus nearby, researchers are working on groundbreaking innovations to meet the urgent challenges our planet faces.

    Kendall Square is known as the biotech capital of the world, but as the latest annual meeting of the Kendall Square Association (KSA) made clear, it’s also a thriving hub of sustainability-related innovation.

    The Oct. 20 event, which began at MIT’s Welcome Center before moving to the MIT Museum for a panel discussion, brought together professionals from across Cambridge’s prolific innovation ecosystem — not just entrepreneurs working at startups, but also students, restaurant and retail shop owners, and people from local nonprofits.

    Titled “[Re] Imagining a Sustainable Future,” the meeting highlighted advances in climate change technologies that are afoot in Kendall Square, to help inspire and connect the community as it works toward common sustainability goals.

    “Our focus is on building a better future together — and together is the most important word there,” KSA Executive Director Beth O’Neill Maloney said in her opening remarks. “This is an incredibly innovative ecosystem and community that’s making changes that affect us here in Kendall Square and far, far beyond.”

    The pace of change

    The main event of the evening was a panel discussion moderated by Lee McGuire, the chief communications officer of the Broad Institute of MIT and Harvard. The panel featured Stuart Brown, chief financial officer at Inari; Emily Knight, chief operating officer at The Engine; and Joe Higgins, vice president for campus services and stewardship at MIT.

    “Sustainability is obviously one of the most important — if not the most important — challenge facing us as a society today,” said McGuire, opening the discussion. “Kendall Square is known for its work in biotech, life sciences, AI, and climate, and the more we dug into it the more we realized how interconnected all of those things are. The talent in Kendall Square wants to work on problems relevant for humanity, and the tools and skills you need for that can be very similar depending on the problem you’re working on.”

    Higgins, who oversees the creation of programs to reduce MIT’s environmental impact and improve the resilience of campus operations, focused on the enormity of the problem humanity is facing. He showed the audience a map of the U.S. power grid, with power plants and transmission lines illuminated in a complex web across the country, to underscore the scale of electrification that will be needed to mitigate the worst effects of climate change.

    “The U.S. power grid is the largest machine ever made by mankind,” Higgins said. “It’s been developed over 100 years; it has 7,000 generating plants that feed into it every day; it has 7 million miles of cable and wires; there are transformers and substations; and it lives in every single one of your walls. But people don’t think about it that much.”

    Many cities, states, and organizations like MIT have made commitments to shift to 100 percent clean energy in coming decades. Higgins wanted the audience to try to grasp what that’s going to take.

    “Hundreds of millions of devices and equipment across the planet are going to have to be swapped from fossil fuel to electric-based,” Higgins said. “Our cars, appliances, processes in industry, like making steel and concrete, are going to need to come from this grid. It’ll need to undergo a major modernization and transformation. The good news is it’s already changing.”

    Multiple panelists pointed to developments like the passing of the Inflation Reduction Act to show there was progress being made in reaching urgent sustainability goals.

    “There is a tide change coming, and it’s not only being driven by private capital,” Knight said. “There’s a huge opportunity here, and it’s a really important part of this [Kendall Square] ecosystem.”

    Chief among the topics of discussion was technology development. Even as leaders implement today’s technologies to decarbonize, people in Kendall Square keep a close eye on the new tech being developed and commercialized nearby.

    “I was trying to think about where we are with gene editing,” Brown said. “CRISPR’s been around for 10 years. Compare that to video games. Pong was the first video game when it came out in 1972. Today you have Chess.com using artificial intelligence to power chess games. On gene editing and a lot of these other technologies, we’re much closer to Pong than we are to where it’s going to be. We just can’t imagine today the technology changes we’re going to see over the next five to 10 years.”

    In that regard, Knight discussed some of the promising portfolio companies of The Engine, which invests in early stage, technologically innovative companies. In particular, she highlighted two companies seeking to transform the world’s energy systems with entirely new, 100 percent clean energy sources. MIT spinout Commonwealth Fusion Systems is working on nuclear fusion reactors that could provide abundant, safe, and constant streams of clean energy to our grids, while fellow MIT spinout Quaise Energy is seeking to harvest a new kind of deep geothermal energy using millimeter wave drilling technology.

    “All of our portfolio companies have a focus on sustainability in one way or another,” Knight said. “People who are working on these very hard technologies will change the world.”

    Knight says the kind of collaboration championed by the KSA is important for startups The Engine invests in.

    “We know these companies need a lot of people around them, whether from government, academia, advisors, corporate partners, anyone who can help them on their path, because for a lot of them this is a new path and a new market,” Knight said.

    Reasons for hope

    The KSA is made up of over 150 organizations across Kendall Square. From major employers like Sanofi, Pfizer, MIT, and the Broad Institute to local nonprofit organizations, startups, and independent shops and restaurants, the KSA represents the entire Kendall ecosystem.

    Early in the evening, O’Neill Maloney celebrated a visible example of sustainability in Kendall Square: a floating wetland built by the Charles River Conservancy, designed to naturally remove harmful algae blooms from the Charles River.

    Other examples of sustainability work in the neighborhood can be found at MIT. Under its “Fast Forward” climate action plan, the Institute has set a goal of eliminating direct emissions from its campus by 2050, including a near-term milestone of achieving net-zero emissions by 2026. Since 2014, when MIT launched a five-year plan for action on climate change, net campus emissions have already been cut by 20 percent by making its campus buildings more energy efficient, transitioning to electric vehicles, and enabling large-scale renewable energy projects, among other strategies.

    In the face of a daunting global challenge, such milestones are reason for optimism.

    “If anybody’s going to be able to do this [shift to 100 percent clean energy] and show how it can be done at an urban, city scale, it’s probably MIT and the city of Cambridge,” McGuire said. “We have a lot of good ingredients to figure this out.”

    Throughout the night, many speakers, attendees, and panelists echoed that sentiment. They said they see plenty of reasons for hope.

    “I’m absolutely optimistic,” Higgins said. “I’m seeing utility companies working with businesses working with regulators — people are coming together on this topic. And one of these new technologies being commercialized is going to change things before 2030, whether it’s fusion, deep geothermal, or small modular nuclear reactors. The technology is just moving so quickly.”

  • Finding community in high-energy-density physics

    Skylar Dannhoff knew one thing: She did not want to be working alone.

    As an undergraduate at Case Western Reserve University, she had committed to a senior project that often felt like solitary lab work, a feeling heightened by the pandemic. Though it was an enriching experience, she was determined to find a graduate school environment that would foster community, one “with lots of people, lots of collaboration; where it’s impossible to work until 3 a.m. without anyone noticing.” A unique group at the Plasma Science and Fusion Center (PSFC) looked promising: the High-Energy-Density Physics (HEDP) division, a lead partner in the National Nuclear Security Administration’s Center for Excellence at MIT.

    “It was a shot in the dark, just more of a whim than anything,” she says of her request to join HEDP on her application to MIT’s Department of Physics. “And then, somehow, they reached out to me. I told them I’m willing to learn about plasma. I didn’t know anything about it.”

    What she did know was that the HEDP group collaborates with other U.S. laboratories on an approach to creating fusion energy known as inertial confinement fusion (ICF). One version of the technique, known as direct-drive ICF, aims multiple laser beams symmetrically onto a spherical capsule filled with nuclear fuel. The other, indirect-drive ICF, instead aims multiple laser beams into a gold cylindrical cavity called a hohlraum, within which the spherical fuel capsule is positioned. The laser beams are configured to hit the inner hohlraum wall, generating a “bath” of X-rays, which in turn compress the fuel capsule.

    Imploding the capsule generates intense fusion energy within a tiny fraction of a second (an order of tens of picoseconds). In August 2021, the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory (LLNL) used this method to produce an historic fusion yield of 1.3 megajoules, putting researchers within reach of “ignition,” the point where the self-sustained fusion burn spreads into the surrounding fuel, leading to a high fusion-energy gain.  
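The energy and timescale figures above imply a staggering instantaneous power. A quick back-of-envelope calculation, assuming an illustrative burn duration of about 100 picoseconds (the text says only "an order of tens of picoseconds"):

```python
# Back-of-envelope: instantaneous fusion power implied by the NIF shot.
# The burn duration here is an assumed illustrative value, consistent
# with "an order of tens of picoseconds."
yield_joules = 1.3e6     # 1.3 MJ fusion yield (August 2021 NIF shot)
burn_seconds = 100e-12   # assumed ~100 ps burn duration

power_watts = yield_joules / burn_seconds
print(f"Average power during burn: {power_watts:.1e} W")
```

Even this rough estimate gives a power on the order of 10^16 watts during the burn, which is why such a small total energy release is nonetheless described as intense.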

    Joining the group just a month before this long-sought success, Dannhoff was impressed more with the response of her new teammates and the ICF community than with the scientific milestone. “I got a better appreciation for people who had spent their entire careers working on this project, just chugging along doing their best, ignoring the naysayers. I was excited for the people.”

    Dannhoff is now working toward extending the success of NIF and other ICF experiments, like the OMEGA laser at the University of Rochester’s Laboratory for Laser Energetics. Under the supervision of Senior Research Scientist Chikang Li, she is studying what happens to the flow of plasma within the hohlraum cavity during indirect ICF experiments, particularly for hohlraums with inner-wall aerogel foam linings. Experiments over the last decade have shown just how excruciatingly precise the symmetry in ICF targets must be. The more symmetric the X-ray drive, the more effective the implosion, and it is possible that these foam linings will improve the X-ray symmetry and drive efficiency.

    Dannhoff is specifically interested in studying the behavior of silicon and tantalum-based foam liners. She is as concerned with the challenges of the people at General Atomics (GA) and LLNL who are creating these targets as she is with the scientific outcome.

    “I just had a meeting with GA yesterday,” she notes. “And it’s a really tricky process. It’s kind of pushing the boundaries of what is doable at the moment. I got a much better sense of how demanding this project is for them, how much we’re asking of them.”

    What excites Dannhoff is the teamwork she observes, both at MIT and between ICF institutions around the United States. With roughly 10 graduate students and postdocs down the hall, each with an assigned lead role in lab management, she knows she can consult an expert on almost any question. And collaborators across the country are just an email away. “Any information that people can give you, they will give you, and usually very freely,” she notes. “Everyone just wants to see this work.”

    That Dannhoff is a natural team player is also evidenced in her hobbies. A hockey goalie, she prioritizes playing with MIT’s intramural teams, “because goalies are a little hard to come by. I just play with whoever needs a goalie on that night, and it’s a lot of fun.”

    She is also a member of the radio community, a fellowship she first embraced at Case Western — a moment she describes as a turning point in her life. “I literally don’t know who I would be today if I hadn’t figured out radio is something I’m interested in,” she admits. The MIT Radio Society provided the perfect landing pad for her arrival in Cambridge, full of the kinds of supportive, interesting, knowledgeable students she had befriended as an undergraduate. She credits radio with helping her realize that she could make her greatest contributions to science by focusing on engineering.

    Dannhoff gets philosophical as she marvels at the invisible waves that surround us.

    “Not just radio waves: every wave,” she asserts. “The voice is everywhere. Music, signal, space phenomena: it’s always around. And all we have to do is make the right little device and have the right circuit elements put in the right order to unmix and mix the signals and amplify them. And bada-bing, bada-boom, we’re talking with the universe.”

    “Maybe that epitomizes physics to me,” she adds. “We’re trying to listen to the universe, and it’s talking to us. We just have to come up with the right tools and hear what it’s trying to say.”

  • 3 Questions: Blue hydrogen and the world’s energy systems

    In the past several years, hydrogen has become an increasingly central part of the clean energy transition. Hydrogen can produce clean, on-demand energy that could complement variable renewable energy sources such as wind and solar power. That being said, pathways for deploying hydrogen at scale have yet to be fully explored. In particular, the optimal form of hydrogen production remains in question.

    MIT Energy Initiative Research Scientist Emre Gençer and researchers from a wide range of global academic and research institutions recently published “On the climate impacts of blue hydrogen production,” a comprehensive life-cycle assessment of blue hydrogen, a term referring to natural gas-based hydrogen production with carbon capture and storage. Here, Gençer describes blue hydrogen and the role that hydrogen will play more broadly in decarbonizing the world’s energy systems.

    Q: What are the differences between gray, green, and blue hydrogen?

    A: Though hydrogen does not generate any emissions directly when it is used, hydrogen production can have a huge environmental impact. Colors of hydrogen are increasingly used to distinguish different production methods and as a proxy to represent the associated environmental impact. Today, close to 95 percent of hydrogen production comes from fossil resources. As a result, the carbon dioxide (CO2) emissions from hydrogen production are quite high. Gray, black, and brown hydrogen refer to fossil-based production. Gray is the most common form of production and comes from natural gas, or methane, using steam methane reforming but without capturing CO2.

    There are two ways to move toward cleaner hydrogen production. One is applying carbon capture and storage to the fossil fuel-based hydrogen production processes. Natural gas-based hydrogen production with carbon capture and storage is referred to as blue hydrogen. If substantial amounts of CO2 from natural gas reforming are captured and permanently stored, such hydrogen could be a low-carbon energy carrier. The second way to produce cleaner hydrogen is by using electricity to produce hydrogen via electrolysis. In this case, the source of the electricity determines the environmental impact of the hydrogen, with the lowest impact being achieved when electricity is generated from renewable sources, such as wind and solar. This is known as green hydrogen.
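As a rough illustration of why the electricity source determines the impact of electrolytic hydrogen, here is a minimal sketch. The 50 kWh-per-kilogram electrolyzer figure and the grid intensities are assumptions for illustration, not numbers from the interview:

```python
# Sketch: carbon intensity of electrolytic hydrogen as a function of the
# electricity supply. Assumes ~50 kWh of electricity per kg of H2, a
# typical electrolyzer figure; actual values vary by technology.
KWH_PER_KG_H2 = 50.0

def h2_carbon_intensity(grid_kg_co2_per_kwh):
    """kg CO2 emitted per kg of hydrogen produced via electrolysis."""
    return KWH_PER_KG_H2 * grid_kg_co2_per_kwh

# Illustrative grid intensities (kg CO2 per kWh):
for name, g in [("coal-heavy grid", 0.9), ("gas-heavy grid", 0.4), ("wind/solar", 0.0)]:
    print(f"{name}: {h2_carbon_intensity(g):.0f} kg CO2 per kg H2")
```

On a coal-heavy grid the same electrolyzer produces hydrogen with a far higher footprint than gray hydrogen; only a low-carbon supply yields genuinely "green" hydrogen.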

    Q: What insights have you gleaned with a life cycle assessment (LCA) of blue hydrogen and other low-carbon energy systems?

    A: Mitigating climate change requires significant decarbonization of the global economy. Accurate estimation of cumulative greenhouse gas (GHG) emissions and their reduction pathways is critical irrespective of the source of emissions. An LCA approach quantifies the environmental impact of a commercial product, process, or service across all stages of its life cycle (cradle to grave). The LCA-based comparison of alternative energy pathways, fuel options, etc., provides an apples-to-apples comparison of low-carbon energy choices. In the context of low-carbon hydrogen, it is essential to understand the GHG impact of supply chain options. Depending on the production method, the contribution of life-cycle stages to the total emissions might vary. For example, with natural gas-based hydrogen production, emissions associated with the production and transport of natural gas might be a significant contributor, depending on leakage and flaring rates. If these rates are not precisely accounted for, the environmental impact of blue hydrogen can be underestimated. The same rationale holds for electricity-based hydrogen production: if the electricity is not supplied from low-carbon sources such as wind, solar, or nuclear, the carbon intensity of hydrogen can be significantly underestimated. In the case of nuclear, there are also other environmental impact considerations.

    An LCA approach — if performed with consistent system boundaries — can provide an accurate environmental impact comparison. It should also be noted that these estimations can only be as good as the assumptions and correlations used unless they are supported by measurements. 

    Q: What conditions are needed to make blue hydrogen production most effective, and how can it complement other decarbonization pathways?

    A: Hydrogen is considered one of the key vectors for the decarbonization of hard-to-abate sectors such as heavy-duty transportation. Currently, more than 95 percent of global hydrogen production is fossil-fuel based. In the next decade, massive amounts of hydrogen must be produced to meet this anticipated demand. It is very hard, if not impossible, to meet this demand without leveraging existing production assets. The immediate and relatively cost-effective option is to retrofit existing plants with carbon capture and storage (blue hydrogen).

    The environmental impact of blue hydrogen may vary over large ranges but depends on only a few key parameters: the methane emission rate of the natural gas supply chain, the CO2 removal rate at the hydrogen production plant, and the global warming metric applied. State-of-the-art reforming with high CO2 capture rates, combined with natural gas supply featuring low methane emissions, substantially reduces GHG emissions compared to conventional natural gas reforming. Under these conditions, blue hydrogen is compatible with low-carbon economies and exhibits climate change impacts at the upper end of the range of those caused by hydrogen production from renewable-based electricity. However, neither current blue nor green hydrogen production pathways render fully “net-zero” hydrogen without additional CO2 removal.
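The three parameters Gençer names can be combined in a minimal emissions model. Everything below, including the function name and the per-kilogram feedstock figures, is an illustrative assumption rather than the paper's actual methodology:

```python
# Minimal sketch of the three key parameters for blue hydrogen: methane
# leakage rate, CO2 capture rate, and the global warming metric applied.
# All numbers are illustrative assumptions, not results from the paper.
GWP_CH4 = {"GWP100": 30, "GWP20": 83}  # kg CO2-eq per kg CH4 (approximate)

def blue_h2_ghg(ch4_per_kg_h2, co2_per_kg_h2, leak_rate, capture_rate,
                metric="GWP100"):
    """kg CO2-eq per kg H2: uncaptured plant CO2 plus upstream CH4 leakage."""
    plant = co2_per_kg_h2 * (1 - capture_rate)
    upstream = ch4_per_kg_h2 * leak_rate * GWP_CH4[metric]
    return plant + upstream

# Assume ~9 kg CO2 and ~3.4 kg CH4 feed/fuel per kg H2 (illustrative figures)
low  = blue_h2_ghg(3.4, 9.0, leak_rate=0.002, capture_rate=0.93)
high = blue_h2_ghg(3.4, 9.0, leak_rate=0.03,  capture_rate=0.55, metric="GWP20")
print(f"best case:  {low:.1f} kg CO2-eq per kg H2")
print(f"worst case: {high:.1f} kg CO2-eq per kg H2")
```

The spread between the two cases shows why the same label, "blue hydrogen," can describe outcomes ranging from near-green to worse than unabated gray production.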

    This article appears in the Spring 2022 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • Processing waste biomass to reduce airborne emissions

    To prepare fields for planting, farmers the world over often burn corn stalks, rice husks, hay, straw, and other waste left behind from the previous harvest. In many places, the practice creates huge seasonal clouds of smog, contributing to air pollution that kills 7 million people globally a year, according to the World Health Organization.

    Annually, $120 billion worth of crop and forest residues are burned in the open worldwide — a major waste of resources in an energy-starved world, says Kevin Kung SM ’13, PhD ’17. Kung is working to transform this waste biomass into marketable products — and capitalize on a billion-dollar global market — through his MIT spinoff company, Takachar.

    Founded in 2015, Takachar develops small-scale, low-cost, portable equipment to convert waste biomass into solid fuel using a variety of thermochemical treatments, including one known as oxygen-lean torrefaction. The technology emerged from Kung’s PhD project in the lab of Ahmed Ghoniem, the Ronald C. Crane (1972) Professor of Mechanical Engineering at MIT.

    Biomass fuels, including wood, peat, and animal dung, are a major source of carbon emissions — but billions of people rely on such fuels for cooking, heating, and other household needs. “Currently, burning biomass generates 10 percent of the primary energy used worldwide, and the process is used largely in rural, energy-poor communities. We’re not going to change that overnight. There are places with no other sources of energy,” Ghoniem says.

    What Takachar’s technology provides is a way to use biomass more cleanly and efficiently by concentrating the fuel and eliminating contaminants such as moisture and dirt, thus creating a “clean-burning” fuel — one that generates less smoke. “In rural communities where biomass is used extensively as a primary energy source, torrefaction will address air pollution head-on,” Ghoniem says.

    Thermochemical treatment densifies biomass at elevated temperatures, converting plant materials that are typically loose, wet, and bulky into compact charcoal. Centralized processing plants exist, but collection and transportation present major barriers to utilization, Kung says. Takachar’s solution moves processing into the field: To date, Takachar has worked with about 5,500 farmers to process 9,000 metric tons of crops.

    Takachar estimates its technology has the potential to reduce carbon dioxide equivalent emissions by gigatons per year at scale. (“Carbon dioxide equivalent” is a measure used to gauge global warming potential.) In recognition, in 2021 Takachar won the first-ever Earthshot Prize in the clean air category, a £1 million prize funded by Prince William and Princess Kate’s Royal Foundation.

    Roots in Kenya

    As Kung tells the story, Takachar emerged from a class project that took him to Kenya — which explains the company’s name, a combination of takataka, which means “trash” in Swahili, and char, for the charcoal end product.

    It was 2011, and Kung was at MIT as a biological engineering grad student focused on cancer research. But “MIT gives students big latitude for exploration, and I took courses outside my department,” he says. In spring 2011, he signed up for a class known as 15.966 (Global Health Delivery Lab) in the MIT Sloan School of Management. The class brought Kung to Kenya to work with a nongovernmental organization in Nairobi’s Kibera, the largest urban slum in Africa.

    “We interviewed slum households for their views on health, and that’s when I noticed the charcoal problem,” Kung says. The problem, as Kung describes it, was that charcoal was everywhere in Kibera — piled up outside, traded by the road, and used as the primary fuel, even indoors. Its creation contributed to deforestation, and its smoke presented a serious health hazard.

    Eager to address this challenge, Kung secured fellowship support from the MIT International Development Initiative and the Priscilla King Gray Public Service Center to conduct more research in Kenya. In 2012, he formed Takachar as a team and received seed money from the MIT IDEAS Global Challenge, MIT Legatum Center for Development and Entrepreneurship, and D-Lab to produce charcoal from household organic waste. (This work also led to a fertilizer company, Safi Organics, that Kung founded in 2016 with the help of MIT IDEAS. But that is another story.)

    Meanwhile, Kung had another top priority: finding a topic for his PhD dissertation. Back at MIT, he met Alexander Slocum, the Walter M. May and A. Hazel May Professor of Mechanical Engineering, who on a long walk-and-talk along the Charles River suggested he turn his Kenya work into a thesis. Slocum connected him with Robert Stoner, deputy director for science and technology at the MIT Energy Initiative (MITEI) and founding director of MITEI’s Tata Center for Technology and Design. Stoner in turn introduced Kung to Ghoniem, who became his PhD advisor, while Slocum and Stoner joined his doctoral committee.

    Roots in MIT lab

    Ghoniem’s telling of the Takachar story begins, not surprisingly, in the lab. Back in 2010, he had a master’s student interested in renewable energy, and he suggested the student investigate biomass. That student, Richard Bates ’10, SM ’12, PhD ’16, began exploring the science of converting biomass to more clean-burning charcoal through torrefaction.

    Most torrefaction (also known as low-temperature pyrolysis) systems use external heating sources, but the lab’s goal, Ghoniem explains, was to develop an efficient, self-sustained reactor that would generate fewer emissions. “We needed to understand the chemistry and physics of the process, and develop fundamental scaling models, before going to the lab to build the device,” he says.

    By the time Kung joined the lab in 2013, Ghoniem was working with the Tata Center to identify technology suitable for developing countries and largely based on renewable energy. Kung was able to secure a Tata Fellowship and — building on Bates’ research — develop the small-scale, practical device for biomass thermochemical conversion in the field that launched Takachar.

    This device, which was patented by MIT with inventors Kung, Ghoniem, Stoner, MIT research scientist Santosh Shanbhogue, and Slocum, is self-contained and scalable. It burns a little of the biomass to generate heat; this heat bakes the rest of the biomass, releasing gases; the system then introduces air to enable these gases to combust, which burns off the volatiles and generates more heat, keeping the thermochemical reaction going.

    “The trick is how to introduce the right amount of air at the right location to sustain the process,” Ghoniem explains. “If you put in more air, that will burn the biomass. If you put in less, there won’t be enough heat to produce the charcoal. That will stop the reaction.”

    About 10 percent of the biomass is used as fuel to support the reaction, Kung says, adding that “90 percent is densified into a form that’s easier to handle and utilize.” He notes that the research received financial support from the Abdul Latif Jameel Water and Food Systems Lab and the Deshpande Center for Technological Innovation, both at MIT. Sonal Thengane, another postdoc in Ghoniem’s lab, participated in the effort to scale up the technology at the MIT Bates Lab (no relation to Richard Bates).
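The roughly 10/90 split described above can be sketched as a simple mass balance. The char-yield factor is an assumed illustrative value (torrefaction also drives off moisture and volatiles), not a figure from Takachar:

```python
# Simple mass balance for the self-sustained torrefaction reactor
# described above: about 10% of the incoming biomass is burned to keep
# the thermochemical reaction going, and the remainder is densified.
def torrefy(biomass_kg, fuel_fraction=0.10, char_yield=0.75):
    """Return (kg burned for process heat, kg of densified char product).

    char_yield is an illustrative assumption for the mass retained after
    moisture and volatiles are driven off.
    """
    burned = biomass_kg * fuel_fraction
    char = (biomass_kg - burned) * char_yield
    return burned, char

burned, char = torrefy(1000.0)  # one metric ton of crop residue
print(f"burned for heat: {burned:.0f} kg, char product: {char:.0f} kg")
```

Because the product is denser and drier than the raw residue, each truckload carries more usable energy, which is the basis for the transport-cost savings described below.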

    The charcoal produced is more valuable per ton and easier to transport and sell than biomass, reducing transportation costs by two-thirds and giving farmers an additional income opportunity — and an incentive not to burn agricultural waste, Kung says. “There’s more income for farmers, and you get better air quality.”

    Roots in India

    When Kung became a Tata Fellow, he joined a program founded to take on the biggest challenges of the developing world, with a focus on India. According to Stoner, Tata Fellows, including Kung, typically visit India twice a year and spend six to eight weeks meeting stakeholders in industry, the government, and in communities to gain perspective on their areas of study.

    “A unique part of Tata is that you’re considering the ecosystem as a whole,” says Kung, who interviewed hundreds of smallholder farmers, met with truck drivers, and visited existing biomass processing plants during his Tata trips to India. (Along the way, he also connected with Indian engineer Vidyut Mohan, who became Takachar’s co-founder.)

    “It was very important for Kevin to be there walking about, experimenting, and interviewing farmers,” Stoner says. “He learned about the lives of farmers.”

    These experiences helped instill in Kung an appreciation for small farmers that still drives him today as Takachar rolls out its first pilot programs, tinkers with the technology, grows its team (now up to 10), and endeavors to build a revenue stream. So, while Takachar has gotten a lot of attention and accolades — from the IDEAS award to the Earthshot Prize — Kung says what motivates him is the prospect of improving people’s lives.

    The dream, he says, is to empower communities to help both the planet and themselves. “We’re excited about the environmental justice perspective,” he says. “Our work brings production and carbon removal or avoidance to rural communities — providing them with a way to convert waste, make money, and reduce air pollution.”

    This article appears in the Spring 2022 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • Ocean scientists measure sediment plume stirred up by deep-sea-mining vehicle

    What will be the impact to the ocean if humans are to mine the deep sea? It’s a question that’s gaining urgency as interest in marine minerals has grown.

    The ocean’s deep-sea bed is scattered with ancient, potato-sized rocks called “polymetallic nodules” that contain nickel and cobalt — minerals in high demand for manufacturing batteries, such as those that power electric vehicles and store renewable energy, with demand expected to grow further alongside factors such as increasing urbanization. The deep ocean contains vast quantities of mineral-laden nodules, but the impact of mining the ocean floor is both unknown and highly contested.

    Now MIT ocean scientists have shed some light on the topic, with a new study on the cloud of sediment that a collector vehicle would stir up as it picks up nodules from the seafloor.

    The study, appearing today in Science Advances, reports the results of a 2021 research cruise to a region of the Pacific Ocean known as the Clarion Clipperton Zone (CCZ), where polymetallic nodules abound. There, researchers equipped a pre-prototype collector vehicle with instruments to monitor sediment plume disturbances as the vehicle maneuvered across the seafloor, 4,500 meters below the ocean’s surface. Through a sequence of carefully conceived maneuvers, the MIT scientists used the vehicle to monitor its own sediment cloud and measure its properties.

    Their measurements showed that the vehicle created a dense plume of sediment in its wake, which spread under its own weight, in a phenomenon known in fluid dynamics as a “turbidity current.” As it gradually dispersed, the plume remained relatively low, staying within 2 meters of the seafloor, as opposed to immediately lofting higher into the water column as had been postulated.

    “It’s quite a different picture of what these plumes look like, compared to some of the conjecture,” says study co-author Thomas Peacock, professor of mechanical engineering at MIT. “Modeling efforts of deep-sea mining plumes will have to account for these processes that we identified, in order to assess their extent.”

    The study’s co-authors include lead author Carlos Muñoz-Royo, Raphael Ouillon, and Souha El Mousadik of MIT; and Matthew Alford of the Scripps Institution of Oceanography.

    Deep-sea maneuvers

    To collect polymetallic nodules, some mining companies are proposing to deploy tractor-sized vehicles to the bottom of the ocean. The vehicles would vacuum up the nodules along with some sediment along their path. The nodules and sediment would then be separated inside of the vehicle, with the nodules sent up through a riser pipe to a surface vessel, while most of the sediment would be discharged immediately behind the vehicle.

    Peacock and his group have previously studied the dynamics of the sediment plume that associated surface operation vessels may pump back into the ocean. In their current study, they focused on the opposite end of the operation, to measure the sediment cloud created by the collectors themselves.

    In April 2021, the team joined an expedition led by Global Sea Mineral Resources NV (GSR), a Belgian marine engineering contractor that is exploring the CCZ for ways to extract metal-rich nodules. A European-based science team, Mining Impacts 2, also conducted separate studies in parallel. The cruise was the first in over 40 years to test a “pre-prototype” collector vehicle in the CCZ. The machine, called Patania II, stands about 3 meters high, spans 4 meters wide, and is about one-third the size of what a commercial-scale vehicle is expected to be.

    While the contractor tested the vehicle’s nodule-collecting performance, the MIT scientists monitored the sediment cloud created in the vehicle’s wake. They did so using two maneuvers that the vehicle was programmed to take: a “selfie,” and a “drive-by.”

    Both maneuvers began in the same way, with the vehicle setting out in a straight line, all its suction systems turned on. The researchers let the vehicle drive along for 100 meters, collecting any nodules in its path. Then, in the “selfie” maneuver, they directed the vehicle to turn off its suction systems and double back around to drive through the cloud of sediment it had just created. The vehicle’s installed sensors measured the concentration of sediment during this “selfie” maneuver, allowing the scientists to monitor the cloud within minutes of the vehicle stirring it up.


    A movie of the Patania II pre-prototype collector vehicle entering, driving through, and leaving the low-lying turbidity current plume as part of a selfie operation. For scale, the instrumentation post attached to the front of the vehicle reaches about 3 meters above the seabed. The movie is sped up by a factor of 20. Credit: Global Sea Mineral Resources

    For the “drive-by” maneuver, the researchers placed a sensor-laden mooring 50 to 100 meters from the vehicle’s planned tracks. As the vehicle drove along collecting nodules, it created a plume that eventually spread past the mooring after an hour or two. This “drive-by” maneuver enabled the team to monitor the sediment cloud over a longer timescale of several hours, capturing the plume evolution.

    Out of steam

    Over multiple vehicle runs, Peacock and his team were able to measure and track the evolution of the sediment plume created by the deep-sea-mining vehicle.

    “We saw that the vehicle would be driving in clear water, seeing the nodules on the seabed,” Peacock says. “And then suddenly there’s this very sharp sediment cloud coming through when the vehicle enters the plume.”

    From the selfie views, the team observed a behavior that was predicted by some of their previous modeling studies: The vehicle stirred up a heavy amount of sediment that was dense enough that, even after some mixing with the surrounding water, it generated a plume that behaved almost as a separate fluid, spreading under its own weight in what’s known as a turbidity current.

    “The turbidity current spreads under its own weight for some time, tens of minutes, but as it does so, it’s depositing sediment on the seabed and eventually running out of steam,” Peacock says. “After that, the ocean currents get stronger than the natural spreading, and the sediment transitions to being carried by the ocean currents.”
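Peacock's description of the plume "spreading under its own weight" is the classic behavior of a gravity current, whose front speed scales with the square root of the reduced gravity times the layer height. A hedged sketch with purely illustrative numbers, not measurements from the study:

```python
import math

# Sketch of why a sediment-laden layer behaves "almost as a separate
# fluid": being slightly denser than the ambient water, it spreads as a
# gravity (turbidity) current with front speed u ~ Fr * sqrt(g' * h),
# where g' is the reduced gravity. All values are illustrative.
def turbidity_front_speed(excess_density, ambient_density=1025.0,
                          height=2.0, froude=1.0):
    """Front speed (m/s) of a dense layer of given height (m)."""
    g_prime = 9.81 * excess_density / ambient_density  # reduced gravity
    return froude * math.sqrt(g_prime * height)

# A few kg/m^3 of suspended sediment in a layer ~2 m thick:
u = turbidity_front_speed(excess_density=2.0)
print(f"front speed: {u:.2f} m/s")
```

Speeds of this order (tens of centimeters per second) decay as sediment settles out, which is consistent with the current "running out of steam" and the ambient ocean currents eventually taking over.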

    By the time the sediment drifted past the mooring, the researchers estimate that 92 to 98 percent of the sediment either settled back down or remained within 2 meters of the seafloor as a low-lying cloud. There is, however, no guarantee that the sediment always stays there rather than drifting further up in the water column. Recent and future studies by the research team are looking into this question, with the goal of consolidating understanding for deep-sea mining sediment plumes.

    “Our study clarifies the reality of what the initial sediment disturbance looks like when you have a certain type of nodule mining operation,” Peacock says. “The big takeaway is that there are complex processes like turbidity currents that take place when you do this kind of collection. So, any effort to model a deep-sea-mining operation’s impact will have to capture these processes.”

    “Sediment plumes produced by deep-seabed mining are a major concern with regards to environmental impact, as they will spread over potentially large areas beyond the actual site of mining and affect deep-sea life,” says Henko de Stigter, a marine geologist at the Royal Netherlands Institute for Sea Research, who was not involved in the research. “The current paper provides essential insight in the initial development of these plumes.”

    This research was supported, in part, by the National Science Foundation, ARPA-E, the 11th Hour Project, the Benioff Ocean Initiative, and Global Sea Mineral Resources. The funders had no role in any aspects of the research analysis, the research team states.