More stories

  • Smarter regulation of global shipping emissions could improve air quality and health outcomes

    Emissions from shipping activities around the world account for nearly 3 percent of total human-caused greenhouse gas emissions, and could increase by up to 50 percent by 2050, making them an important and often overlooked target for global climate mitigation. At the same time, shipping-related emissions of additional pollutants, particularly nitrogen and sulfur oxides, pose a significant threat to global health, as they degrade air quality enough to cause premature deaths.

    The main source of shipping emissions is the combustion of heavy fuel oil in large diesel engines, which disperses pollutants into the air over coastal areas. The nitrogen and sulfur oxides emitted from these engines contribute to the formation of PM2.5, airborne particulates with diameters of up to 2.5 micrometers that are linked to respiratory and cardiovascular diseases. Previous studies have estimated that PM2.5 from shipping emissions contributes to about 60,000 cardiopulmonary and lung cancer deaths each year, and that IMO 2020, an international policy that caps engine fuel sulfur content at 0.5 percent, could reduce PM2.5 concentrations enough to lower annual premature mortality by 34 percent.

    Global shipping emissions arise from both domestic (between ports in the same country) and international (between ports of different countries) shipping activities, and are governed by national and international policies, respectively. Consequently, effective mitigation of the air quality and health impacts of global shipping emissions will require that policymakers quantify the relative contributions of domestic and international shipping activities to these adverse impacts in an integrated global analysis.

    A new study in the journal Environmental Research Letters provides that kind of analysis for the first time. To that end, the study’s co-authors — researchers from MIT and the Hong Kong University of Science and Technology — implement a three-step process. First, they create global shipping emission inventories for domestic and international vessels based on ship activity records of the year 2015 from the Automatic Identification System (AIS). Second, they apply an atmospheric chemistry and transport model to this data to calculate PM2.5 concentrations generated by that year’s domestic and international shipping activities. Finally, they apply a model that estimates mortalities attributable to these pollutant concentrations.

    The researchers find that approximately 94,000 premature deaths were associated with PM2.5 exposure due to maritime shipping in 2015 — 83 percent international and 17 percent domestic. While international shipping accounted for the vast majority of the global health impact, some regions experienced significant health burdens from domestic shipping operations. This is especially true in East Asia: In China, 44 percent of shipping-related premature deaths were attributable to domestic shipping activities.

    “By comparing the health impacts from international and domestic shipping at the global level, our study could help inform decision-makers’ efforts to coordinate shipping emissions policies across multiple scales, and thereby reduce the air quality and health impacts of these emissions more effectively,” says Yiqi Zhang, a researcher at the Hong Kong University of Science and Technology who led the study as a visiting student supported by the MIT Joint Program on the Science and Policy of Global Change.

    In addition to estimating the air-quality and health impacts of domestic and international shipping, the researchers evaluate potential health outcomes under different shipping emissions-control policies that are either currently in effect or likely to be implemented in different regions in the near future.

    They estimate about 30,000 avoided deaths per year under a scenario consistent with IMO 2020, an international regulation limiting the sulfur content in shipping fuel oil to 0.5 percent — a finding that tracks with previous studies. Further strengthening regulations on sulfur content would yield only slight improvement; limiting sulfur content to 0.1 percent reduces annual shipping-attributable PM2.5-related premature deaths by an additional 5,000. In contrast, regulating nitrogen oxides through a Tier III NOx standard would produce far greater benefits than a 0.1-percent sulfur cap, yielding 33,000 further avoided deaths.
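
    A quick tally makes the relative payoffs concrete. The sketch below is a minimal Python calculation, not part of the study, using only the figures quoted above; reading the 0.1-percent sulfur and Tier III NOx gains as increments over IMO 2020 is our interpretation of the phrasing, and the scenario labels are ours.

    ```python
    # Tally of the policy scenarios described above. The baseline and the
    # avoided-death figures are the study's numbers as reported; treating the
    # 0.1 percent sulfur and Tier III NOx gains as increments over IMO 2020
    # is our reading of the text, and the labels are ours.

    baseline_deaths = 94_000  # shipping-attributable PM2.5 deaths in 2015

    avoided_per_year = {
        "IMO 2020 (0.5% sulfur cap)": 30_000,
        "Tighter 0.1% sulfur cap":    30_000 + 5_000,
        "IMO 2020 + Tier III NOx":    30_000 + 33_000,
    }

    for policy, avoided in avoided_per_year.items():
        remaining = baseline_deaths - avoided
        print(f"{policy:28s} ~{avoided:6,} avoided, ~{remaining:6,} remaining/year")
    ```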

    “Areas with high proportions of mortalities contributed by domestic shipping could effectively use domestic regulations to implement controls,” says study co-author Noelle Selin, a professor at MIT’s Institute for Data, Systems and Society and Department of Earth, Atmospheric and Planetary Sciences, and a faculty affiliate of the MIT Joint Program. “For other regions where much damage comes from international vessels, further international cooperation is required to mitigate impacts.”

  • Global warming begets more warming, new paleoclimate study finds

    It is increasingly clear that the prolonged drought conditions, record-breaking heat, sustained wildfires, and frequent, more extreme storms experienced in recent years are a direct result of rising global temperatures brought on by humans’ addition of carbon dioxide to the atmosphere. And a new MIT study on extreme climate events in Earth’s ancient history suggests that today’s planet may become more volatile as it continues to warm.

    The study, appearing today in Science Advances, examines the paleoclimate record of the last 66 million years, during the Cenozoic era, which began shortly after the extinction of the dinosaurs. The scientists found that during this period, fluctuations in the Earth’s climate exhibited a surprising “warming bias.” In other words, there were far more warming events — periods of prolonged global warming, lasting thousands to tens of thousands of years — than cooling events. What’s more, warming events tended to be more extreme, with greater shifts in temperature, than cooling events.

    The researchers say a possible explanation for this warming bias may lie in a “multiplier effect,” whereby a modest degree of warming — for instance from volcanoes releasing carbon dioxide into the atmosphere — naturally speeds up certain biological and chemical processes that enhance these fluctuations, leading, on average, to still more warming.

    Interestingly, the team observed that this warming bias disappeared about 5 million years ago, around the time when ice sheets started forming in the Northern Hemisphere. It’s unclear what effect the ice has had on the Earth’s response to climate shifts. But as today’s Arctic ice recedes, the new study suggests that a multiplier effect may kick back in, and the result may be a further amplification of human-induced global warming.

    “The Northern Hemisphere’s ice sheets are shrinking, and could potentially disappear as a long-term consequence of human actions,” says the study’s lead author Constantin Arnscheidt, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “Our research suggests that this may make the Earth’s climate fundamentally more susceptible to extreme, long-term global warming events such as those seen in the geologic past.”

    Arnscheidt’s study co-author is Daniel Rothman, professor of geophysics at MIT and co-founder and co-director of MIT’s Lorenz Center.

    A volatile push

    For their analysis, the team consulted large databases of sediments containing deep-sea benthic foraminifera — single-celled organisms that have been around for hundreds of millions of years and whose hard shells are preserved in sediments. The composition of these shells is affected by ocean temperatures as the organisms grow; the shells are therefore considered a reliable proxy for the Earth’s ancient temperatures.

    For decades, scientists have analyzed the composition of these shells, collected from all over the world and dated to various time periods, to track how the Earth’s temperature has fluctuated over millions of years. 

    “When using these data to study extreme climate events, most studies have focused on individual large spikes in temperature, typically of a few degrees Celsius warming,” Arnscheidt says. “Instead, we tried to look at the overall statistics and consider all the fluctuations involved, rather than picking out the big ones.”

    The team first carried out a statistical analysis of the data and observed that, over the last 66 million years, the distribution of global temperature fluctuations didn’t resemble a standard bell curve, with symmetric tails representing an equal probability of extreme warm and extreme cool fluctuations. Instead, the curve was noticeably lopsided, skewed toward more warm than cool events. The curve also exhibited a noticeably longer tail, representing warm events that were more extreme, or of higher temperature, than the most extreme cold events.

    “This indicates there’s some sort of amplification relative to what you would otherwise have expected,” Arnscheidt says. “Everything’s pointing to something fundamental that’s causing this push, or bias toward warming events.”

    “It’s fair to say that the Earth system becomes more volatile, in a warming sense,” Rothman adds.

    A warming multiplier

    The team wondered whether this warming bias might have been a result of “multiplicative noise” in the climate-carbon cycle. Scientists have long understood that higher temperatures, up to a point, tend to speed up biological and chemical processes. Because the carbon cycle, which is a key driver of long-term climate fluctuations, is itself composed of such processes, increases in temperature may lead to larger fluctuations, biasing the system towards extreme warming events.

    In mathematics, there exists a set of equations that describes such general amplifying, or multiplicative, effects. The researchers applied this multiplicative theory to their analysis to see whether the equations could predict the asymmetrical distribution, including the degree of its skew and the length of its tails.

    In the end, they found that the data, and the observed bias toward warming, could be explained by the multiplicative theory. In other words, it’s very likely that, over the last 66 million years, periods of modest warming were on average further enhanced by multiplier effects, such as the response of biological and chemical processes that further warmed the planet.
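
    The mechanism is easy to see in a toy model. The sketch below is a minimal illustration of multiplicative noise, and emphatically not the authors’ actual formulation: it evolves a temperature anomaly that relaxes toward zero while being kicked by noise whose amplitude grows when the anomaly is warm, and it compares the skewness of the resulting distribution against a purely additive control. All parameter values are arbitrary.

    ```python
    # Toy comparison of additive vs. multiplicative noise. In the
    # multiplicative case the noise amplitude grows with warm anomalies,
    # which skews the distribution of anomalies toward warming.
    import numpy as np

    rng = np.random.default_rng(0)
    n_steps, dt = 200_000, 1.0
    tau = 50.0     # relaxation time of the anomaly (arbitrary units)
    sigma = 0.1    # base noise amplitude (arbitrary)
    beta = 0.5     # strength of the multiplicative term (arbitrary)

    def simulate(multiplicative: bool) -> np.ndarray:
        T = np.zeros(n_steps)  # temperature anomaly
        for i in range(1, n_steps):
            amp = sigma * (1.0 + beta * max(T[i - 1], 0.0)) if multiplicative else sigma
            T[i] = T[i - 1] - (T[i - 1] / tau) * dt + amp * np.sqrt(dt) * rng.standard_normal()
        return T

    for label, mult in [("additive", False), ("multiplicative", True)]:
        T = simulate(mult)
        skew = np.mean((T - T.mean()) ** 3) / T.std() ** 3
        print(f"{label:>14} noise: skewness of anomaly distribution = {skew:+.2f}")
    ```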

    As part of the study, the researchers also looked at the correlation between past warming events and changes in Earth’s orbit. Over hundreds of thousands of years, Earth’s orbit around the sun regularly becomes more or less elliptical. But scientists have wondered why many past warming events appeared to coincide with these changes, and why these events feature outsized warming compared with what the change in Earth’s orbit could have wrought on its own.

    So, Arnscheidt and Rothman incorporated the Earth’s orbital changes into the multiplicative model and their analysis of Earth’s temperature changes, and found that multiplier effects could predictably amplify, on average, the modest temperature rises due to changes in Earth’s orbit.

    “Climate warms and cools in synchrony with orbital changes, but the orbital cycles themselves would predict only modest changes in climate,” Rothman says. “But if we consider a multiplicative model, then modest warming, paired with this multiplier effect, can result in extreme events that tend to occur at the same time as these orbital changes.”

    “Humans are forcing the system in a new way,” Arnscheidt adds. “And this study is showing that, when we increase temperature, we’re likely going to interact with these natural, amplifying effects.”

    This research was supported, in part, by MIT’s School of Science.

  • Finding common ground in Malden

    When disparate groups convene around a common goal, exciting things can happen.

    That is the inspiring story unfolding in Malden, Massachusetts, a city of about 60,000 — nearly half people of color — where a new type of community coalition continues to gain momentum on its plan to build a climate-resilient waterfront park along its river. The Malden River Works (MRW) project, recipient of the inaugural Leventhal City Prize, is seeking to connect to a contiguous greenway network where neighboring cities already have visitors coming to their parks and enjoying recreational boating. More important, the MRW is changing the model for how cities address civic growth, community engagement, equitable climate resilience, and environmental justice.

    The MRW’s steering committee consists of eight resident leaders of color, a resident environmental advocate, and three city representatives. One of the committee’s primary responsibilities is providing direction to the MRW’s project team, which includes urban designers, watershed and climate resilience planners, and a community outreach specialist. MIT’s Kathleen Vandiver, director of the Community Outreach Education and Engagement Core at MIT’s Center for Environmental Health Sciences (CEHS), and Marie Law Adams MArch ’06, a lecturer in the School of Architecture and Planning’s Department of Urban Studies and Planning (DUSP), serve on the project team.

    “This governance structure is somewhat unusual,” says Adams. “More typical is having city government as the primary decision-maker. It is important that one of the first things our team did was build a steering committee that is the decision maker on this project.”

    Evan Spetrini ’18 is the senior planner and policy manager for the Malden Redevelopment Authority and sits on both the steering committee and project team. He says placing the decision-making power with the steering committee and building it to be representative of marginalized communities was intentional. 

    “Changing that paradigm of power and decision-making in planning processes was the way we approached social resilience,” says Spetrini. “We have always intended this project to be a model for future planning projects in Malden.”

    This model ushers in a new chapter in the history of a city founded in 1640.

    Located about six miles north of Boston, Malden was home to mills and factories that used the Malden River for power, and the river served as a dumping ground for industrial waste over the last two centuries. Decades after the city’s industrial decline, there is little to no public access to the river. Many residents were not even aware there was a river in their city. Before the project was underway, Vandiver initiated a collaborative effort to evaluate the quality of the river’s water. Working with the Mystic River Watershed Association, Gradient Corporation, and CEHS, she had water samples tested and a risk analysis conducted.

    “Having the study done made it clear the public could safely enjoy boating on the water,” says Vandiver. “It was a breakthrough that allowed people to see the river as an amenity.”

    A team effort

    Marcia Manong had never seen the river, but the Malden resident was persuaded to join the steering committee with the promise the project would be inclusive and of value to the community. Manong has been involved with civic engagement most of her life in the United States and for 20 years in South Africa.

    “It wasn’t going to be a marginalized, tokenized engagement,” says Manong. “It was clear to me that they were looking for people that would actually be sitting at the table.”

    Manong agreed to recruit additional people of color to join the team. From the beginning, she says, language was a huge barrier, given that nearly half of Malden’s residents do not speak English at home. Finding the translation efforts at their public events to be inadequate, the steering committee directed more funds to be made available for translation in several languages when public meetings began being held over Zoom this past year.

    “It’s unusual for most cities to spend this money, but our population is so diverse that we require it,” says Manong. “We have to do it. If the steering committee wasn’t raising this issue with the rest of the team, perhaps this would be overlooked.”

    Another alteration the steering committee has made is how the project engages with the community. While public attendance at meetings had been successful before the pandemic, Manong says they are “constantly working” to reach new people. One method has been to request invitations to attend the virtual meetings of other organizations to keep them apprised of the project.

    “We’ve said that people feel most comfortable when they’re in their own surroundings, so why not go where the people are instead of trying to get them to where we are,” says Manong.

    Buoyed by the $100,000 grant from MIT’s Norman B. Leventhal Center for Advanced Urbanism (LCAU) in 2019, the project team worked with Malden’s Department of Public Works, which is located along the river, to redesign its site and buildings and to study how to create a flood-resistant public open space as well as an elevated greenway path, connecting with other neighboring cities’ paths. The park’s plans also call for 75 new trees to reduce the urban heat island effect, open lawn for gathering, and a dock for boating on the river.

    “The storm water infrastructure in these cities is old and isn’t going to be able to keep up with increased precipitation,” says Adams. “We’re looking for ways to store as much water as possible on the DPW site so we can hold it and release it more gradually into the river to avoid flooding.”

    The project along the 2.3-mile-long river continues to receive attention. Recently, the city of Malden was awarded a 2021 Accelerating Climate Resilience Grant of more than $50,000 from the state’s Metropolitan Area Planning Council and the Barr Foundation to support the project. Last fall, the project was awarded a $150,015 Municipal Vulnerability Preparedness Action Grant. Both awards are being directed to fund engineering work to refine the project’s design.

    “We — and in general, the planning profession — are striving to create more community empowerment in decision-making as to what happens to their community,” says Spetrini. “Putting the power in the community ensures that it’s actually responding to the needs of the community.”

    Contagious enthusiasm

    Manong says she’s happy she got involved with the project and believes the new governance structure is making a difference.

    “This project is definitely engaging with communities of color in a manner that is transformative and that is looking to build a long-lasting power dynamic built on trust,” she says. “It’s a new energized civic engagement and we’re making that happen. It’s very exciting.”

    Spetrini finds the challenge of creating an open space that’s publicly accessible and alongside an active work site professionally compelling.

    “There is a way to preserve the industrial employment base while also giving the public greater access to this natural resource,” he says. “It has real implications for other communities to follow this type of model.”

    Despite the pandemic this past year, enthusiasm for the project is palpable. For Spetrini, a Malden resident, it’s building “the first significant piece of what has been envisioned as the Malden River Greenway.” Adams sees the total project as a way to build social resilience as well as to garner community interest in climate resilience. For Vandiver, it’s the implications for improved community access.

    “From a health standpoint, everybody has learned from Covid-19 that the health aspects of walking in nature are really restorative,” says Vandiver. “Creating greater green space gives more attention to health issues. These are seemingly small side benefits, but they’re huge for mental health benefits.”

    Leventhal City Prize’s next cycle

    The Leventhal City Prize was established by the LCAU to catalyze innovative, interdisciplinary urban design and planning approaches worldwide to improve both the environment and the quality of life for residents. Support for the LCAU was provided by the Muriel and Norman B. Leventhal Family Foundation and the Sherry and Alan Leventhal Family Foundation.

    “We’re thrilled with the inaugural recipients of the award and the extensive work they’ve undertaken that is being held up as an exemplary model for others to learn from,” says Sarah Williams, LCAU director and a professor in DUSP. “Their work reflects the prize’s intent. We look forward to catalyzing these types of collaborative partnerships in the next prize cycle.”

    Submissions for the next cycle of the Leventhal City Prize will open in early 2022.

  • Using graphene foam to filter toxins from drinking water

    Some kinds of water pollution, such as algal blooms and plastics that foul rivers, lakes, and marine environments, lie in plain sight. But other contaminants are not so readily apparent, which makes their impact potentially more dangerous. Among these invisible substances is uranium. Leaching into water resources from mining operations, nuclear waste sites, or from natural subterranean deposits, the element can now be found flowing out of taps worldwide.

    In the United States alone, “many areas are affected by uranium contamination, including the High Plains and Central Valley aquifers, which supply drinking water to 6 million people,” says Ahmed Sami Helal, a postdoc in the Department of Nuclear Science and Engineering. This contamination poses a near and present danger. “Even small concentrations are bad for human health,” says Ju Li, the Battelle Energy Alliance Professor of Nuclear Science and Engineering and professor of materials science and engineering.

    Now, a team led by Li has devised a highly efficient method for removing uranium from drinking water. Applying an electric charge to graphene oxide foam, the researchers can capture uranium in solution, which precipitates out as a condensed solid crystal. The foam may be reused up to seven times without losing its electrochemical properties. “Within hours, our process can purify a large quantity of drinking water below the EPA limit for uranium,” says Li.

    A paper describing this work was published this week in Advanced Materials. The co-first authors are Helal and Chao Wang, a postdoc at MIT during the study who is now with the School of Materials Science and Engineering at Tongji University, Shanghai. Researchers from Argonne National Laboratory, Taiwan’s National Chiao Tung University, and the University of Tokyo also participated in the research. The Defense Threat Reduction Agency (U.S. Department of Defense) funded later stages of this work.

    Targeting the contaminant

    The project, launched three years ago, began as an effort to find better approaches to environmental cleanup of heavy metals from mining sites. To date, remediation methods for such metals as chromium, cadmium, arsenic, lead, mercury, radium, and uranium have proven limited and expensive. “These techniques are highly sensitive to organics in water, and are poor at separating out the heavy metal contaminants,” explains Helal. “So they involve long operation times, high capital costs, and at the end of extraction, generate more toxic sludge.”

    To the team, uranium seemed a particularly attractive target. Field testing from the U.S. Geological Survey and the Environmental Protection Agency (EPA) has revealed unhealthy levels of uranium moving into reservoirs and aquifers from natural rock sources in the northeastern United States, from ponds and pits storing old nuclear weapons and fuel in places like Hanford, Washington, and from mining activities located in many western states. This kind of contamination is prevalent in many other nations as well. An alarming number of these sites show uranium concentrations close to or above the EPA’s recommended ceiling of 30 parts per billion (ppb) — a level linked to kidney damage, cancer risk, and neurobehavioral changes in humans.

    The critical challenge lay in finding a practical remediation process exclusively sensitive to uranium, capable of extracting it from solution without producing toxic residues. And while earlier research showed that electrically charged carbon fiber could filter uranium from water, the results were partial and imprecise.

    Wang managed to crack these problems, drawing on her investigation of the behavior of graphene foam used for lithium-sulfur batteries. “The physical performance of this foam was unique because of its ability to attract certain chemical species to its surface,” she says. “I thought the ligands in graphene foam would work well with uranium.”

    Simple, efficient, and clean

    The team set to work transforming graphene foam into the equivalent of a uranium magnet. They learned that by sending an electric charge through the foam, splitting water and releasing hydrogen, they could increase the local pH and induce a chemical change that pulled uranium ions out of solution. The researchers found that the uranium would graft itself onto the foam’s surface, where it formed a never-before-seen crystalline uranium hydroxide. On reversal of the electric charge, the mineral, which resembles fish scales, slipped easily off the foam.

    It took hundreds of tries to get the chemical composition and electrolysis just right. “We kept changing the functional chemical groups to get them to work correctly,” says Helal. “And the foam was initially quite fragile, tending to break into pieces, so we needed to make it stronger and more durable,” says Wang.

    This uranium filtration process is simple, efficient, and clean, according to Li: “Each time it’s used, our foam can capture four times its own weight of uranium, and we can achieve an extraction capacity of 4,000 mg per gram, which is a major improvement over other methods,” he says. “We’ve also made a major breakthrough in reusability, because the foam can go through seven cycles without losing its extraction efficiency.” The graphene foam functions as well in seawater, where it reduces uranium concentrations from 3 parts per million to 19.9 ppb, showing that other ions in the brine do not interfere with filtration.
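
    Those figures support some back-of-the-envelope arithmetic. In the sketch below, the capacity, reuse count, and seawater concentrations are taken from the article as reported; the 1,000-liter batch volume is a hypothetical example.

    ```python
    # Back-of-the-envelope arithmetic from the reported figures. The capacity,
    # reuse count, and seawater concentrations come from the article; the
    # 1,000-liter batch volume is a hypothetical example.

    capacity_mg_per_g = 4_000   # reported uranium uptake per gram of foam
    reuse_cycles = 7            # reported reuses without loss of efficiency

    volume_L = 1_000            # hypothetical batch of contaminated water
    c_in_ppb = 3_000            # seawater test: 3 parts per million
    c_out_ppb = 19.9            # residual concentration after treatment

    # For dilute aqueous solutions, 1 ppb is roughly 1 microgram per liter.
    removed_mg = (c_in_ppb - c_out_ppb) * volume_L / 1_000
    foam_g = removed_mg / capacity_mg_per_g

    print(f"Uranium removed: ~{removed_mg:,.0f} mg")
    print(f"Foam required: ~{foam_g:.2f} g, reusable for ~{reuse_cycles} batches")
    ```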

    The team believes its low-cost, effective device could become a new kind of home water filter, fitting on faucets like those of commercial brands. “Some of these filters already have activated carbon, so maybe we could modify these, add low-voltage electricity to filter uranium,” says Li.

    “The uranium extraction this device achieves is very impressive when compared to existing methods,” says Ho Jin Ryu, associate professor of nuclear and quantum engineering at the Korea Advanced Institute of Science and Technology. Ryu, who was not involved in the research, believes that the demonstration of graphene foam reusability is a “significant advance,” and that “the technology of local pH control to enhance uranium deposition will be impactful because the scientific principle can be applied more generally to heavy metal extraction from polluted water.”

    The researchers have already begun investigating broader applications of their method. “There is a science to this, so we can modify our filters to be selective for other heavy metals such as lead, mercury, and cadmium,” says Li. He notes that radium is another significant danger for locales in the United States and elsewhere that lack resources for reliable drinking water infrastructure.

    “In the future, instead of a passive water filter, we could be using a smart filter powered by clean electricity that turns on electrolytic action, which could extract multiple toxic metals, tell you when to regenerate the filter, and give you quality assurance about the water you’re drinking.”

  • Vapor-collection technology saves water while clearing the air

    About two-fifths of all the water that gets withdrawn from lakes, rivers, and wells in the U.S. is used not for agriculture, drinking, or sanitation, but to cool the power plants that provide electricity from fossil fuels or nuclear power. Over 65 percent of these plants use evaporative cooling, leading to huge white plumes that billow from their cooling towers, which can be a nuisance and, in some cases, even contribute to dangerous driving conditions.

    Now, a small company based on technology recently developed at MIT by the Varanasi Research Group is hoping to reduce both the water needs at these plants and the resultant plumes — and to potentially help alleviate water shortages in areas where power plants put pressure on local water systems.

    The technology is surprisingly simple in principle, but developing it to the point where it can now be tested at full scale on industrial plants was a more complex proposition. That required the real-world experience that the company’s founders gained from installing prototype systems, first on MIT’s natural-gas-powered cogeneration plant and then on MIT’s nuclear research reactor.

    In these demanding tests, which involved exposure to not only the heat and vibrations of a working industrial plant but also the rigors of New England winters, the system proved its effectiveness at both eliminating the vapor plume and recapturing water. And, it purified the water in the process, so that it was 100 times cleaner than the incoming cooling water. The system is now being prepared for full-scale tests in a commercial power plant and in a chemical processing plant.

    “Campus as a living laboratory”

    The technology was originally envisioned by professor of mechanical engineering Kripa Varanasi to develop efficient water-recovery systems by capturing water droplets from both natural fog and plumes from power plant cooling towers. The project began as part of doctoral thesis research of Maher Damak PhD ’18, with funding from the MIT Tata Center for Technology and Design, to improve the efficiency of fog-harvesting systems like the ones used in some arid coastal regions as a source of potable water. Those systems, which generally consist of plastic or metal mesh hung vertically in the path of fogbanks, are extremely inefficient, capturing only about 1 to 3 percent of the water droplets that pass through them.

    Varanasi and Damak found that vapor collection could be made much more efficient by first zapping the tiny droplets of water with a beam of electrically charged particles, or ions, to give each droplet a slight electric charge. Then, the stream of droplets passes through a wire mesh, like a window screen, that has an opposite electrical charge. This causes the droplets to be strongly attracted to the mesh, where they fall away due to gravity and can be collected in trays placed below the mesh.

    Lab tests showed the concept worked, and the researchers, joined by Karim Khalil PhD ’18, won the MIT $100K Entrepreneurship Competition in 2018 for the basic concept. The nascent company, which they called Infinite Cooling, with Damak as CEO, Khalil as CTO, and Varanasi as chairperson, immediately went to work setting up a test installation on one of the cooling towers of MIT’s natural-gas-powered Central Utility Plant, with funding from the MIT Office of Sustainability. After experimenting with various configurations, they were able to show that the system could indeed eliminate the plume and produce water of high purity.

    Professor Jacopo Buongiorno in the Department of Nuclear Science and Engineering immediately spotted a good opportunity for collaboration, offering the use of MIT’s Nuclear Reactor Laboratory research facility for further testing of the system with the help of NRL engineer Ed Block. With its 24/7 operation and its higher-temperature vapor emissions, the plant would provide a more stringent real-world test of the system, as well as proving its effectiveness in an actual operating reactor licensed by the Nuclear Regulatory Commission, an important step in “de-risking” the technology so that electric utilities could feel confident in adopting the system.

    After the system was installed above one of the plant’s four cooling towers, testing showed that the water being collected was more than 100 times cleaner than the feedwater coming into the cooling system. It also proved that the installation — which, unlike the earlier version, had its mesh screens mounted vertically, parallel to the vapor stream — had no effect at all on the operation of the plant. Video of the tests dramatically illustrates how as soon as the power is switched on to the collecting mesh, the white plume of vapor immediately disappears completely.

    The high temperature and volume of the vapor plume from the reactor’s cooling towers represented “kind of a worst-case scenario in terms of plumes,” Damak says, “so if we can capture that, we can basically capture anything.”

    Working with MIT’s Nuclear Reactor Laboratory, Varanasi says, “has been quite an important step because it helped us to test it at scale. … It really both validated the water quality and the performance of the system.” The process, he says, “shows the importance of using the campus as a living laboratory. It allows us to do these kinds of experiments at scale, and also showed the ability to sustainably reduce the water footprint of the campus.”

    Far-reaching benefits

    Power plant plumes are often considered an eyesore and can lead to local opposition to new power plants because of the potential for obscured views, and even potential traffic hazards when the obscuring plumes blow across roadways. “The ability to eliminate the plumes could be an important benefit, allowing plants to be sited in locations that might otherwise be restricted,” Buongiorno says. At the same time, the system could eliminate a significant amount of water used by the plants and then lost to the sky, potentially alleviating pressure on local water systems, which could be especially helpful in arid regions.

    The system is essentially a distillation process, and the pure water it produces could go into power plant boilers — which are separate from the cooling system — that require high-purity water. That might reduce the need for both fresh water and purification systems for the boilers.

    What’s more, in many arid coastal areas power plants are cooled directly with seawater. This system would essentially add a water desalination capability to the plant, at a fraction of the cost of building a new standalone desalination plant, and at an even smaller fraction of its operating costs since the heat would essentially be provided for free.

    Contamination of water is typically measured by testing its electrical conductivity, which increases with the amount of salts and other contaminants it contains. Water used in power plant cooling systems typically measures 3,000 microsiemens per centimeter, Khalil explains, while the water supply in the City of Cambridge is typically around 500 or 600 microsiemens per centimeter. The water captured by this system, he says, typically measures below 50 microsiemens per centimeter.

    Thanks to the validation provided by the testing on MIT’s plants, the company has now been able to secure arrangements for its first two installations on operating commercial plants, which should begin later this year. One is a 900-megawatt power plant where the system’s clean water production will be a major advantage, and the other is at a chemical manufacturing plant in the Midwest.

    In many locations power plants have to pay for the water they use for cooling, Varanasi says, and the new system is expected to reduce the need for water by up to 20 percent. For a typical power plant, that alone could account for about a million dollars saved in water costs per year, he says.
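
    Those two numbers imply a baseline worth spelling out: if a 20 percent cut in water purchases saves about $1 million a year, the plant’s implied annual water bill is around $5 million. The sketch below just makes that arithmetic explicit.

    ```python
    # Implied arithmetic from the figures above. Both inputs are the
    # article's reported numbers; the result follows directly.
    savings_fraction = 0.20          # up to 20 percent less water needed
    annual_savings_usd = 1_000_000   # ~$1 million saved per year, typical plant

    implied_annual_water_bill = annual_savings_usd / savings_fraction
    print(f"Implied annual cooling-water cost: ${implied_annual_water_bill:,.0f}")
    ```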

    “Innovation has been a hallmark of the U.S. commercial industry for more than six decades,” says Maria G. Korsnick, president and CEO of the Nuclear Energy Institute, who was not involved in the research. “As the changing climate impacts every aspect of life, including global water supplies, companies across the supply chain are innovating for solutions. The testing of this innovative technology at MIT provides a valuable basis for its consideration in commercial applications.”

  • Inaugural fund supports early-stage collaborations between MIT and Jordan

    MIT International Science and Technology Initiatives (MISTI), together with the Abdul Hameed Shoman Foundation (AHSF), the cultural and social responsibility arm of the Arab Bank, recently created a new initiative to support collaboration with the Middle East. The MIT-Jordan Abdul Hameed Shoman Foundation Seed Fund is providing awardees with financial grants up to $30,000 to cover travel, meeting, and workshop expenses, including in-person visits to build cultural and scientific connections between MIT and Jordan. MISTI and AHSF recently celebrated the first round of awardees in a virtual ceremony held in Amman and the United States.

    The new grant is part of the Global Seed Funds (GSF), MISTI’s annual grant program that enables participating teams to collaborate with international peers, either at MIT or abroad, to develop and launch joint research projects. Many of the projects funded lead to additional grant awards and the development of valuable long-term relationships between international researchers and MIT faculty and students.

    Since MIT’s first major collaboration in the Middle East in the 1970s, the Institute has deepened its connection and commitment to the region, expanding to create the MIT-Arab World program. The MIT-Jordan Abdul Hameed Shoman Foundation Seed Fund enables the MIT-Arab World program to move forward on its key objectives: build critical cultural and scientific connections between MIT and the Arab world; develop a cadre of students who have a deep understanding of the Middle East; and bring tangible value to the partners in the region.

    Valentina Qussisiya, CEO of the foundation, shared the importance of collaboration between research institutes to improve and advance scientific research. She highlighted AHSF’s role in supporting science and researchers since 1982, emphasizing that the partnership with MIT through the MISTI program is part of the foundation’s commitment to that role in Jordan, and expressing hope for future collaborations and for the fund’s impact on science in Jordan.

    The new fund, open to both Jordanian and MIT faculty, is available to those pursuing research in the following fields: environmental engineering; water resource management; lean and modern technologies; automation; nanotechnology; entrepreneurship; nuclear engineering; materials engineering; energy and thermal engineering; biomedical engineering, prostheses, computational neuroscience, and technology; social and management sciences; urban studies and planning; science, technology, and society; innovation in education; Arabic language automation; and food security and sustainable agriculture.

    Philip S. Khoury, faculty director of MISTI’s MIT-Arab World program and Ford International Professor of History and associate provost at MIT, explained that the winning projects all deal with critical issues that will benefit both MIT and Jordan, both on- and off-campus. “Beyond the actual faculty collaboration, these projects will bring much value to the hands-on education of MIT and Jordanian students and their capacity to get to know one another as future leaders in science and technology,” he says.

    This year, the MIT-Jordan Abdul Hameed Shoman Foundation Seed Fund received numerous high-quality proposals. Applications were reviewed by MIT and Jordanian faculty and selected by a committee of MIT faculty. There were six winning projects in the inaugural round:

    Low-Cost Renewable-Powered Electrodialysis Desalination and Drip Irrigation: Amos Winter (MIT principal investigator) and Samer Talozi (international collaborator)

    iPSC and CRISPR Gene Editing to Study Rare Diseases: Ernest Fraenkel (MIT principal investigator) and Nidaa Ababneh (international collaborator)

    Use of Distributed Low-Cost Sensor Networks for Air Quality Monitoring in Amman: Jesse Kroll (MIT principal investigator) and Tareq Hussein (international collaborator)

    Radiation Effects on Medical Devices Made by 3D Printing: Ju Li (MIT principal investigator) and Belal Gharaibeh (international collaborator)

    Superprotonic Conductivity in Metal-Organic Frameworks for Proton-Exchange Membrane Fuel Cells: Mircea Dinca (MIT principal investigator) and Kyle Cordova (international collaborator)

    Mapping Urban Air Quality Using Mobile Low-cost Sensors and Geospatial Techniques: Sarah Williams (MIT principal investigator) and Khaled Hazaymeh (international collaborator)

    The goal of these funded projects is for researchers and their students to form meaningful professional partnerships across cultures and leave a lasting impact upon the scientific communities in Jordan and at MIT.

    “[The fund will] enhance the future career prospects of emerging scholars from both countries,” said awardee Professor Kyle Cordova, executive director for scientific research at Royal Scientific Society and assistant to Her Royal Highness Princess Sumaya bint El Hassan for scientific affairs. “Our young scholars will gain a unique perspective of the influence of different cultures on scientific investigation that will help them to function effectively in a multidisciplinary and multicultural environment.”

  • A new approach to preventing human-induced earthquakes

    When humans pump large volumes of fluid into the ground, they can set off potentially damaging earthquakes, depending on the underlying geology. This has been the case in certain oil- and gas-producing regions, where wastewater, often mixed with oil, is disposed of by injecting it back into the ground — a process that has triggered sizable seismic events in recent years.

    Now MIT researchers, working with an interdisciplinary team of scientists from industry and academia, have developed a method to manage such human-induced seismicity, and have demonstrated that the technique successfully reduced the number of earthquakes occurring in an active oil field.

    Their results, appearing today in Nature, could help mitigate earthquakes caused by the oil and gas industry, not just from the injection of wastewater produced with oil, but also from wastewater generated by hydraulic fracturing, or “fracking.” The team’s approach could also help prevent quakes from other human activities, such as the filling of water reservoirs and aquifers, and the sequestration of carbon dioxide in deep geologic formations.

    “Triggered seismicity is a problem that goes way beyond producing oil,” says study lead author Bradford Hager, the Cecil and Ida Green Professor of Earth Sciences in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “This is a huge problem for society that will have to be confronted if we are to safely inject carbon dioxide into the subsurface. We demonstrated the kind of study that will be necessary for doing this.”

    The study’s co-authors include Ruben Juanes, professor of civil and environmental engineering at MIT, and collaborators from the University of California at Riverside, the University of Texas at Austin, Harvard University, and Eni, a multinational oil and gas company based in Italy.

    Safe injections

    Both natural and human-induced earthquakes occur along geologic faults, or fractures between two blocks of rock in the Earth’s crust. In stable periods, the rocks on either side of a fault are held in place by the pressures generated by surrounding rocks. But when a large volume of fluid is suddenly injected at high rates, it can upset a fault’s fluid stress balance. In some cases, this sudden injection can lubricate a fault and cause rocks on either side to slip and trigger an earthquake.

    The most common source of such fluid injections is from the oil and gas industry’s disposal of wastewater that is brought up along with oil. Field operators dispose of this water through injection wells that continuously pump the water back into the ground at high pressures.

    “There’s a lot of water produced with the oil, and that water is injected into the ground, which has caused a large number of quakes,” Hager notes. “So, for a while, oil-producing regions in Oklahoma had more magnitude 3 quakes than California, because of all this wastewater that was being injected.”

    In recent years, a similar problem arose in southern Italy, where injection wells on oil fields operated by Eni triggered microseisms in an area where large naturally occurring earthquakes had previously occurred. The company, looking for ways to address the problem, sought consultation from Hager and Juanes, both leading experts in seismicity and subsurface flows.

    “This was an opportunity for us to get access to high-quality seismic data about the subsurface, and learn how to do these injections safely,” Juanes says.

    Seismic blueprint

    The team made use of detailed information, accumulated by the oil company over years of operation in the Val d’Agri oil field, a region of southern Italy that lies in a tectonically active basin. The data included information about the region’s earthquake record, dating back to the 1600s, as well as the structure of rocks and faults, and the state of the subsurface corresponding to the various injection rates of each well.

    This video shows the change in stress on the geologic faults of the Val d’Agri field from 2001 to 2019, as predicted by a new MIT-derived model. Video credit: A. Plesch (Harvard University)

    This video shows small earthquakes occurring on the Costa Molina fault within the Val d’Agri field from 2004 to 2016. Each event is shown for two years fading from an initial bright color to the final dark color. Video credit: A. Plesch (Harvard University)

    The researchers integrated these data into a coupled subsurface flow and geomechanical model, which predicts how the stresses and strains of underground structures evolve as the volume of pore fluid, such as from the injection of water, changes. They connected this model to an earthquake mechanics model in order to translate the changes in underground stress and fluid pressure into a likelihood of triggering earthquakes. They then quantified the rate of earthquakes associated with various rates of water injection, and identified scenarios that were unlikely to trigger large quakes.

    When they ran the models using data from 1993 through 2016, the predictions of seismic activity matched with the earthquake record during this period, validating their approach. They then ran the models forward in time, through the year 2025, to predict the region’s seismic response to three different injection rates: 2,000, 2,500, and 3,000 cubic meters per day. The simulations showed that large earthquakes could be avoided if operators kept injection rates at 2,000 cubic meters per day — a flow rate comparable to a small public fire hydrant.
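
    The end product of such modeling is, in effect, a decision rule. The sketch below illustrates its general shape: pick the highest candidate rate whose predicted seismicity stays under a tolerance. The candidate rates match those in the article, but the modeled quake rates and the tolerance are invented placeholders, not the study’s outputs.

    ```python
    # Illustrative decision rule in the spirit of the study: choose the highest
    # injection rate whose modeled seismicity stays below a tolerance.
    # The candidate rates are those simulated in the study; the quake rates
    # and tolerance below are invented placeholders, not the study's outputs.

    candidate_rates = [2_000, 2_500, 3_000]   # m^3 of water per day

    modeled_quakes_per_year = {               # hypothetical model output
        2_000: 0.5,
        2_500: 10.0,
        3_000: 45.0,
    }

    tolerance = 1.0                           # acceptable events/year (hypothetical)

    safe_rates = [r for r in candidate_rates
                  if modeled_quakes_per_year[r] <= tolerance]
    recommended = max(safe_rates) if safe_rates else None
    print(f"Recommended injection rate: {recommended} m^3/day")
    ```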

    Eni field operators implemented the team’s recommended rate at the oil field’s single water injection well over a 30-month period between January 2017 and June 2019. In this time, the team observed only a few tiny seismic events, which coincided with brief periods when operators went above the recommended injection rate.

    “The seismicity in the region has been very low in these two-and-a-half years, with around four quakes of 0.5 magnitude, as opposed to hundreds of quakes, of up to 3 magnitude, that were happening between 2006 and 2016,” Hager says. 

    The results demonstrate that operators can successfully manage earthquakes by adjusting injection rates, based on the underlying geology. Juanes says the team’s modeling approach may help to prevent earthquakes related to other processes, such as the building of water reservoirs and the sequestration of carbon dioxide — as long as there is detailed information about a region’s subsurface.

    “A lot of effort needs to go into understanding the geologic setting,” says Juanes, who notes that, if carbon sequestration were carried out on depleted oil fields, “such reservoirs could have this type of history, seismic information, and geologic interpretation that you could use to build similar models for carbon sequestration. We show it’s at least possible to manage seismicity in an operational setting. And we offer a blueprint for how to do it.”

    This research was supported, in part, by Eni.

  • What will happen to sediment plumes associated with deep-sea mining?

    In certain parts of the deep ocean, scattered across the seafloor, lie baseball-sized rocks layered with minerals accumulated over millions of years. A region of the central Pacific, called the Clarion Clipperton Fracture Zone (CCFZ), is estimated to contain vast reserves of these rocks, known as “polymetallic nodules,” that are rich in nickel and cobalt — minerals that are commonly mined on land for the production of lithium-ion batteries in electric vehicles, laptops, and mobile phones.

    As demand for these batteries rises, efforts are moving forward to mine the ocean for these mineral-rich nodules. Such deep-sea-mining schemes propose sending down tractor-sized vehicles to vacuum up nodules and send them to the surface, where a ship would clean them and discharge any unwanted sediment back into the ocean. But the impacts of deep-sea mining — such as the effect of discharged sediment on marine ecosystems and how these impacts compare to traditional land-based mining — are currently unknown.

    Now oceanographers at MIT, the Scripps Institution of Oceanography, and elsewhere have carried out an experiment at sea for the first time to study the turbulent sediment plume that mining vessels would potentially release back into the ocean. Based on their observations, they developed a model that makes realistic predictions of how a sediment plume generated by mining operations would be transported through the ocean.

    The model predicts the size, concentration, and evolution of sediment plumes under various marine and mining conditions. These predictions, the researchers say, can now be used by biologists and environmental regulators to gauge whether and to what extent such plumes would impact surrounding sea life.

    “There is a lot of speculation about [deep-sea-mining’s] environmental impact,” says Thomas Peacock, professor of mechanical engineering at MIT. “Our study is the first of its kind on these midwater plumes, and can be a major contributor to international discussion and the development of regulations over the next two years.”

    The team’s study appears today in Communications Earth & Environment.

    Peacock’s co-authors at MIT include lead author Carlos Muñoz-Royo, Raphael Ouillon, Chinmay Kulkarni, Patrick Haley, Chris Mirabito, Rohit Supekar, Andrew Rzeznik, Eric Adams, Cindy Wang, and Pierre Lermusiaux, along with collaborators at Scripps, the U.S. Geological Survey, and researchers in Belgium and South Korea.


    Out to sea

    Current deep-sea-mining proposals are expected to generate two types of sediment plumes in the ocean: “collector plumes” that vehicles generate on the seafloor as they drive around collecting nodules 4,500 meters below the surface; and possibly “midwater plumes” that are discharged through pipes that descend 1,000 meters or more into the ocean’s aphotic zone, where sunlight rarely penetrates.

    In their new study, Peacock and his colleagues focused on the midwater plume and how the sediment would disperse once discharged from a pipe.

    “The science of the plume dynamics for this scenario is well-founded, and our goal was to clearly establish the dynamic regime for such plumes to properly inform discussions,” says Peacock, who is the director of MIT’s Environmental Dynamics Laboratory.

    To pin down these dynamics, the team went out to sea. In 2018, the researchers boarded the research vessel Sally Ride and set sail 50 kilometers off the coast of Southern California. They brought with them equipment designed to discharge sediment 60 meters below the ocean’s surface.  

    “Using foundational scientific principles from fluid dynamics, we designed the system so that it fully reproduced a commercial-scale plume, without having to go down to 1,000 meters or sail out several days to the middle of the CCFZ,” Peacock says.

    Over one week the team ran a total of six plume experiments, using novel sensor systems such as a Phased Array Doppler Sonar (PADS) and an epsilometer developed by Scripps scientists to monitor where the plumes traveled and how they evolved in shape and concentration. The collected data revealed that the sediment, when initially pumped out of a pipe, was a highly turbulent cloud of suspended particles that mixed rapidly with the surrounding ocean water.

    “There was speculation this sediment would form large aggregates in the plume that would settle relatively quickly to the deep ocean,” Peacock says. “But we found the discharge is so turbulent that it breaks the sediment up into its finest constituent pieces, and thereafter it becomes dilute so quickly that the sediment then doesn’t have a chance to stick together.”

    Dilution

    The team had previously developed a model to predict the dynamics of a plume that would be discharged into the ocean. When they fed the experiment’s initial conditions into the model, it produced the same behavior that the team observed at sea, proving the model could accurately predict plume dynamics within the vicinity of the discharge.

    The researchers used these results to provide the correct input for simulations of ocean dynamics to see how far currents would carry the initially released plume.

    “In a commercial operation, the ship is always discharging new sediment. But at the same time the background turbulence of the ocean is always mixing things. So you reach a balance. There’s a natural dilution process that occurs in the ocean that sets the scale of these plumes,” Peacock says. “What is key to determining the extent of the plumes is the strength of the ocean turbulence, the amount of sediment that gets discharged, and the environmental threshold level at which there is impact.”

    Based on their findings, the researchers have developed formulae to calculate the scale of a plume depending on a given environmental threshold. For instance, if regulators determine that a certain concentration of sediments could be detrimental to surrounding sea life, the formula can be used to calculate how far a plume above that concentration would extend, and what volume of ocean water would be impacted over the course of a 20-year nodule mining operation.
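
    The study’s formulae are not reproduced here, but their flavor can be conveyed with a generic dilution sketch: assume ocean turbulence dilutes the discharged concentration exponentially with distance, then solve for where it crosses the threshold. Every number below is a hypothetical placeholder, and the exponential form is our simplification, not the team’s model.

    ```python
    import math

    # Generic dilution sketch, not the study's formulae: assume the discharged
    # sediment concentration decays exponentially with distance as background
    # turbulence dilutes it, and solve c(x) = c0 * exp(-x / L_mix) for the
    # distance where c(x) falls to a given environmental threshold.
    # All values are hypothetical placeholders.

    c0 = 100.0         # concentration at the discharge point, mg/L
    L_mix = 2.0        # e-folding dilution length scale, km
    threshold = 0.1    # regulatory threshold concentration, mg/L

    extent_km = L_mix * math.log(c0 / threshold)
    print(f"Plume exceeds the threshold out to ~{extent_km:.1f} km")
    ```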

    “At the heart of the environmental question surrounding deep-sea mining is the extent of sediment plumes,” Peacock says. “It’s a multiscale problem, from micron-scale sediments, to turbulent flows, to ocean currents over thousands of kilometers. It’s a big jigsaw puzzle, and we are uniquely equipped to work on that problem and provide answers founded in science and data.”

    The team is now working on collector plumes, having recently returned from several weeks at sea to perform the first environmental monitoring of a nodule collector vehicle in the deep ocean in over 40 years.

    This research was supported in part by the MIT Environmental Solutions Initiative, the UC Ship Time Program, the MIT Policy Lab, the 11th Hour Project of the Schmidt Family Foundation, the Benioff Ocean Initiative, and Fundación Bancaria “la Caixa.”