More stories

  • Surface-based sonar system could rapidly map the ocean floor at high resolution

    On June 18, 2023, the Titan submersible was about an hour-and-a-half into its two-hour descent to the Titanic wreckage at the bottom of the Atlantic Ocean when it lost contact with its support ship. The loss of communication set off a frantic search for the tourist submersible and its five passengers, located about two miles below the ocean’s surface.

    Deep-ocean search and recovery is one of the many missions of military services like the U.S. Coast Guard Office of Search and Rescue and the U.S. Navy Supervisor of Salvage and Diving. For this mission, the longest delays come from transporting search-and-rescue equipment via ship to the area of interest and comprehensively surveying that area. A search operation on the scale of that for Titan — which was conducted 420 nautical miles from the nearest port and covered 13,000 square kilometers, an area roughly the size of Connecticut — could take weeks to complete. The search area for Titan is considered relatively small, focused on the immediate vicinity of the Titanic. When the area is less well known, operations can take months. (A remotely operated underwater vehicle deployed by a Canadian vessel ended up finding the debris field of Titan on the seafloor, four days after the submersible had gone missing.)

    A research team from MIT Lincoln Laboratory and the MIT Department of Mechanical Engineering’s Ocean Science and Engineering lab is developing a surface-based sonar system that could shorten the timeline for small- and large-scale search operations to days. Called the Autonomous Sparse-Aperture Multibeam Echo Sounder, the system scans at surface-ship rates while providing sufficient resolution to find objects and features in the deep ocean, without the time and expense of deploying underwater vehicles. The echo sounder — which forms a large sonar array from a small set of autonomous surface vehicles (ASVs) that can be deployed via aircraft into the ocean — holds the potential to map the seabed at 50 times the coverage rate of an underwater vehicle and 100 times the resolution of a surface vessel.
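    To put those rates in perspective, a back-of-the-envelope calculation using the coverage figures cited later in this story (8 square kilometers per hour for a high-resolution underwater vehicle, and more than 50 times that for a surface vessel) shows why a Titan-sized search box takes weeks with one and days with the other; the sketch below is illustrative arithmetic, not an operational estimate.

    ```python
    # Rough survey-time arithmetic for a 13,000 km^2 search box.
    # Rates are the coverage figures quoted in this article; real operations
    # also depend on transit time, weather, and sensor overlap.
    search_area_km2 = 13_000

    auv_rate = 8.0                 # km^2/h, high-resolution AUV mapping
    surface_rate = 50 * auv_rate   # km^2/h, "more than 50 times that rate"

    for name, rate in [("AUV", auv_rate), ("surface-ship rates", surface_rate)]:
        hours = search_area_km2 / rate
        print(f"{name}: {hours:,.0f} hours, about {hours / 24:.1f} days")
    # AUV: 1,625 hours (~68 days); surface-ship rates: 32 hours (~1.4 days)
    ```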

    Video: Autonomous Sparse-Aperture Multibeam Echo Sounder (MIT Lincoln Laboratory)

    “Our array provides the best of both worlds: the high resolution of underwater vehicles and the high coverage rate of surface ships,” says co–principal investigator Andrew March, assistant leader of the laboratory’s Advanced Undersea Systems and Technology Group. “Though large surface-based sonar systems at low frequency have the potential to determine the materials and profiles of the seabed, they typically do so at the expense of resolution, particularly with increasing ocean depth. Our array can likely determine this information, too, but at significantly enhanced resolution in the deep ocean.”

    Underwater unknown

    Oceans cover 71 percent of Earth’s surface, yet more than 80 percent of this underwater realm remains undiscovered and unexplored. Humans know more about the surface of other planets and the moon than about the bottom of our oceans. High-resolution seabed maps would be useful not only for finding missing objects like ships or aircraft, but also for supporting a host of other scientific applications: understanding Earth’s geology, improving forecasting of ocean currents and corresponding weather and climate impacts, uncovering archaeological sites, monitoring marine ecosystems and habitats, and identifying locations containing natural resources such as mineral and oil deposits.

    Scientists and governments worldwide recognize the importance of creating a high-resolution global map of the seafloor; the problem is that no existing technology can achieve meter-scale resolution from the ocean surface. The average depth of our oceans is approximately 3,700 meters. However, today’s technologies capable of finding human-made objects on the seabed or identifying person-sized natural features — these technologies include sonar, lidar, cameras, and gravitational field mapping — have a maximum range of less than 1,000 meters through water.

    Ships with large sonar arrays mounted on their hulls map the deep ocean by emitting low-frequency sound waves that bounce off the seafloor and return as echoes to the surface. Operation at low frequencies is necessary because water readily absorbs high-frequency sound waves, especially with increasing depth; however, such operation yields low-resolution images, with each image pixel representing a football field in size. Resolution is also restricted because sonar arrays installed on large mapping ships already use all of the available hull space, capping the sonar beam’s aperture size. By contrast, sonars on autonomous underwater vehicles (AUVs) that operate at higher frequencies within a few hundred meters of the seafloor generate maps with each pixel representing one square meter or less, resulting in 10,000 times more pixels in that same football field–sized area. However, this higher resolution comes with trade-offs: AUVs are time-consuming and expensive to deploy in the deep ocean, limiting the amount of seafloor that can be mapped; they have a maximum range of about 1,000 meters before their high-frequency sound gets absorbed; and they move at slow speeds to conserve power. The area-coverage rate of AUVs performing high-resolution mapping is about 8 square kilometers per hour; surface vessels map the deep ocean at more than 50 times that rate.

    A solution surfaces

    The Autonomous Sparse-Aperture Multibeam Echo Sounder could offer a cost-effective approach to high-resolution, rapid mapping of the deep seafloor from the ocean’s surface.
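    The resolution claim comes down to diffraction: a sonar’s angular resolution scales roughly as wavelength divided by aperture size, and the seafloor footprint is that angle multiplied by depth. Here is a minimal sketch, with assumed values for frequency and aperture chosen only to illustrate the scaling, not the system’s actual design parameters:

    ```python
    # Diffraction-limited footprint of a sonar beam on the seafloor.
    # theta ~ lambda / D (radians); footprint ~ theta * depth.
    c = 1500.0          # speed of sound in seawater, m/s
    f = 12_000.0        # assumed low mapping frequency, Hz
    depth = 3_700.0     # average ocean depth, m
    lam = c / f         # wavelength, ~0.125 m

    for label, aperture_m in [("hull-mounted array", 8.0), ("sparse ASV array", 800.0)]:
        footprint = (lam / aperture_m) * depth
        print(f"{label}: ~{footprint:.1f} m pixels on the seafloor")
    # hull-mounted array: ~57.8 m pixels; sparse ASV array: ~0.6 m pixels
    ```

    Widening the aperture from a ship’s hull (meters) to a fleet-sized array (hundreds of meters) is what buys the roughly 100-fold resolution gain at the same low frequency.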
    A collaborative fleet of about 20 ASVs, each hosting a small sonar array, effectively forms a single sonar array 100 times the size of a large sonar array installed on a ship. The large aperture achieved by the array (hundreds of meters) produces a narrow beam, which enables sound to be precisely steered to generate high-resolution maps at low frequency. Because very few sonars are installed relative to the array’s overall size (i.e., a sparse aperture), the cost is tractable.

    However, this collaborative and sparse setup introduces some operational challenges. First, for coherent 3D imaging, the relative position of each ASV’s sonar subarray must be accurately tracked through dynamic ocean-induced motions. Second, because sonar elements are not placed directly next to each other without any gaps, the array suffers from a lower signal-to-noise ratio and is less able to reject noise coming from unintended or undesired directions. To mitigate these challenges, the team has been developing a low-cost precision-relative navigation system and leveraging acoustic signal processing tools and new ocean-field estimation algorithms. The MIT campus collaborators are developing algorithms for data processing and image formation, especially to estimate depth-integrated water-column parameters. These enabling technologies will help account for complex ocean physics, spanning physical properties like temperature, dynamic processes like currents and waves, and acoustic propagation factors like sound speed.

    Processing for all required control and calculations could be completed either remotely or onboard the ASVs. For example, ASVs deployed from a ship or flying boat could be controlled and guided remotely from land via a satellite link or from a nearby support ship (with direct communications or a satellite link), and left to map the seabed for weeks or months at a time until maintenance is needed. Sonar-return health checks and coarse seabed mapping would be conducted on board, while full, high-resolution reconstruction of the seabed would require a supercomputing infrastructure on land or on a support ship.

    “Deploying vehicles in an area and letting them map for extended periods of time without the need for a ship to return home to replenish supplies and rotate crews would significantly simplify logistics and operating costs,” says co–principal investigator Paul Ryu, a researcher in the Advanced Undersea Systems and Technology Group.

    Since beginning their research in 2018, the team has turned their concept into a prototype. Initially, the scientists built a scale model of a sparse-aperture sonar array and tested it in a water tank at the laboratory’s Autonomous Systems Development Facility. Then, they prototyped an ASV-sized sonar subarray and demonstrated its functionality in Gloucester, Massachusetts. In follow-on sea tests in Boston Harbor, they deployed an 8-meter array containing multiple subarrays equivalent to 25 ASVs locked together; with this array, they generated 3D reconstructions of the seafloor and a shipwreck. Most recently, the team fabricated, in collaboration with Woods Hole Oceanographic Institution, a first-generation, 12-foot-long, all-electric ASV prototype carrying a sonar array underneath. With this prototype, they conducted preliminary relative navigation testing in Woods Hole, Massachusetts, and Newport, Rhode Island.
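    The sparse-aperture noise challenge described above can be made concrete with a toy beam-pattern comparison: thinning a filled line array preserves the narrow main lobe, which is set by the overall aperture, but raises the sidelobes through which off-axis noise leaks in. A sketch with invented geometry and frequency, not the actual array design:

    ```python
    import numpy as np

    # Toy beam-pattern comparison: filled vs. randomly thinned line array.
    c, f = 1500.0, 3000.0                         # sound speed (m/s); assumed frequency (Hz)
    lam = c / f                                   # 0.5 m wavelength
    k = 2.0 * np.pi / lam
    angles = np.deg2rad(np.linspace(-20.0, 20.0, 8001))

    def pattern_db(x):
        # Far-field response of a broadside line array of omni elements at positions x.
        resp = np.exp(1j * k * np.outer(np.sin(angles), x)).sum(axis=1)
        return 20.0 * np.log10(np.abs(resp) / len(x) + 1e-12)

    filled = np.arange(0.0, 100.0, lam / 2.0)     # 100 m aperture, half-wavelength spacing
    sparse = np.sort(np.random.default_rng(0).choice(filled, len(filled) // 10, replace=False))

    for name, x in (("filled", filled), ("sparse", sparse)):
        p = pattern_db(x)
        i = len(angles) // 2
        while p[i] > -3.0:                        # walk out to the -3 dB edge of the main lobe
            i += 1
        sidelobes = p[np.abs(angles) > 3.0 * angles[i]]
        print(f"{name}: half main lobe {np.rad2deg(angles[i]):.2f} deg, "
              f"peak sidelobe {sidelobes.max():.1f} dB")
    # Both arrays share the narrow main lobe (same overall aperture), but the
    # thinned array's sidelobes sit far higher, letting in more off-axis noise.
    ```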
    Their full deep-ocean concept calls for approximately 20 such ASVs of a similar size, likely powered by wave or solar energy.

    This work was funded through Lincoln Laboratory’s internally administered R&D portfolio on autonomous systems. The team is now seeking external sponsorship to continue development of their ocean floor–mapping technology, which was recognized with a 2024 R&D 100 Award.

  • New climate chemistry model finds “non-negligible” impacts of potential hydrogen fuel leakage

    As the world looks for ways to stop climate change, much discussion focuses on using hydrogen instead of fossil fuels, which emit climate-warming greenhouse gases (GHGs) when they’re burned. The idea is appealing. Burning hydrogen doesn’t emit GHGs to the atmosphere, and hydrogen is well-suited for a variety of uses, notably as a replacement for natural gas in industrial processes, power generation, and home heating.

    But while burning hydrogen won’t emit GHGs, any hydrogen that’s leaked from pipelines or from storage or fueling facilities can indirectly cause climate change by affecting other compounds that are GHGs, including tropospheric ozone and methane, with methane impacts being the dominant effect. A much-cited 2022 modeling study analyzing hydrogen’s effects on chemical compounds in the atmosphere concluded that these climate impacts could be considerable. With funding from the MIT Energy Initiative’s Future Energy Systems Center, a team of MIT researchers took a more detailed look at the specific chemistry that poses the risks of using hydrogen as a fuel if it leaks.

    The researchers developed a model that tracks many more chemical reactions that may be affected by hydrogen and includes interactions among chemicals. Their open-access results, published Oct. 28 in Frontiers in Energy Research, showed that while the impact of leaked hydrogen on the climate wouldn’t be as large as the 2022 study predicted — it would be about a third of the impact of any natural gas that escapes today — leaked hydrogen will still affect the climate. Leak prevention should therefore be a top priority as hydrogen infrastructure is built, the researchers state.

    Hydrogen’s impact on the “detergent” that cleans our atmosphere

    Global three-dimensional climate-chemistry models using a large number of chemical reactions have also been used to evaluate hydrogen’s potential climate impacts, but their results vary from one model to another, motivating the MIT study to analyze the chemistry. Most studies of the climate effects of using hydrogen consider only the GHGs emitted during production of the hydrogen fuel; different production routes yield “blue hydrogen” or “green hydrogen,” labels that reflect the GHGs emitted in making it. But regardless of the process used to make the hydrogen, the fuel itself can threaten the climate. For widespread use, hydrogen will need to be transported, distributed, and stored — in short, there will be many opportunities for leakage. The question is: What happens to that leaked hydrogen when it reaches the atmosphere?

    The 2022 study predicting large climate impacts from leaked hydrogen was based on reactions between pairs of just four chemical compounds in the atmosphere. The results showed that the hydrogen would deplete a chemical species that atmospheric chemists call the “detergent of the atmosphere,” explains Candice Chen, a PhD candidate in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “It goes around zapping greenhouse gases, pollutants, all sorts of bad things in the atmosphere. So it’s cleaning our air.” Best of all, that detergent — the hydroxyl radical, abbreviated as OH — removes methane, an extremely potent GHG in the atmosphere. OH thus plays an important role in slowing the rate at which global temperatures rise.
    But any hydrogen leaked to the atmosphere would reduce the amount of OH available to clean up methane, so the concentration of methane would increase.

    However, chemical reactions among compounds in the atmosphere are notoriously complicated. While the 2022 study used a “four-equation model,” Chen and her colleagues — Susan Solomon, the Lee and Geraldine Martin Professor of Environmental Studies and Chemistry; and Kane Stone, a research scientist in EAPS — developed a model that includes 66 chemical reactions. Analyses using their 66-equation model showed that the four-equation system didn’t capture a critical feedback involving OH — a feedback that acts to protect the methane-removal process.

    Here’s how that feedback works: As the hydrogen decreases the concentration of OH, the cleanup of methane slows down, so the methane concentration increases. However, that methane undergoes chemical reactions that can produce new OH radicals. “So the methane that’s being produced can make more of the OH detergent,” says Chen. “There’s a small countering effect. Indirectly, the methane helps produce the thing that’s getting rid of it.” And, says Chen, that’s a key difference between their 66-equation model and the four-equation one: “The simple model uses a constant value for the production of OH, so it misses that key OH-production feedback.”

    To explore the importance of including that feedback effect, the MIT researchers performed the following analysis: They assumed that a single pulse of hydrogen was injected into the atmosphere and predicted the change in methane concentration over the next 100 years, first using the four-equation model and then using the 66-equation model. With the four-equation system, the additional methane concentration peaked at nearly 2 parts per billion (ppb); with the 66-equation system, it peaked at just over 1 ppb.

    Because the four-equation analysis assumes only that the injected hydrogen destroys OH, the methane concentration increases unchecked for the first 10 years or so. In contrast, the 66-equation analysis goes one step further: the methane concentration does increase, but as the system re-equilibrates, more OH forms and removes methane. By not accounting for that feedback, the four-equation analysis overestimates the peak increase in methane due to the hydrogen pulse by about 85 percent. Spread over time, the simple model doubles the amount of methane that forms in response to the hydrogen pulse.

    Chen cautions that the point of their work is not to present their result as “a solid estimate” of the impact of hydrogen. Their analysis is based on a simple “box” model that represents global average conditions and assumes that all the chemical species present are well mixed. The species can vary over time — that is, they can be formed and destroyed — but any species that are present are always perfectly mixed. As a result, a box model does not account for the impact of, say, wind on the distribution of species. “The point we’re trying to make is that you can go too simple,” says Chen. “If you’re going simpler than what we’re representing, you will get further from the right answer.” She goes on to note, “The utility of a relatively simple model like ours is that all of the knobs and levers are very clear. That means you can explore the system and see what affects a value of interest.”
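    The qualitative gap between the two models can be reproduced with a toy box model in a few lines. The sketch below uses invented, normalized rate constants and is a stand-in for the idea, not the authors’ 66-reaction chemistry: one run holds OH production constant, as in the four-equation system, while the other lets OH production rise with methane abundance, standing in for the OH regenerated by methane oxidation.

    ```python
    # Toy "box model" of the OH-CH4 feedback. All rate constants are invented
    # and normalized for illustration; this is NOT the study's 66-reaction model.
    def peak_methane_increase(feedback: bool, years: float = 100.0, dt: float = 0.01) -> float:
        ch4, oh, h2 = 1.0, 1.0, 1.0          # baseline CH4 and OH; injected H2 pulse
        k_h2, k_ch4 = 0.3, 0.25              # OH + H2 and OH + CH4 reaction rates
        s_ch4, p_oh, l_oh = 0.25, 0.55, 0.3  # CH4 source, OH production, OH loss
        peak = ch4
        for _ in range(int(years / dt)):
            r_h2 = k_h2 * oh * h2            # hydrogen consumes OH
            r_ch4 = k_ch4 * oh * ch4         # OH "detergent" removes methane
            # With feedback on, OH production rises with methane abundance,
            # standing in for OH regenerated by the methane-oxidation chain.
            prod = p_oh * (ch4 if feedback else 1.0)
            h2 += -r_h2 * dt
            ch4 += (s_ch4 - r_ch4) * dt
            oh += (prod - r_h2 - r_ch4 - l_oh * oh) * dt
            peak = max(peak, ch4)
        return peak - 1.0

    print(f"peak CH4 increase, constant OH production: {peak_methane_increase(False):.3f}")
    print(f"peak CH4 increase, with OH feedback:       {peak_methane_increase(True):.3f}")
    ```

    In this toy setting, the constant-production run peaks higher, mirroring the overshoot the researchers found in the four-equation system.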
    Leaked hydrogen versus leaked natural gas: A climate comparison

    Burning natural gas produces fewer GHG emissions than burning coal or oil; but as with hydrogen, any natural gas that’s leaked from wells, pipelines, and processing facilities can have climate impacts, negating some of the perceived benefits of using natural gas in place of other fossil fuels. After all, natural gas consists largely of methane, the highly potent GHG that’s cleaned up by the OH detergent. Given its potency, even small leaks of methane can have a large climate impact.

    So when thinking about replacing natural gas fuel — essentially methane — with hydrogen fuel, it’s important to consider how the climate impacts of the two fuels compare if and when they’re leaked. The usual way to compare the climate impacts of two chemicals is a measure called the global warming potential, or GWP. The GWP combines two measures: the radiative forcing of a gas — that is, its heat-trapping ability — and its lifetime in the atmosphere. Since the lifetimes of gases differ widely, the convention is to relate the GWP of each gas to the GWP of carbon dioxide. But hydrogen and methane leakage both cause increases in methane, and that methane decays according to its lifetime. Chen and her colleagues therefore realized that an unconventional procedure would work: they could compare the impacts of the two leaked gases directly. What they found was that the climate impact of hydrogen is about one-third that of methane, on a per-mass basis. So switching from natural gas to hydrogen would not only eliminate combustion emissions, but could also reduce the climate effects, depending on how much leaks.

    Key takeaways

    In summary, Chen highlights some of what she views as the key findings of the study. First on her list: “We show that a really simple four-equation system is not what should be used to project out the atmospheric response to more hydrogen leakages in the future.” The researchers believe that their 66-equation model is a good compromise for the number of chemical reactions to include. It generates estimates for the GWP of methane “pretty much in line with the lower end of the numbers that most other groups are getting using much more sophisticated climate chemistry models,” says Chen. And it’s sufficiently transparent to use in exploring various options for protecting the climate. Indeed, the MIT researchers plan to use their model to examine scenarios that involve replacing other fossil fuels with hydrogen to estimate the climate benefits of making the switch in coming decades.

    The study also demonstrates a valuable new way to compare the greenhouse effects of two gases. As long as their effects exist on similar time scales, a direct comparison is possible — and preferable to comparing each with carbon dioxide, which is extremely long-lived in the atmosphere. In this work, the direct comparison generates a simple look at the relative climate impacts of leaked hydrogen and leaked methane — valuable information to take into account when considering switching from natural gas to hydrogen.

    Finally, the researchers offer practical guidance for infrastructure development and use for both hydrogen and natural gas. Their analyses determine that hydrogen fuel itself has a “non-negligible” GWP, as does natural gas, which is mostly methane.
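    For reference, the GWP mentioned throughout is conventionally defined against carbon dioxide over a chosen time horizon H (this is the standard definition, stated here for context):

    ```latex
    % Global warming potential of gas x over a time horizon H, relative to CO2,
    % from the time-integrated radiative forcing RF of a unit pulse emission:
    \mathrm{GWP}_x(H) =
      \frac{\int_0^{H} \mathrm{RF}_x(t)\, \mathrm{d}t}
           {\int_0^{H} \mathrm{RF}_{\mathrm{CO}_2}(t)\, \mathrm{d}t}
    ```

    The direct comparison described above effectively swaps the carbon dioxide denominator for methane’s time-integrated forcing, which works here because the two leaked gases perturb the atmosphere on similar timescales.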
    Therefore, minimizing leakage of both fuels will be necessary to achieve net-zero carbon emissions by 2050, the goal set by both the European Commission and the U.S. Department of State. Their paper concludes, “If used nearly leak-free, hydrogen is an excellent option. Otherwise, hydrogen should only be a temporary step in the energy transition, or it must be used in tandem with carbon-removal steps [elsewhere] to counter its warming effects.”

  • In a unique research collaboration, students make the case for less e-waste

    Brought together as part of the Social and Ethical Responsibilities of Computing (SERC) initiative within the MIT Schwarzman College of Computing, a community of students known as SERC Scholars is collaborating to examine the most urgent problems humans face in the digital landscape.

    Each semester, students from all levels from across MIT are invited to join a different topical working group led by a SERC postdoctoral associate. Each group delves into a specific issue — such as surveillance or data ownership — culminating in a final project presented at the end of the term.

    Typically, students complete the program with hands-on experience conducting research in a new cross-disciplinary field. However, one group of undergraduate and graduate students recently had the unique opportunity to enhance their resumes by becoming published authors of a case study about the environmental and climate justice implications of the electronics hardware life cycle.

    Although it’s not uncommon for graduate students to co-author case studies, it’s unusual for undergraduates to earn this opportunity — and for their audience to be other undergraduates around the world.

    “Our team was insanely interdisciplinary,” says Anastasia Dunca, a junior studying computer science and one of the co-authors. “I joined the SERC Scholars Program because I liked the idea of being part of a cohort from across MIT working on a project that utilized all of our skillsets. It also helps [undergraduates] learn the ins and outs of computing ethics research.”

    Case study co-author Jasmin Liu, an MBA student in the MIT Sloan School of Management, sees the program as a platform to learn about the intersection of technology, society, and ethics: “I met team members spanning computer science, urban planning, and art/culture/technology. I was excited to work with a diverse team because I know complex problems must be approached with many different perspectives. Combining my background in humanities and business with the expertise of others allowed us to be more innovative and comprehensive.”

    Christopher Rabe, a former SERC postdoc who facilitated the group, says, “I let the students take the lead on identifying the topic and conducting the research.” His goal for the group was to challenge students across disciplines to develop a working definition of climate justice.

    From mining to e-waste

    The SERC Scholars’ case study, “From Mining to E-waste: The Environmental and Climate Justice Implications of the Electronics Hardware Life Cycle,” was published by the MIT Case Studies in Social and Ethical Responsibilities of Computing.

    The ongoing case studies series, which releases new issues twice a year on an open-source platform, is enabling undergraduate instructors worldwide to incorporate research-based education materials on computing ethics into their existing class syllabi.

    This particular case study broke down the electronics life cycle from mining to manufacturing, usage, and disposal. It offered an in-depth look at how this cycle promotes inequity in the Global South. Mining for the roughly 60 minerals that power everyday devices leads to illegal deforestation, compromising air quality in the Amazon, and triggers armed conflict in Congo. Manufacturing leads to proven health risks for both formal and informal workers, some of whom are child laborers.

    Life cycle assessment and circular economy are proposed as mechanisms for analyzing environmental and climate justice issues in the electronics life cycle.
    Rather than posing solutions, the case study offers readers entry points for further discussion and for assessing their own individual responsibility as producers of e-waste.

    Crufting and crafting a case study

    Dunca joined Rabe’s working group, intrigued by the invitation to conduct a rigorous literature review examining issues like data center resource and energy use, manufacturing waste, ethical issues with AI, and climate change. Rabe quickly realized that a common thread among all participants was an interest in understanding and reducing e-waste and its impact on the environment.

    “I came in with the idea of us co-authoring a case study,” Rabe says. However, the writing-intensive process was initially daunting to those students who were used to conducting applied research. Once Rabe created sub-groups with discrete tasks, the steps for researching, writing, and iterating a case study became more approachable.

    For Ellie Bultena, an undergraduate student studying linguistics and philosophy and a contributor to the study, that meant conducting field research on the loading dock of MIT’s Stata Center, where students and faculty go “crufting” through piles of clunky printers, broken computers, and used lab equipment discarded by the Institute’s labs, departments, and individual users.

    Although not a formally sanctioned activity on campus, “crufting” is the act of gleaning usable parts from these junk piles to be repurposed into new equipment or art. Bultena’s respondents, who opted to remain anonymous, said that MIT could do better when it comes to the amount of e-waste generated and suggested that formal strategies could be implemented to encourage community members to repair equipment more easily or recycle more formally.

    Rabe, now an education program director at the MIT Environmental Solutions Initiative, is hopeful that through the Zero-Carbon Campus Initiative, which commits MIT to eliminating all direct emissions by 2050, MIT will ultimately become a model for other higher education institutions.

    Although the group lacked the time and resources to travel to the communities in the Global South profiled in their case study, members leaned into exhaustive secondary research, collecting data on how some countries are irresponsibly dumping e-waste while others have developed alternative solutions that can be duplicated elsewhere and scaled.

    “We source materials, manufacture them, and then throw them away,” says Lelia Hampton, a PhD candidate in electrical engineering and computer science and another co-author. Hampton jumped at the opportunity to serve in a writing role, bringing together the sub-groups’ research findings. “I’d never written a case study, and it was exciting. Now I want to write 10 more.”

    The content directly informed Hampton’s dissertation research, which “looks at applying machine learning to climate justice issues such as urban heat islands.” She says that writing a case study that is accessible to general audiences upskilled her for the nonprofit organization she’s determined to start.
    “It’s going to provide communities with free resources and data needed to understand how they are impacted by climate change and begin to advocate against injustice,” Hampton explains.

    Dunca, Liu, Rabe, Bultena, and Hampton are joined on the case study by fellow authors Mrinalini Singha, a graduate student in the Art, Culture, and Technology program; Sungmoon Lim, a graduate student in urban studies and planning and EECS; Lauren Higgins, an undergraduate majoring in political science; and Madeline Schlegal, a Northeastern University co-op student.

    Taking the case study to classrooms around the world

    Although PhD candidates have contributed to previous case studies in the series, this publication is the first to be co-authored with MIT undergraduates. Like any other peer-reviewed journal, the SERC Scholars’ case study was anonymously reviewed by senior scholars drawn from various fields before publication.

    The series editor, David Kaiser, also served as one of SERC’s inaugural associate deans and helped shape the program. “The case studies, by design, are short, easy to read, and don’t take up lots of time,” Kaiser explains. “They are gateways for students to explore, and instructors can cover a topic that has likely already been on their mind.” This semester, Kaiser, the Germeshausen Professor of the History of Science and a professor of physics, is teaching STS.004 (Intersections: Science, Technology, and the World), an undergraduate introduction to the field of science, technology, and society. The last month of the semester has been dedicated wholly to SERC case studies, one of which is “From Mining to E-Waste.”

    Hampton was visibly moved to hear that the case study is being used not only at MIT but also by some of the 250,000 visitors to the SERC platform, many of whom are based in the Global South and directly impacted by the issues she and her cohort researched. “Many students are focused on climate, whether through computer science, data science, or mechanical engineering. I hope that this case study educates them on environmental and climate aspects of e-waste and computing.”

  • Enabling a circular economy in the built environment

    The amount of waste generated by the construction sector underscores an urgent need for embracing circularity — a sustainable model that aims to minimize waste and maximize material efficiency through recovery and reuse — in the built environment: 600 million tons of construction and demolition waste was produced in the United States alone in 2018, with 820 million tons reported in the European Union and an excess of 2 billion tons annually in China.

    This significant resource loss embedded in our current industrial ecosystem marks a linear economy that operates on a “take-make-dispose” model of construction; in contrast, the “make-use-reuse” approach of a circular economy offers an important opportunity to reduce environmental impacts.

    A team of MIT researchers has begun to assess what may be needed to spur widespread circular transition within the built environment in a new open-access study that aims to understand stakeholders’ current perceptions of circularity and quantify their willingness to pay.

    “This paper acts as an initial endeavor into understanding what the industry may be motivated by, and how integration of stakeholder motivations could lead to greater adoption,” says lead author Juliana Berglund-Brown, a PhD student in the Department of Architecture at MIT.

    Considering stakeholders’ perceptions

    Three different stakeholder groups from North America, Europe, and Asia — material suppliers, design and construction teams, and real estate developers — were surveyed by the research team, which also comprises Akrisht Pandey ’23; Fabio Duarte, associate director of the MIT Senseable City Lab; Raquel Ganitsky, fellow in the Sustainable Real Estate Development Action Program; Randolph Kirchain, co-director of the MIT Concrete Sustainability Hub; and Siqi Zheng, the STL Champion Professor of Urban and Real Estate Sustainability in the Department of Urban Studies and Planning.

    Despite growing awareness of reuse practice among construction industry stakeholders, circular practices have yet to be implemented at scale, attributable to the many factors that influence the intersection of construction needs with government regulations and the economic interests of real estate developers.

    The study notes that perceived barriers to circular adoption differ based on industry role: lack of both client interest and standardized structural assessment methods is the primary concern of design and construction teams, while the largest deterrents for material suppliers are logistics complexity and supply uncertainty. Real estate developers, on the other hand, are chiefly concerned with higher costs and structural assessment. Yet encouragingly, respondents expressed willingness to absorb higher costs, with developers indicating readiness to pay an average of 9.6 percent higher construction costs for a minimum 52.9 percent reduction in embodied carbon — and all stakeholders highly favor the potential of incentives like tax exemptions to aid with cost premiums.

    Next steps to encourage circularity

    The findings highlight the need for further conversation between design teams and developers, as well as for additional exploration of potential solutions to practical challenges. “The thing about circularity is that there is opportunity for a lot of value creation, and subsequently profit,” says Berglund-Brown.
    “If people are motivated by cost, let’s provide a cost incentive, or establish strategies that have one.”

    When it comes to motivating reasons to adopt circularity practices, the study also found trends emerging by industry role. Future net-zero goals influence developers as well as design and construction teams, with government regulation the third-most frequently named reason across all respondent types.

    “The construction industry needs a market driver to embrace circularity,” says Berglund-Brown. “Be it carrots or sticks, stakeholders require incentives for adoption.”

    The effect of policy in motivating change cannot be overstated, with major strides made in low-operational-carbon building design after policies restricting emissions were introduced, such as Local Law 97 in New York City and the Building Emissions Reduction and Disclosure Ordinance in Boston. These pieces of policy, and their results, can serve as models for embodied-carbon reduction policy elsewhere.

    Berglund-Brown suggests that municipalities might initiate ordinances requiring buildings to be deconstructed, which would allow components to be reused, curbing demolition methods that result in waste rather than salvage. Top-down ordinances could be one way to trigger a supply-chain shift toward reprocessing building materials that are typically deemed “end-of-life.”

    The study also identifies other challenges to the implementation of circularity at scale, including the risk associated with reusing materials in new buildings, and the disruption of status quo design practices.

    “Understanding the best way to motivate transition despite uncertainty is where our work comes in,” says Berglund-Brown. “Beyond that, researchers can continue to do a lot to alleviate risk — like developing standards for reuse.”

    Innovations that challenge the status quo

    Disrupting the status quo is not unusual for MIT researchers; other visionary work in construction circularity pioneered at MIT includes “a smart kit of parts” called Pixelframe. This system for modular concrete reuse allows building elements to be disassembled and rebuilt several times, aiding deconstruction and reuse while maintaining material efficiency and versatility.

    Developed by MIT Climate and Sustainability Consortium Associate Director Caitlin Mueller’s research team, Pixelframe is designed to accommodate a wide range of applications, from housing to warehouses, with each piece of the interlocking precast concrete modules, called Pixels, assigned a material passport to enable tracking through its many life cycles.

    Mueller’s work demonstrates that circularity can work technically and logistically at the scale of the built environment — by designing specifically for disassembly, configuration, versatility, and upfront carbon and cost efficiency.

    “This can be built today. This is building code-compliant today,” said Mueller of Pixelframe in a keynote speech at the recent MCSC Annual Symposium, which saw industry representatives and members of the MIT community coming together to discuss scalable solutions to climate and sustainability problems. “We currently have the potential for high-impact carbon reduction as a compelling alternative to the business-as-usual construction methods we are used to.”

    Pixelframe was recently awarded a grant by the Massachusetts Clean Energy Center (MassCEC) to pursue commercialization, an important next step toward integrating innovations like this into a circular economy in practice.
    “It’s MassCEC’s job to make sure that these climate leaders have the resources they need to turn their technologies into successful businesses that make a difference around the world,” said MassCEC CEO Emily Reichart in a press release.

    Additional support for circular innovation has emerged thanks to a historic piece of climate legislation from the Biden administration. The Environmental Protection Agency recently awarded a federal grant on the topic of advancing steel reuse to Berglund-Brown — whose PhD thesis focuses on scaling the reuse of structural heavy-section steel — and John Ochsendorf, the Class of 1942 Professor of Civil and Environmental Engineering and Architecture at MIT.

    “There is a lot of exciting upcoming work on this topic,” says Berglund-Brown. “To any practitioners reading this who are interested in getting involved — please reach out.”

    The study is supported in part by the MIT Climate and Sustainability Consortium.

  • So you want to build a solar or wind farm? Here’s how to decide where.

    Deciding where to build new solar or wind installations is often left up to individual developers or utilities, with limited overall coordination. But a new study shows that regional-level planning using fine-grained weather data, information about energy use, and energy system modeling can make a big difference in the design of such renewable power installations, leading to more efficient and economically viable operations.

    The findings show the benefits of coordinating the siting of solar farms, wind farms, and storage systems, taking into account local and temporal variations in wind, sunlight, and energy demand to maximize the utilization of renewable resources. This approach can reduce the need for sizable investments in storage, and thus the total system cost, while maximizing the availability of clean power when it’s needed, the researchers found.

    The study, appearing today in the journal Cell Reports Sustainability, was co-authored by Liying Qiu and Rahman Khorramfar, postdocs in MIT’s Department of Civil and Environmental Engineering, and professors Saurabh Amin and Michael Howland.

    Qiu, the lead author, says that with the team’s new approach, “we can harness the resource complementarity, which means that renewable resources of different types, such as wind and solar, or different locations can compensate for each other in time and space. This potential for spatial complementarity to improve system design has not been emphasized and quantified in existing large-scale planning.”

    Such complementarity will become ever more important as variable renewable energy sources account for a greater proportion of the power entering the grid, she says. By coordinating the peaks and valleys of production and demand more smoothly, she says, “we are actually trying to use the natural variability itself to address the variability.”

    Typically, in planning large-scale renewable energy installations, Qiu says, “some work on a country level, for example saying that 30 percent of energy should be wind and 20 percent solar. That’s very general.” For this study, the team looked at both weather data and energy system planning modeling at a resolution of less than 10 kilometers (about 6 miles). “It’s a way of determining where should we, exactly, build each renewable energy plant, rather than just saying this city should have this many wind or solar farms,” she explains.

    To compile their data and enable high-resolution planning, the researchers relied on a variety of sources that had not previously been integrated. They used high-resolution meteorological data from the National Renewable Energy Laboratory, which is publicly available at 2-kilometer resolution but rarely used in a planning model at such a fine scale. These data were combined with an energy system model they developed to optimize siting at a sub-10-kilometer resolution. To get a sense of how the fine-scale data and model made a difference in different regions, they focused on three U.S. regions — New England, Texas, and California — analyzing up to 138,271 possible siting locations simultaneously for a single region.

    By comparing the results of siting based on a typical method versus their high-resolution approach, the team showed that “resource complementarity really helps us reduce the system cost by aligning renewable power generation with demand,” which should translate directly to real-world decision-making, Qiu says.
    “If an individual developer wants to build a wind or solar farm and just goes to where there is the most wind or solar resource on average, it may not necessarily guarantee the best fit into a decarbonized energy system.”

    That’s because of the complex interactions between production and demand for electricity, as both vary hour by hour and month by month as seasons change. “What we are trying to do is minimize the difference between the energy supply and demand rather than simply supplying as much renewable energy as possible,” Qiu says. “Sometimes your generation cannot be utilized by the system, while at other times, you don’t have enough to match the demand.”

    In New England, for example, the new analysis shows there should be more wind farms in locations where there is a strong wind resource during the night, when solar energy is unavailable. Some locations tend to be windier at night, while others tend to have more wind during the day.

    These insights were revealed through the integration of high-resolution weather data and energy system optimization. When planning with lower-resolution weather data, which is generated at a 30-kilometer resolution globally and is more commonly used in energy system planning, there was much less complementarity among renewable power plants, and consequently the total system cost was much higher. The high-resolution modeling enhanced the complementarity between wind and solar farms through improved representation of renewable resource variability.

    The researchers say their framework is very flexible and can be easily adapted to any region to account for the local geophysical and other conditions. In Texas, for example, peak winds in the west occur in the morning, while along the south coast they occur in the afternoon, so the two naturally complement each other.

    Khorramfar says that this work “highlights the importance of data-driven decision making in energy planning.” Using such high-resolution data coupled with a carefully formulated energy planning model, he says, “can drive the system cost down, and ultimately offer more cost-effective pathways for energy transition.”

    One thing that was surprising about the findings, says Amin, who is a principal investigator in the MIT Laboratory for Information and Decision Systems, is how significant the gains were from analyzing relatively short-term variations in inputs and outputs that take place over a 24-hour period. “The kind of cost-saving potential by trying to harness complementarity within a day was not something that one would have expected before this study,” he says.

    In addition, Amin says, it was also surprising how much this kind of modeling could reduce the need for storage as part of these energy systems. “This study shows that there is actually a hidden cost-saving potential in exploiting local patterns in weather, that can result in a monetary reduction in storage cost.”

    The system-level analysis and planning suggested by this study, Howland says, “changes how we think about where we site renewable power plants and how we design those renewable plants, so that they maximally serve the energy grid. It has to go beyond just driving down the cost of energy of individual wind or solar farms. And these new insights can only be realized if we continue collaborating across traditional research boundaries, by integrating expertise in fluid dynamics, atmospheric science, and energy engineering.”
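    The siting objective Qiu describes, minimizing the mismatch between supply and demand rather than maximizing raw output, can be sketched as a small constrained least-squares problem. The profiles below are synthetic stand-ins for the high-resolution data, and the formulation is a toy version of the idea, not the paper’s full energy system model:

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Toy siting problem: pick a nonnegative capacity x[s] at each candidate
    # site so that hourly generation A @ x tracks hourly demand b.
    rng = np.random.default_rng(1)
    T, S = 24, 40                      # hours in a day, candidate sites
    hours = np.arange(T)

    solar = np.clip(np.sin((hours - 6) / 12 * np.pi), 0, None)      # daytime peak
    wind_day = 0.4 + 0.3 * np.cos((hours - 14) / 24 * 2 * np.pi)    # afternoon-windy
    wind_night = 0.4 + 0.3 * np.cos((hours - 2) / 24 * 2 * np.pi)   # night-windy

    A = np.column_stack(
        [solar * rng.uniform(0.7, 1.0) for _ in range(S // 2)]
        + [wind_day * rng.uniform(0.7, 1.0) for _ in range(S // 4)]
        + [wind_night * rng.uniform(0.7, 1.0) for _ in range(S // 4)]
    )
    b = 1.0 + 0.3 * np.sin((hours - 18) / 24 * 2 * np.pi)           # peaks after dark

    x_all, r_all = nnls(A, b)              # min ||A @ x - b||  subject to  x >= 0
    x_sun, r_sun = nnls(A[:, :S // 2], b)  # same fit restricted to solar-only sites
    print(f"all resource types: mismatch {r_all:.3f}, "
          f"{np.count_nonzero(x_all > 1e-9)} sites selected")
    print(f"solar sites only:   mismatch {r_sun:.3f}")
    ```

    Even in this toy, mixing day-peaking solar with night-windy sites tracks an evening-heavy demand curve far better than the best single resource type can, which is the complementarity effect the study quantifies at scale.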
    The research was supported by the MIT Climate and Sustainability Consortium and MIT Climate Grand Challenges.

  • MIT delegation mainstreams biodiversity conservation at the UN Biodiversity Convention, COP16

    For the first time, MIT sent an organized delegation to the global Conference of the Parties for the Convention on Biological Diversity, which this year was held Oct. 21 to Nov. 1 in Cali, Colombia.

    The 10 delegates to COP16 included faculty, researchers, and students from the MIT Environmental Solutions Initiative (ESI), the Department of Electrical Engineering and Computer Science (EECS), the Computer Science and Artificial Intelligence Laboratory (CSAIL), the Department of Urban Studies and Planning (DUSP), the Institute for Data, Systems, and Society (IDSS), and the Center for Sustainability Science and Strategy.

    In previous years, MIT faculty had participated sporadically in the discussions. This organized engagement, led by the ESI, is significant because it brought together representatives from many of the groups working on biodiversity across the Institute; showcased the breadth of MIT’s research in more than 15 events, including panels, roundtables, and keynote presentations across the Blue and Green Zones of the conference (the Blue Zone being the primary venue for the official negotiations and discussions, and the Green Zone the venue for public events); and created an experiential learning opportunity for students, who followed specific topics in the negotiations and throughout side events.

    The conference also gathered attendees from governments, nongovernmental organizations, businesses, other academic institutions, and practitioners focused on stopping global biodiversity loss and advancing the 23 goals of the Kunming-Montreal Global Biodiversity Framework (KMGBF), an international agreement adopted in 2022 to guide global efforts to protect and restore biodiversity through 2030.

    MIT’s involvement was particularly pronounced in addressing goals related to building coalitions of sub-national governments (targets 11, 12, and 14); technology and AI for biodiversity conservation (targets 20 and 21); shaping equitable markets (targets 3, 11, and 19); and informing an action plan for Afro-descendant communities (targets 3, 10, and 22).

    Building coalitions of sub-national governments

    The ESI’s Natural Climate Solutions (NCS) Program was able to support two separate coalitions of Latin American cities, namely the Coalition of Cities Against Illicit Economies in the Biogeographic Chocó Region and the Colombian Amazonian Cities coalition, which successfully signed declarations to advance specific targets of the KMGBF (the aforementioned targets 11, 12, and 14).

    This was accomplished through roundtables and discussions where team members — including Marcela Angel, research program director at the MIT ESI; Angelica Mayolo, ESI Martin Luther King Fellow 2023-25; and Silvia Duque and Hannah Leung, MIT master’s in city planning students — presented a set of multi-scale actions, including transnational strategies, recommendations to strengthen local and regional institutions, and community-based actions to promote the conservation of the Biogeographic Chocó as an ecological corridor.

    “There is an urgent need to deepen the relationship between academia and local governments of cities located in biodiversity hotspots,” said Angel. “Given the scale and unique conditions of Amazonian cities, pilot research projects present an opportunity to test and generate a proof of concept.
    These could generate catalytic information needed to scale up climate adaptation and conservation efforts in socially and ecologically sensitive contexts.”

    ESI’s research also provided key inputs for the creation of the Fund for the Biogeographic Chocó Region, a multi-donor fund launched within the framework of COP16 by a coalition composed of Colombia, Ecuador, Panamá, and Costa Rica. The fund aims to support biodiversity conservation, ecosystem restoration, climate change mitigation and adaptation, and sustainable development efforts across the region.

    Technology and AI for biodiversity conservation

    Data, technology, and artificial intelligence are playing an increasing role in how we understand biodiversity and ecosystem change globally. Professor Sara Beery’s research group at MIT focuses on this intersection, developing AI methods that enable species and environmental monitoring at previously unprecedented spatial, temporal, and taxonomic scales.

    During the International Union of Biological Diversity Science-Policy Forum (the high-level COP16 segment focused on outlining recommendations from the scientific and academic community), Beery spoke on a panel alongside María Cecilia Londoño, scientific information manager of the Humboldt Institute and co-chair of the Global Biodiversity Observations Network, and Josh Tewksbury, director of the Smithsonian Tropical Research Institute, among others, about how these technological advancements will help humanity achieve our biodiversity targets. The panel emphasized that AI innovation is needed, but with emphasis on direct human-AI partnership, AI capacity building, and data and AI policy that ensures equity of access to, and benefit from, these technologies.

    As a direct outcome of the session, for the first time, AI was emphasized in the statement delivered on behalf of science and academia by Hernando Garcia, director of the Humboldt Institute, and David Skorton, secretary general of the Smithsonian Institution, to the high-level segment of COP16.

    That statement read, “To effectively address current and future challenges, urgent action is required in equity, governance, valuation, infrastructure, decolonization and policy frameworks around biodiversity data and artificial intelligence.”

    Beery also organized a panel at the GEOBON pavilion in the Blue Zone on scaling biodiversity monitoring with AI, which brought together global leaders from AI research, infrastructure development, capacity and community building, and policy and regulation. The panel was initiated, and its experts selected, from the participants at the recent Aspen Global Change Institute Workshop on Overcoming Barriers to Impact in AI for Biodiversity, co-organized by Beery.

    Shaping equitable markets

    In a side event co-hosted by the ESI with CAF-Development Bank of Latin America, researchers from ESI’s Natural Climate Solutions Program — including Marcela Angel; Angelica Mayolo; Jimena Muzio, ESI research associate; and Martin Perez Lara, ESI research affiliate and director for Forest Climate Solutions Impact and Monitoring at World Wide Fund for Nature of the U.S.
    — presented results of a study titled “Voluntary Carbon Markets for Social Impact: Comprehensive Assessment of the Role of Indigenous Peoples and Local Communities (IPLC) in Carbon Forestry Projects in Colombia.” The report highlighted the structural barriers that hinder effective participation of IPLC and proposed a conceptual framework for assessing IPLC engagement in voluntary carbon markets.

    Communicating these findings is important because the global carbon market has experienced a credibility crisis since 2023, influenced by critical assessments in the academic literature, journalism questioning the quality of mitigation results, and persistent concerns about the engagement of private actors with IPLC. Nonetheless, carbon forestry projects have expanded rapidly in Indigenous, Afro-descendant, and local communities’ territories, and there is a need to assess the relationships between private actors and IPLC and to propose pathways for equitable participation.

    Panelists pose at the equitable markets side event at the Latin American Pavilion in the Blue Zone.


    The research presentation and subsequent panel — with representatives of Asocarbono (the association for carbon project developers in Colombia), Fondo Acción, and CAF — further discussed recommendations for all actors in the value chain of carbon certificates, including those focused on promoting equitable benefit-sharing and safeguarding compliance, increased accountability, enhanced governance structures, strengthened institutions, and regulatory frameworks, all necessary to create an inclusive and transparent market.

    Informing an action plan for Afro-descendant communities

    The Afro-Interamerican Forum on Climate Change (AIFCC), an international network working to highlight the critical role of Afro-descendant peoples in global climate action, was also present at COP16.

    At the Afro Summit, Mayolo presented key recommendations prepared collectively by the members of AIFCC to the technical secretariat of the Convention on Biological Diversity (CBD). The recommendations emphasize:

      • creating financial tools for conservation and supporting Afro-descendant land rights;
      • including a credit guarantee fund for countries that recognize Afro-descendant collective land titling and research on their contributions to biodiversity conservation;
      • calling for increased representation of Afro-descendant communities in international policy forums;
      • capacity-building for local governments; and
      • strategies for inclusive growth in green business and energy transition.

    These actions aim to promote inclusive and sustainable development for Afro-descendant populations.

    “Attending COP16 with a large group from MIT contributing knowledge and informed perspectives at 15 separate events was a privilege and honor,” says MIT ESI Director John E. Fernández. “This demonstrates the value of the ESI as a powerful research and convening body at MIT. Science is telling us unequivocally that climate change and biodiversity loss are the two greatest challenges that we face as a species and a planet. MIT has the capacity, expertise, and passion to address not only the former, but also the latter, and the ESI is committed to facilitating the very best contributions across the Institute for the critical years that are ahead of us.”

    A fuller overview of the conference is available via the MIT Environmental Solutions Initiative’s Primer of COP16.

  • A new catalyst can turn methane into something useful

    Although it is less abundant than carbon dioxide, methane gas contributes disproportionately to global warming because, due to its molecular structure, it traps more heat in the atmosphere than carbon dioxide.

    MIT chemical engineers have now designed a new catalyst that can convert methane into useful polymers, which could help reduce greenhouse gas emissions.

    “What to do with methane has been a longstanding problem,” says Michael Strano, the Carbon P. Dubbs Professor of Chemical Engineering at MIT and the senior author of the study. “It’s a source of carbon, and we want to keep it out of the atmosphere but also turn it into something useful.”

    The new catalyst works at room temperature and atmospheric pressure, which could make it easier and more economical to deploy at sites of methane production, such as power plants and cattle barns.

    Daniel Lundberg PhD ’24 and MIT postdoc Jimin Kim are the lead authors of the study, which appears today in Nature Catalysis. Former postdoc Yu-Ming Tu and postdoc Cody Ritt are also authors of the paper.

    Capturing methane

    Methane is produced by bacteria known as methanogens, which are often highly concentrated in landfills, swamps, and other sites of decaying biomass. Agriculture is a major source of methane, and methane gas is also generated as a byproduct of transporting, storing, and burning natural gas. Overall, it is believed to account for about 15 percent of global temperature increases.

    At the molecular level, methane is made of a single carbon atom bound to four hydrogen atoms. In theory, this molecule should be a good building block for making useful products such as polymers. However, converting methane to other compounds has proven difficult because getting it to react with other molecules usually requires high temperature and high pressure.

    To achieve methane conversion without that input of energy, the MIT team designed a hybrid catalyst with two components: a zeolite and a naturally occurring enzyme. Zeolites are abundant, inexpensive clay-like minerals, and previous work has found that they can be used to catalyze the conversion of methane to carbon dioxide.

    In this study, the researchers used a zeolite called iron-modified aluminum silicate, paired with an enzyme called alcohol oxidase. Bacteria, fungi, and plants use this enzyme to oxidize alcohols.

    This hybrid catalyst performs a two-step reaction in which the zeolite converts methane to methanol, and then the enzyme converts methanol to formaldehyde. That reaction also generates hydrogen peroxide, which is fed back into the zeolite to provide a source of oxygen for the conversion of methane to methanol.

    This series of reactions can occur at room temperature and doesn’t require high pressure. The catalyst particles are suspended in water, which can absorb methane from the surrounding air. For future applications, the researchers envision that the catalyst could be painted onto surfaces.

    “Other systems operate at high temperature and high pressure, and they use hydrogen peroxide, which is an expensive chemical, to drive the methane oxidation. But our enzyme produces hydrogen peroxide from oxygen, so I think our system could be very cost-effective and scalable,” Kim says.

    Creating a system that incorporates both enzymes and artificial catalysts is a “smart strategy,” says Damien Debecker, a professor at the Institute of Condensed Matter and Nanosciences at the University of Louvain, Belgium. “Combining these two families of catalysts is challenging, as they tend to operate in rather distinct operation conditions. By unlocking this constraint and mastering the art of chemo-enzymatic cooperation, hybrid catalysis becomes key-enabling: It opens new perspectives to run complex reaction systems in an intensified way,” says Debecker, who was not involved in the research.
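    The two-step cycle can be written schematically as below. This is a simplified scheme consistent with the description in this story, omitting intermediates and mechanistic detail:

    ```latex
    % Simplified two-step cycle (intermediates omitted):
    \begin{aligned}
    \text{zeolite:} \quad & \mathrm{CH_4 + H_2O_2 \longrightarrow CH_3OH + H_2O} \\
    \text{enzyme:}  \quad & \mathrm{CH_3OH + O_2 \longrightarrow HCHO + H_2O_2} \\
    \text{net:}     \quad & \mathrm{CH_4 + O_2 \longrightarrow HCHO + H_2O}
    \end{aligned}
    ```

    Note how the hydrogen peroxide produced by the enzyme step is exactly what the zeolite step consumes, which is why the cycle needs no external peroxide feed.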
    This series of reactions can occur at room temperature and doesn’t require high pressure. The catalyst particles are suspended in water, which can absorb methane from the surrounding air. For future applications, the researchers envision that the catalyst could be painted onto surfaces.

    “Other systems operate at high temperature and high pressure, and they use hydrogen peroxide, which is an expensive chemical, to drive the methane oxidation. But our enzyme produces hydrogen peroxide from oxygen, so I think our system could be very cost-effective and scalable,” Kim says.

    Creating a system that incorporates both enzymes and artificial catalysts is a “smart strategy,” says Damien Debecker, a professor at the Institute of Condensed Matter and Nanosciences at the University of Louvain, Belgium.

    “Combining these two families of catalysts is challenging, as they tend to operate in rather distinct operation conditions. By unlocking this constraint and mastering the art of chemo-enzymatic cooperation, hybrid catalysis becomes key-enabling: It opens new perspectives to run complex reaction systems in an intensified way,” says Debecker, who was not involved in the research.

    Building polymers

    Once formaldehyde is produced, the researchers showed they could use that molecule to generate polymers by adding urea, a nitrogen-containing molecule found in urine. This resin-like polymer, known as urea-formaldehyde, is now used in particle board, textiles, and other products. (A rough sketch of the underlying condensation chemistry appears at the end of this story.)

    The researchers envision that this catalyst could be incorporated into pipes used to transport natural gas. Within those pipes, the catalyst could generate a polymer that could act as a sealant to heal cracks, which are a common source of methane leakage. The catalyst could also be applied as a film to coat surfaces that are exposed to methane gas, producing polymers that could be collected for use in manufacturing, the researchers say.

    Strano’s lab is now working on catalysts that could be used to remove carbon dioxide from the atmosphere and combine it with nitrate to produce urea. That urea could then be mixed with the formaldehyde produced by the zeolite-enzyme catalyst to make urea-formaldehyde.

    The research was funded by the U.S. Department of Energy.
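    For readers who want the condensation chemistry referenced above: urea-formaldehyde resins form by methylolation of urea followed by a water-releasing condensation. The equations below are a simplified textbook sketch, assumed here for illustration; the exact formulation used in the paper may differ:

        % Simplified textbook sketch of urea-formaldehyde resin formation (assumed, not from the paper)
        \begin{align*}
        \text{Methylolation:} \quad & \mathrm{H_2N{-}CO{-}NH_2 + HCHO \longrightarrow H_2N{-}CO{-}NH{-}CH_2OH} \\
        \text{Condensation:} \quad & n\,(\mathrm{H_2N{-}CO{-}NH{-}CH_2OH}) \longrightarrow (\mathrm{{-}NH{-}CO{-}NH{-}CH_2{-}})_n + n\,\mathrm{H_2O}
        \end{align*}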

    Q&A: Transforming research through global collaborations

The MIT Global Seed Funds (GSF) program fosters global research collaborations between MIT faculty and their peers abroad — creating partnerships that tackle complex global issues, from climate change to health-care challenges and beyond. Administered by the MIT Center for International Studies (CIS), the GSF program has awarded more than $26 million to over 1,200 faculty research projects since its inception in 2008. Through its unique funding structure — comprising a general fund for unrestricted geographical use and several specific funds tied to individual countries, regions, and universities — GSF supports a wide range of projects. The current call for proposals from MIT faculty and researchers with principal investigator status is open until Dec. 10.

    CIS recently sat down with faculty recipients Josephine Carstensen and David McGee to discuss the value and impact GSF has added to their research. Carstensen, the Gilbert W. Winslow Career Development Associate Professor of Civil and Environmental Engineering, generates computational designs for large-scale structures with the aim of creating novel low-carbon solutions. McGee, the William R. Kenan, Jr. Professor in the Department of Earth, Atmospheric and Planetary Sciences (EAPS), reconstructs the patterns, pace, and magnitudes of past hydro-climate changes.

    Q: How did the Global Seed Funds program connect you with global partnerships related to your research?

    Carstensen: One of the projects my lab is working on aims to unlock the potential of complex cast-glass structures. Through our GSF partnership with researchers at TU Delft (Netherlands), my group was able to pair our expertise in generative design algorithms with that of the TU Delft team, who are experts in the physical casting and fabrication of glass structures. Our initial connection to TU Delft actually came through one of my graduate students, who met TU Delft researchers at a conference, was inspired by their work, and felt there could be synergy between our labs. The question then became: How do we connect with TU Delft? That is what led us to the Global Seed Funds program.

    McGee: Our research is grounded in fieldwork conducted in partnership with experts who have a rich understanding of local environments. These locations range from lake basins in Chile and Argentina to caves in northern Mexico, Vietnam, and Madagascar. GSF has been invaluable for helping foster partnerships with collaborators and universities in these different locations, enabling the pilot work and relationship-building necessary to establish longer-term, externally funded projects.

    Q: Tell us more about your GSF-funded work.

    Carstensen: In my research group at MIT, we live mainly in a computational regime and do very little proof-of-concept testing. We do not have the facilities or the experience to physically build large-scale structures, or even specialized ones. GSF has enabled us to connect with the researchers at TU Delft, who do much more experimental testing than we do. Being able to work with the experts at TU Delft in their physical realm provided valuable insights into their way of approaching problems, and, likewise, the researchers at TU Delft benefited from our expertise. It has been fruitful in ways we couldn’t have imagined within our lab at MIT.

    McGee: The collaborative work supported by the GSF has focused on reconstructing how past climate changes affected rainfall patterns around the world, using natural archives like lake sediments and cave formations.
One particularly successful project has been our work in caves in northeastern Mexico, conducted in partnership with researchers from the National Autonomous University of Mexico (UNAM) and a local caving group. This project has involved several MIT undergraduate and graduate students, sponsored a research symposium in Mexico City, and helped us obtain funding from the National Science Foundation for a longer-term project.

    Q: You both mentioned the involvement of your graduate students. How exactly has the GSF augmented the research experience of your students?

    Carstensen: The collaboration has especially benefited the graduate students on both the MIT and TU Delft teams. The opportunity to engage in research at an international peer institution has been invaluable for their academic growth and maturity. It has facilitated training in new and complementary technical areas that they would not otherwise have had, and it has allowed them to engage with leading world experts. One measure of this success is that the collaboration has inspired one of my graduate students to actively pursue postdoc opportunities in Europe (including at TU Delft) after graduation.

    McGee: MIT students have traveled to caves in northeastern Mexico and to lake basins in northern Chile to conduct fieldwork and build connections with local collaborators. Samples enabled by GSF-supported projects became the focus of two graduate students’ PhD theses, two EAPS undergraduate senior theses, and multiple UROP [Undergraduate Research Opportunity Program] projects.

    Q: Were there any unexpected benefits to the work funded by GSF?

    Carstensen: The success of this project would not have been possible without this specific international collaboration. The Delft and MIT teams bring distinct, essential expertise, and the project allowed each team to gain an in-depth understanding of the other’s expertise and resources. Both teams have been deeply inspired. The partnership has fueled conversations about potential future projects and produced multiple outcomes, including a plan to publish two journal papers on the project’s results. The first invited publication is being finalized now.

    McGee: GSF’s focus on reciprocal exchange has enabled external collaborators to spend time at MIT, sharing their work and exchanging ideas. Other funding is often focused on sending MIT researchers and students out, but GSF has helped us bring collaborators here, making the relationship more equal. A GSF-supported visit by Argentinian researchers last year made it possible for them to interact not just with my group, but with students and faculty across EAPS.