More stories

  • Surface-based sonar system could rapidly map the ocean floor at high resolution

    On June 18, 2023, the Titan submersible was about an hour-and-a-half into its two-hour descent to the Titanic wreckage at the bottom of the Atlantic Ocean when it lost contact with its support ship. The loss of communication set off a frantic search for the tourist submersible and its five passengers, located about two miles below the ocean’s surface.

    Deep-ocean search and recovery is one of the many missions of military services like the U.S. Coast Guard Office of Search and Rescue and the U.S. Navy Supervisor of Salvage and Diving. For this mission, the longest delays come from transporting search-and-rescue equipment via ship to the area of interest and comprehensively surveying that area. A search operation on the scale of that for Titan — which was conducted 420 nautical miles from the nearest port and covered 13,000 square kilometers, an area roughly twice the size of Connecticut — could take weeks to complete. The search area for Titan is considered relatively small, focused on the immediate vicinity of the Titanic; when the area is less well known, operations can take months. (A remotely operated underwater vehicle deployed by a Canadian vessel ended up finding the debris field of Titan on the seafloor, four days after the submersible had gone missing.)

    A research team from MIT Lincoln Laboratory and the MIT Department of Mechanical Engineering’s Ocean Science and Engineering lab is developing a surface-based sonar system that could shorten the timeline for small- and large-scale search operations to days. Called the Autonomous Sparse-Aperture Multibeam Echo Sounder, the system scans at surface-ship rates while providing sufficient resolution to find objects and features in the deep ocean, without the time and expense of deploying underwater vehicles.
The echo sounder — which features a large sonar array using a small set of autonomous surface vehicles (ASVs) that can be deployed via aircraft into the ocean — holds the potential to map the seabed at 50 times the coverage rate of an underwater vehicle and 100 times the resolution of a surface vessel.

    Autonomous Sparse-Aperture Multibeam Echo Sounder (Video: MIT Lincoln Laboratory)

    “Our array provides the best of both worlds: the high resolution of underwater vehicles and the high coverage rate of surface ships,” says co–principal investigator Andrew March, assistant leader of the laboratory’s Advanced Undersea Systems and Technology Group. “Though large surface-based sonar systems at low frequency have the potential to determine the materials and profiles of the seabed, they typically do so at the expense of resolution, particularly with increasing ocean depth. Our array can likely determine this information, too, but at significantly enhanced resolution in the deep ocean.”

    Underwater unknown

    Oceans cover 71 percent of Earth’s surface, yet more than 80 percent of this underwater realm remains undiscovered and unexplored. Humans know more about the surface of other planets and the moon than the bottom of our oceans. High-resolution seabed maps would not only be useful to find missing objects like ships or aircraft, but also to support a host of other scientific applications: understanding Earth’s geology, improving forecasting of ocean currents and corresponding weather and climate impacts, uncovering archaeological sites, monitoring marine ecosystems and habitats, and identifying locations containing natural resources such as mineral and oil deposits.

    Scientists and governments worldwide recognize the importance of creating a high-resolution global map of the seafloor; the problem is that no existing technology can achieve meter-scale resolution from the ocean surface. The average depth of our oceans is approximately 3,700 meters.
However, today’s technologies capable of finding human-made objects on the seabed or identifying person-sized natural features — these technologies include sonar, lidar, cameras, and gravitational field mapping — have a maximum range of less than 1,000 meters through water.

    Ships with large sonar arrays mounted on their hull map the deep ocean by emitting low-frequency sound waves that bounce off the seafloor and return as echoes to the surface. Operation at low frequencies is necessary because water readily absorbs high-frequency sound waves, especially with increasing depth; however, such operation yields low-resolution images, with each image pixel representing a football field in size. Resolution is also restricted because sonar arrays installed on large mapping ships are already using all of the available hull space, thereby capping the sonar beam’s aperture size. By contrast, sonars on autonomous underwater vehicles (AUVs) that operate at higher frequencies within a few hundred meters of the seafloor generate maps with each pixel representing one square meter or less, resulting in 10,000 times more pixels in that same football field–sized area. However, this higher resolution comes with trade-offs: AUVs are time-consuming and expensive to deploy in the deep ocean, limiting the amount of seafloor that can be mapped; they have a maximum range of about 1,000 meters before their high-frequency sound gets absorbed; and they move at slow speeds to conserve power. The area-coverage rate of AUVs performing high-resolution mapping is about 8 square kilometers per hour; surface vessels map the deep ocean at more than 50 times that rate.

    A solution surfaces

    The Autonomous Sparse-Aperture Multibeam Echo Sounder could offer a cost-effective approach to high-resolution, rapid mapping of the deep seafloor from the ocean’s surface.
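    To put the coverage rates just described in perspective, a quick back-of-the-envelope calculation using the article's own figures (a 13,000-square-kilometer Titan search area, roughly 8 square kilometers per hour for high-resolution AUV mapping, and surface ships mapping at more than 50 times that rate) shows the gap such a system could close:

```python
# Back-of-the-envelope survey times for a Titan-scale search area,
# using the coverage rates quoted in the article.
SEARCH_AREA_KM2 = 13_000   # Titan search area, square kilometers
AUV_RATE_KM2_PER_H = 8     # high-resolution AUV mapping rate
SURFACE_MULTIPLIER = 50    # surface vessels map at >50x the AUV rate

auv_days = SEARCH_AREA_KM2 / AUV_RATE_KM2_PER_H / 24
surface_days = SEARCH_AREA_KM2 / (AUV_RATE_KM2_PER_H * SURFACE_MULTIPLIER) / 24

print(f"Single AUV at high resolution: {auv_days:.0f} days of continuous mapping")
print(f"At surface-ship rates:         {surface_days:.1f} days")
```

    A single AUV mapping continuously would need on the order of two months for an area this size, while surface-ship rates cover it in roughly a day and a half, which is why combining surface-ship speed with AUV-like resolution matters.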
A collaborative fleet of about 20 ASVs, each hosting a small sonar array, effectively forms a single sonar array 100 times the size of a large sonar array installed on a ship. The large aperture achieved by the array (hundreds of meters) produces a narrow beam, which enables sound to be precisely steered to generate high-resolution maps at low frequency. Because very few sonars are installed relative to the array’s overall size (i.e., a sparse aperture), the cost is tractable.

    However, this collaborative and sparse setup introduces some operational challenges. First, for coherent 3D imaging, the relative position of each ASV’s sonar subarray must be accurately tracked through dynamic ocean-induced motions. Second, because sonar elements are not placed directly next to each other without any gaps, the array suffers from a lower signal-to-noise ratio and is less able to reject noise coming from unintended or undesired directions. To mitigate these challenges, the team has been developing a low-cost precision-relative navigation system and leveraging acoustic signal processing tools and new ocean-field estimation algorithms. The MIT campus collaborators are developing algorithms for data processing and image formation, especially to estimate depth-integrated water-column parameters. These enabling technologies will help account for complex ocean physics, spanning physical properties like temperature, dynamic processes like currents and waves, and acoustic propagation factors like sound speed.

    Processing for all required control and calculations could be completed either remotely or onboard the ASVs. For example, ASVs deployed from a ship or flying boat could be controlled and guided remotely from land via a satellite link or from a nearby support ship (with direct communications or a satellite link), and left to map the seabed for weeks or months at a time until maintenance is needed.
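    The resolution gain from the large aperture can be sanity-checked with a rough diffraction estimate: a sonar's beamwidth scales roughly as wavelength divided by aperture, and the seafloor footprint of one beam is depth times beamwidth. The 12 kHz frequency and the specific 8-meter and 300-meter apertures below are illustrative assumptions (typical deep-water numbers; the article says only "hull-sized" versus "hundreds of meters"):

```python
import math

SOUND_SPEED = 1500.0   # m/s, nominal speed of sound in seawater
FREQ = 12_000.0        # Hz; assumed low-frequency deep-mapping band, not from the article
DEPTH = 3_700.0        # m, average ocean depth cited in the article
wavelength = SOUND_SPEED / FREQ  # 0.125 m

def footprint(aperture_m: float) -> float:
    """Approximate seafloor footprint (m) of one beam at DEPTH,
    using the diffraction-limited beamwidth ~ wavelength / aperture."""
    beamwidth_rad = wavelength / aperture_m
    return DEPTH * beamwidth_rad

print(f"Hull-sized aperture (8 m):    ~{footprint(8):.0f} m pixels")
print(f"Sparse array aperture (300 m): ~{footprint(300):.1f} m pixels")
```

    These rough numbers reproduce the article's contrast: tens-of-meters ("football field" scale) pixels from a hull-limited aperture versus meter-scale pixels once the aperture grows to hundreds of meters.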
Sonar-return health checks and coarse seabed mapping would be conducted on board, while full, high-resolution reconstruction of the seabed would require a supercomputing infrastructure on land or on a support ship.

    “Deploying vehicles in an area and letting them map for extended periods of time without the need for a ship to return home to replenish supplies and rotate crews would significantly simplify logistics and operating costs,” says co–principal investigator Paul Ryu, a researcher in the Advanced Undersea Systems and Technology Group.

    Since beginning their research in 2018, the team has turned their concept into a prototype. Initially, the scientists built a scale model of a sparse-aperture sonar array and tested it in a water tank at the laboratory’s Autonomous Systems Development Facility. Then, they prototyped an ASV-sized sonar subarray and demonstrated its functionality in Gloucester, Massachusetts. In follow-on sea tests in Boston Harbor, they deployed an 8-meter array containing multiple subarrays equivalent to 25 ASVs locked together; with this array, they generated 3D reconstructions of the seafloor and a shipwreck. Most recently, in collaboration with the Woods Hole Oceanographic Institution, the team fabricated a first-generation, 12-foot-long, all-electric ASV prototype carrying a sonar array underneath. With this prototype, they conducted preliminary relative-navigation testing in Woods Hole, Massachusetts, and Newport, Rhode Island. Their full deep-ocean concept calls for approximately 20 such ASVs of a similar size, likely powered by wave or solar energy.

    This work was funded through Lincoln Laboratory’s internally administered R&D portfolio on autonomous systems. The team is now seeking external sponsorship to continue development of their ocean floor–mapping technology, which was recognized with a 2024 R&D 100 Award.

  • MIT spinout Commonwealth Fusion Systems unveils plans for the world’s first fusion power plant

    America is one step closer to tapping into a new and potentially limitless clean energy source today, with the announcement from MIT spinout Commonwealth Fusion Systems (CFS) that it plans to build the world’s first grid-scale fusion power plant in Chesterfield County, Virginia.

    The announcement is the latest milestone for the company, which has made groundbreaking progress toward harnessing fusion — the reaction that powers the sun — since its founders first conceived of their approach in an MIT classroom in 2012. CFS is now commercializing a suite of advanced technologies developed in MIT research labs.

    “This moment exemplifies the power of MIT’s mission, which is to create knowledge that serves the nation and the world, whether via the classroom, the lab, or out in communities,” MIT Vice President for Research Ian Waitz says. “From student coursework 12 years ago to today’s announcement of the siting in Virginia of the world’s first fusion power plant, progress has been amazingly rapid. At the same time, we owe this progress to over 65 years of sustained investment by the U.S. federal government in basic science and energy research.”

    The new fusion power plant, named ARC, is expected to come online in the early 2030s and generate about 400 megawatts of clean, carbon-free electricity — enough energy to power large industrial sites or about 150,000 homes.

    The plant will be built at the James River Industrial Park outside of Richmond through a nonfinancial collaboration with Dominion Energy Virginia, which will provide development and technical expertise along with leasing rights for the site. CFS will independently finance, build, own, and operate the power plant.

    The plant will support Virginia’s economic and clean energy goals by generating what is expected to be billions of dollars in economic development and hundreds of jobs during its construction and long-term operation.

    More broadly, ARC will position the U.S.
to lead the world in harnessing a new form of safe and reliable energy that could prove critical for economic prosperity and national security, including for meeting increasing electricity demands driven by needs like artificial intelligence.

    “This will be a watershed moment for fusion,” says CFS co-founder Dennis Whyte, the Hitachi America Professor of Engineering at MIT. “It sets the pace in the race toward commercial fusion power plants. The ambition is to build thousands of these power plants and to change the world.”

    Fusion can generate energy from abundant fuels like hydrogen and lithium isotopes, which can be sourced from seawater, and leave behind no emissions or toxic waste. However, harnessing fusion in a way that produces more power than it takes in has proven difficult because of the high temperatures needed to create and maintain the fusion reaction. Over the course of decades, scientists and engineers have worked to make the dream of fusion power plants a reality.

    In 2012, teaching the MIT class 22.63 (Principles of Fusion Engineering), Whyte challenged a group of graduate students to design a fusion device that would use a new kind of superconducting magnet to confine the plasma used in the reaction. It turned out the magnets enabled a more compact and economical reactor design. When Whyte reviewed his students’ work, he realized that could mean a new development path for fusion.

    Since then, a huge amount of capital and expertise has rushed into the once-fledgling fusion industry. Today there are dozens of private fusion companies around the world racing to develop the first net-energy fusion power plants, many utilizing the new superconducting magnets. CFS, which Whyte founded with several students from his class, has attracted more than $2 billion in funding.

    “It all started with that class, where our ideas kept evolving as we challenged the standard assumptions that came with fusion,” Whyte says.
“We had this new superconducting technology, so much of the common wisdom was no longer valid. It was a perfect forum for students, who can challenge the status quo.”

    Since the company’s founding in 2017, it has collaborated with researchers in MIT’s Plasma Science and Fusion Center (PSFC) on a range of initiatives, from validating the underlying plasma physics for the first demonstration machine to breaking records with a new kind of magnet to be used in commercial fusion power plants. Each piece of progress moves the U.S. closer to harnessing a revolutionary new energy source.

    CFS is currently completing development of its fusion demonstration machine, SPARC, at its headquarters in Devens, Massachusetts. SPARC is expected to produce its first plasma in 2026 and net fusion energy shortly after, demonstrating for the first time a commercially relevant design that will produce more power than it consumes. SPARC will pave the way for ARC, which is expected to deliver power to the grid in the early 2030s.

    “There’s more challenging engineering and science to be done in this field, and we’re very enthusiastic about the progress that CFS and the researchers on our campus are making on those problems,” Waitz says. “We’re in a ‘hockey stick’ moment in fusion energy, where things are moving incredibly quickly now. On the other hand, we can’t forget about the much longer part of that hockey stick, the sustained support for very complex, fundamental research that underlies great innovations. If we’re going to continue to lead the world in these cutting-edge technologies, continued investment in those areas will be crucial.”

  • New climate chemistry model finds “non-negligible” impacts of potential hydrogen fuel leakage

    As the world looks for ways to stop climate change, much discussion focuses on using hydrogen instead of fossil fuels, which emit climate-warming greenhouse gases (GHGs) when they’re burned. The idea is appealing. Burning hydrogen doesn’t emit GHGs to the atmosphere, and hydrogen is well-suited for a variety of uses, notably as a replacement for natural gas in industrial processes, power generation, and home heating.

    But while burning hydrogen won’t emit GHGs, any hydrogen that’s leaked from pipelines or storage or fueling facilities can indirectly cause climate change by affecting other compounds that are GHGs, including tropospheric ozone and methane, with methane impacts being the dominant effect. A much-cited 2022 modeling study analyzing hydrogen’s effects on chemical compounds in the atmosphere concluded that these climate impacts could be considerable. With funding from the MIT Energy Initiative’s Future Energy Systems Center, a team of MIT researchers took a more detailed look at the specific chemistry that poses the risks of using hydrogen as a fuel if it leaks.

    The researchers developed a model that tracks many more chemical reactions that may be affected by hydrogen and includes interactions among chemicals. Their open-access results, published Oct. 28 in Frontiers in Energy Research, showed that while the impact of leaked hydrogen on the climate wouldn’t be as large as the 2022 study predicted — and would be about a third of the impact of any natural gas that escapes today — leaked hydrogen will impact the climate. Leak prevention should therefore be a top priority as the hydrogen infrastructure is built, the researchers state.

    Hydrogen’s impact on the “detergent” that cleans our atmosphere

    Global three-dimensional climate-chemistry models using a large number of chemical reactions have also been used to evaluate hydrogen’s potential climate impacts, but results vary from one model to another, motivating the MIT study to analyze the chemistry.
Most studies of the climate effects of using hydrogen consider only the GHGs emitted during production of the hydrogen fuel. Different production approaches yield “blue hydrogen” or “green hydrogen,” labels that reflect the GHGs emitted. Regardless of the process used to make the hydrogen, though, the fuel itself can threaten the climate. For widespread use, hydrogen will need to be transported, distributed, and stored — in short, there will be many opportunities for leakage. The question is, what happens to that leaked hydrogen when it reaches the atmosphere?

    The 2022 study predicting large climate impacts from leaked hydrogen was based on reactions between pairs of just four chemical compounds in the atmosphere. The results showed that the hydrogen would deplete a chemical species that atmospheric chemists call the “detergent of the atmosphere,” explains Candice Chen, a PhD candidate in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “It goes around zapping greenhouse gases, pollutants, all sorts of bad things in the atmosphere. So it’s cleaning our air.” Best of all, that detergent — the hydroxyl radical, abbreviated as OH — removes methane, an extremely potent GHG in the atmosphere. OH thus plays an important role in slowing the rate at which global temperatures rise. But any hydrogen leaked to the atmosphere would reduce the amount of OH available to clean up methane, so the concentration of methane would increase.

    However, chemical reactions among compounds in the atmosphere are notoriously complicated. While the 2022 study used a “four-equation model,” Chen and her colleagues — Susan Solomon, the Lee and Geraldine Martin Professor of Environmental Studies and Chemistry; and Kane Stone, a research scientist in EAPS — developed a model that includes 66 chemical reactions.
Analyses using their 66-equation model showed that the four-equation system didn’t capture a critical feedback involving OH — a feedback that acts to protect the methane-removal process.

    Here’s how that feedback works: As the hydrogen decreases the concentration of OH, the cleanup of methane slows down, so the methane concentration increases. However, that methane undergoes chemical reactions that can produce new OH radicals. “So the methane that’s being produced can make more of the OH detergent,” says Chen. “There’s a small countering effect. Indirectly, the methane helps produce the thing that’s getting rid of it.” And, says Chen, that’s a key difference between their 66-equation model and the four-equation one. “The simple model uses a constant value for the production of OH, so it misses that key OH-production feedback,” she says.

    To explore the importance of including that feedback effect, the MIT researchers performed the following analysis: They assumed that a single pulse of hydrogen was injected into the atmosphere and predicted the change in methane concentration over the next 100 years, first using the four-equation model and then using the 66-equation model. With the four-equation system, the additional methane concentration peaked at nearly 2 parts per billion (ppb); with the 66-equation system, it peaked at just over 1 ppb.

    Because the four-equation analysis assumes only that the injected hydrogen destroys the OH, the methane concentration increases unchecked for the first 10 years or so. In contrast, the 66-equation analysis goes one step further: the methane concentration does increase, but as the system re-equilibrates, more OH forms and removes methane. By not accounting for that feedback, the four-equation analysis overestimates the peak increase in methane due to the hydrogen pulse by about 85 percent.
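    The direction of this effect can be reproduced with a toy box model. All rate constants, concentrations, and the feedback strength below are arbitrary illustrative values, not the paper's 66-reaction chemistry: one run holds OH production constant, as in a four-equation-style model, while the other lets OH production rise with the methane anomaly as a crude stand-in for the OH-recycling feedback.

```python
def run(feedback: bool, h2_pulse: float = 0.5, dt: float = 0.01, steps: int = 20_000):
    """Toy OH/CH4/H2 box model (arbitrary units); returns the peak CH4
    anomaly after an H2 pulse is injected at t = 0."""
    k_ch4, k_h2, k_other = 1.0, 0.5, 1.0   # OH loss rate constants (illustrative)
    ch4, h2 = 1.0, h2_pulse                 # start at CH4 equilibrium, inject H2
    p0 = 2.0                                # baseline OH production
    source = 1.0                            # CH4 emissions balancing the initial sink
    peak = 0.0
    for _ in range(steps):
        # With feedback on, OH production grows with the methane anomaly,
        # crudely mimicking OH regeneration from methane oxidation.
        prod = p0 * (1 + 0.5 * (ch4 - 1.0)) if feedback else p0
        oh = prod / (k_ch4 * ch4 + k_h2 * h2 + k_other)  # OH in fast steady state
        ch4 += dt * (source - k_ch4 * oh * ch4)
        h2 += dt * (-k_h2 * oh * h2)
        peak = max(peak, ch4 - 1.0)
    return peak

peak_simple = run(feedback=False)
peak_fb = run(feedback=True)
print(f"Peak CH4 anomaly, constant-OH model: {peak_simple:.3f}")
print(f"Peak CH4 anomaly, with OH feedback:  {peak_fb:.3f}")
```

    As in the paper, the constant-OH run overshoots: with the feedback switched on, regenerated OH removes part of the extra methane, so the peak anomaly is smaller.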
Spread over time, the simple model doubles the amount of methane that forms in response to the hydrogen pulse.

    Chen cautions that the point of their work is not to present their result as “a solid estimate” of the impact of hydrogen. Their analysis is based on a simple “box” model that represents global average conditions and assumes that all the chemical species present are well mixed. Thus, the species can vary over time — that is, they can be formed and destroyed — but any species that are present are always perfectly mixed. As a result, a box model does not account for the impact of, say, wind on the distribution of species. “The point we’re trying to make is that you can go too simple,” says Chen. “If you’re going simpler than what we’re representing, you will get further from the right answer.” She goes on to note, “The utility of a relatively simple model like ours is that all of the knobs and levers are very clear. That means you can explore the system and see what affects a value of interest.”

    Leaked hydrogen versus leaked natural gas: A climate comparison

    Burning natural gas produces fewer GHG emissions than does burning coal or oil; but as with hydrogen, any natural gas that’s leaked from wells, pipelines, and processing facilities can have climate impacts, negating some of the perceived benefits of using natural gas in place of other fossil fuels. After all, natural gas consists largely of methane, the highly potent GHG in the atmosphere that’s cleaned up by the OH detergent. Given its potency, even small leaks of methane can have a large climate impact.

    So when thinking about replacing natural gas fuel — essentially methane — with hydrogen fuel, it’s important to consider how the climate impacts of the two fuels compare if and when they’re leaked. The usual way to compare the climate impacts of two chemicals is using a measure called the global warming potential, or GWP.
The GWP combines two measures: the radiative forcing of a gas — that is, its heat-trapping ability — with its lifetime in the atmosphere. Since the lifetimes of gases differ widely, the convention is to relate the GWP of each gas to the GWP of carbon dioxide. But hydrogen and methane leakage both cause increases in methane, and that methane decays according to its lifetime. Chen and her colleagues therefore realized that an unconventional procedure would work: they could compare the impacts of the two leaked gases directly. What they found was that the climate impact of hydrogen is about three times less than that of methane (on a per-mass basis). So switching from natural gas to hydrogen would not only eliminate combustion emissions, but also potentially reduce the climate effects, depending on how much leaks.

    Key takeaways

    In summary, Chen highlights some of what she views as the key findings of the study. First on her list is the following: “We show that a really simple four-equation system is not what should be used to project out the atmospheric response to more hydrogen leakages in the future.” The researchers believe that their 66-equation model is a good compromise for the number of chemical reactions to include. It generates estimates for the GWP of methane “pretty much in line with the lower end of the numbers that most other groups are getting using much more sophisticated climate chemistry models,” says Chen. And it’s sufficiently transparent to use in exploring various options for protecting the climate. Indeed, the MIT researchers plan to use their model to examine scenarios that involve replacing other fossil fuels with hydrogen to estimate the climate benefits of making the switch in coming decades.

    The study also demonstrates a valuable new way to compare the greenhouse effects of two gases.
As long as their effects exist on similar time scales, a direct comparison is possible — and preferable to comparing each with carbon dioxide, which is extremely long-lived in the atmosphere. In this work, the direct comparison generates a simple look at the relative climate impacts of leaked hydrogen and leaked methane — valuable information to take into account when considering switching from natural gas to hydrogen.

    Finally, the researchers offer practical guidance for infrastructure development and use for both hydrogen and natural gas. Their analyses determine that hydrogen fuel itself has a “non-negligible” GWP, as does natural gas, which is mostly methane. Therefore, minimizing leakage of both fuels will be necessary to achieve net-zero carbon emissions by 2050, the goal set by both the European Commission and the U.S. Department of State. Their paper concludes, “If used nearly leak-free, hydrogen is an excellent option. Otherwise, hydrogen should only be a temporary step in the energy transition, or it must be used in tandem with carbon-removal steps [elsewhere] to counter its warming effects.”
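    The direct-comparison idea can be sketched as follows: treat each leak's effect as a perturbation that decays exponentially with a lifetime, integrate its impact over a chosen horizon, and take the ratio of the two integrals, with no carbon dioxide reference needed. The per-unit values below are placeholders chosen only to mirror the article's roughly factor-of-three result, and the 12-year methane lifetime is a commonly cited approximation, not a number from the paper.

```python
import math

def integrated_impact(impact_per_unit: float, lifetime_yr: float,
                      horizon_yr: float = 100.0) -> float:
    """Time-integrated effect of a perturbation of size impact_per_unit
    decaying as exp(-t / lifetime): amplitude * lifetime * (1 - e^(-H/tau))."""
    return impact_per_unit * lifetime_yr * (1 - math.exp(-horizon_yr / lifetime_yr))

# Both leaked CH4 and leaked H2 act through a methane perturbation on
# similar (decade-scale) time scales, so their impacts can be ratioed
# directly. Placeholder per-mass impacts, reflecting the reported ~3x gap:
impact_ch4 = integrated_impact(impact_per_unit=1.0, lifetime_yr=12.0)
impact_h2 = integrated_impact(impact_per_unit=1.0 / 3.0, lifetime_yr=12.0)
print(f"Direct impact ratio (CH4 : H2) ~ {impact_ch4 / impact_h2:.1f}")
```

    Because the two perturbations share a time scale, the lifetime factor cancels and the ratio is just the per-unit amplitudes, which is what makes the direct comparison cleaner than routing both through a CO2-referenced GWP.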

  • Enabling a circular economy in the built environment

    The amount of waste generated by the construction sector underscores an urgent need to embrace circularity — a sustainable model that aims to minimize waste and maximize material efficiency through recovery and reuse — in the built environment: 600 million tons of construction and demolition waste was produced in the United States alone in 2018, with 820 million tons reported in the European Union, and in excess of 2 billion tons annually in China.

    This significant resource loss embedded in our current industrial ecosystem marks a linear economy that operates on a “take-make-dispose” model of construction; in contrast, the “make-use-reuse” approach of a circular economy offers an important opportunity to reduce environmental impacts.

    A team of MIT researchers has begun to assess what may be needed to spur widespread circular transition within the built environment in a new open-access study that aims to understand stakeholders’ current perceptions of circularity and quantify their willingness to pay.

    “This paper acts as an initial endeavor into understanding what the industry may be motivated by, and how integration of stakeholder motivations could lead to greater adoption,” says lead author Juliana Berglund-Brown, a PhD student in the Department of Architecture at MIT.

    Considering stakeholders’ perceptions

    Three different stakeholder groups from North America, Europe, and Asia — material suppliers, design and construction teams, and real estate developers — were surveyed by the research team, which also comprises Akrisht Pandey ’23; Fabio Duarte, associate director of the MIT Senseable City Lab; Raquel Ganitsky, fellow in the Sustainable Real Estate Development Action Program; Randolph Kirchain, co-director of the MIT Concrete Sustainability Hub; and Siqi Zheng, the STL Champion Professor of Urban and Real Estate Sustainability in the Department of Urban Studies and Planning.

    Despite growing awareness of reuse practice among construction industry stakeholders, circular
practices have yet to be implemented at scale — attributable to many factors that influence the intersection of construction needs with government regulations and the economic interests of real estate developers.

    The study notes that perceived barriers to circular adoption differ based on industry role, with lack of both client interest and standardized structural assessment methods identified as the primary concerns of design and construction teams, while the largest deterrents for material suppliers are logistics complexity and supply uncertainty. Real estate developers, on the other hand, are chiefly concerned with higher costs and structural assessment. Yet encouragingly, respondents expressed willingness to absorb higher costs, with developers indicating readiness to pay an average of 9.6 percent higher construction costs for a minimum 52.9 percent reduction in embodied carbon — and all stakeholders highly favor the potential of incentives like tax exemptions to aid with cost premiums.

    Next steps to encourage circularity

    The findings highlight the need for further conversation between design teams and developers, as well as for additional exploration into potential solutions to practical challenges. “The thing about circularity is that there is opportunity for a lot of value creation, and subsequently profit,” says Berglund-Brown. “If people are motivated by cost, let’s provide a cost incentive, or establish strategies that have one.”

    When it comes to motivating reasons to adopt circularity practices, the study also found trends emerging by industry role.
Future net-zero goals influence developers as well as design and construction teams, with government regulation the third-most frequently named reason across all respondent types.

    “The construction industry needs a market driver to embrace circularity,” says Berglund-Brown. “Be it carrots or sticks, stakeholders require incentives for adoption.”

    The effect of policy in motivating change cannot be overstated, with major strides made in low-operational-carbon building design after policies restricting emissions were introduced, such as Local Law 97 in New York City and the Building Emissions Reduction and Disclosure Ordinance in Boston. These pieces of policy, and their results, can serve as models for embodied-carbon reduction policy elsewhere.

    Berglund-Brown suggests that municipalities might initiate ordinances requiring buildings to be deconstructed, which would allow components to be reused, curbing demolition methods that result in waste rather than salvage. Top-down ordinances could be one way to trigger a supply chain shift toward reprocessing building materials that are typically deemed “end-of-life.”

    The study also identifies other challenges to the implementation of circularity at scale, including risk associated with how to reuse materials in new buildings, and disruption of status quo design practices.

    “Understanding the best way to motivate transition despite uncertainty is where our work comes in,” says Berglund-Brown. “Beyond that, researchers can continue to do a lot to alleviate risk — like developing standards for reuse.”

    Innovations that challenge the status quo

    Disrupting the status quo is not unusual for MIT researchers; other visionary work in construction circularity pioneered at MIT includes “a smart kit of parts” called Pixelframe.
This system for modular concrete reuse allows building elements to be disassembled and rebuilt several times, aiding deconstruction and reuse while maintaining material efficiency and versatility.

    Developed by MIT Climate and Sustainability Consortium Associate Director Caitlin Mueller’s research team, Pixelframe is designed to accommodate a wide range of applications from housing to warehouses, with each of its interlocking precast concrete modules, called Pixels, assigned a material passport to enable tracking through its many life cycles.

    Mueller’s work demonstrates that circularity can work technically and logistically at the scale of the built environment — by designing specifically for disassembly, configuration, versatility, and upfront carbon and cost efficiency.

    “This can be built today. This is building code-compliant today,” said Mueller of Pixelframe in a keynote speech at the recent MCSC Annual Symposium, which saw industry representatives and members of the MIT community coming together to discuss scalable solutions to climate and sustainability problems. “We currently have the potential for high-impact carbon reduction as a compelling alternative to the business-as-usual construction methods we are used to.”

    Pixelframe was recently awarded a grant by the Massachusetts Clean Energy Center (MassCEC) to pursue commercialization, an important next step toward integrating innovations like this into a circular economy in practice. “It’s MassCEC’s job to make sure that these climate leaders have the resources they need to turn their technologies into successful businesses that make a difference around the world,” said MassCEC CEO Emily Reichart in a press release.

    Additional support for circular innovation has emerged thanks to a historic piece of climate legislation from the Biden administration.
The Environmental Protection Agency recently awarded a federal grant on the topic of advancing steel reuse to Berglund-Brown — whose PhD thesis focuses on scaling the reuse of structural heavy-section steel — and John Ochsendorf, the Class of 1942 Professor of Civil and Environmental Engineering and Architecture at MIT.

“There is a lot of exciting upcoming work on this topic,” says Berglund-Brown. “To any practitioners reading this who are interested in getting involved — please reach out.”

The study is supported in part by the MIT Climate and Sustainability Consortium.

  •

    So you want to build a solar or wind farm? Here’s how to decide where.

    Deciding where to build new solar or wind installations is often left up to individual developers or utilities, with limited overall coordination. But a new study shows that regional-level planning using fine-grained weather data, information about energy use, and energy system modeling can make a big difference in the design of such renewable power installations. This also leads to more efficient and economically viable operations.

    The findings show the benefits of coordinating the siting of solar farms, wind farms, and storage systems, taking into account local and temporal variations in wind, sunlight, and energy demand to maximize the utilization of renewable resources. This approach can reduce the need for sizable investments in storage, and thus the total system cost, while maximizing availability of clean power when it’s needed, the researchers found.

    The study, appearing today in the journal Cell Reports Sustainability, was co-authored by Liying Qiu and Rahman Khorramfar, postdocs in MIT’s Department of Civil and Environmental Engineering, and professors Saurabh Amin and Michael Howland.

    Qiu, the lead author, says that with the team’s new approach, “we can harness the resource complementarity, which means that renewable resources of different types, such as wind and solar, or different locations can compensate for each other in time and space. This potential for spatial complementarity to improve system design has not been emphasized and quantified in existing large-scale planning.”

    Such complementarity will become ever more important as variable renewable energy sources account for a greater proportion of power entering the grid, she says.
    By coordinating the peaks and valleys of production and demand more smoothly, she says, “we are actually trying to use the natural variability itself to address the variability.”

    Typically, in planning large-scale renewable energy installations, Qiu says, “some work on a country level, for example saying that 30 percent of energy should be wind and 20 percent solar. That’s very general.” For this study, the team looked at both weather data and energy system planning modeling at a resolution of less than 10 kilometers (about 6 miles). “It’s a way of determining where should we, exactly, build each renewable energy plant, rather than just saying this city should have this many wind or solar farms,” she explains.

    To compile their data and enable high-resolution planning, the researchers relied on a variety of sources that had not previously been integrated. They used high-resolution meteorological data from the National Renewable Energy Laboratory, which is publicly available at 2-kilometer resolution but rarely used in a planning model at such a fine scale. These data were combined with an energy system model they developed to optimize siting at a sub-10-kilometer resolution.

    To get a sense of how the fine-scale data and model made a difference in different regions, they focused on three U.S. regions — New England, Texas, and California — analyzing up to 138,271 possible siting locations simultaneously for a single region.

    By comparing the results of siting based on a typical method vs. their high-resolution approach, the team showed that “resource complementarity really helps us reduce the system cost by aligning renewable power generation with demand,” which should translate directly to real-world decision-making, Qiu says.
    “If an individual developer wants to build a wind or solar farm and just goes to where there is the most wind or solar resource on average, it may not necessarily guarantee the best fit into a decarbonized energy system.”

    That’s because of the complex interactions between production and demand for electricity, as both vary hour by hour, and month by month as seasons change. “What we are trying to do is minimize the difference between the energy supply and demand rather than simply supplying as much renewable energy as possible,” Qiu says. “Sometimes your generation cannot be utilized by the system, while at other times, you don’t have enough to match the demand.”

    In New England, for example, the new analysis shows there should be more wind farms in locations where there is a strong wind resource during the night, when solar energy is unavailable. Some locations tend to be windier at night, while others tend to have more wind during the day.

    These insights were revealed through the integration of high-resolution weather data and the energy system optimization used by the researchers. When planning with lower-resolution weather data, which was generated at a 30-kilometer resolution globally and is more commonly used in energy system planning, there was much less complementarity among renewable power plants, and consequently the total system cost was much higher. The complementarity between wind and solar farms was enhanced by the high-resolution modeling due to improved representation of renewable resource variability.

    The researchers say their framework is very flexible and can be easily adapted to any region to account for local geophysical and other conditions.
    In Texas, for example, peak winds in the west occur in the morning, while along the south coast they occur in the afternoon, so the two naturally complement each other.

    Khorramfar says that this work “highlights the importance of data-driven decision making in energy planning.” The work shows that using such high-resolution data coupled with a carefully formulated energy planning model “can drive the system cost down, and ultimately offer more cost-effective pathways for energy transition.”

    One thing that was surprising about the findings, says Amin, who is a principal investigator in MIT’s Laboratory for Information and Decision Systems, is how significant the gains were from analyzing relatively short-term variations in inputs and outputs that take place in a 24-hour period. “The kind of cost-saving potential by trying to harness complementarity within a day was not something that one would have expected before this study,” he says.

    In addition, Amin says, it was also surprising how much this kind of modeling could reduce the need for storage as part of these energy systems. “This study shows that there is actually a hidden cost-saving potential in exploiting local patterns in weather, that can result in a monetary reduction in storage cost.”

    The system-level analysis and planning suggested by this study, Howland says, “changes how we think about where we site renewable power plants and how we design those renewable plants, so that they maximally serve the energy grid. It has to go beyond just driving down the cost of energy of individual wind or solar farms. And these new insights can only be realized if we continue collaborating across traditional research boundaries, by integrating expertise in fluid dynamics, atmospheric science, and energy engineering.”

    The research was supported by the MIT Climate and Sustainability Consortium and MIT Climate Grand Challenges.
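The siting principle Qiu describes, minimizing the hourly mismatch between supply and demand rather than maximizing average output, can be sketched in a toy calculation. Everything below (the three candidate profiles, the demand curve, the two-site budget) is invented for illustration and is not the study's data or optimization model:

```python
from itertools import combinations

import numpy as np

hours = np.arange(24)

# Invented hourly generation profiles (per unit of capacity) for three
# hypothetical candidate sites.
solar = np.clip(np.sin((hours - 6) / 12 * np.pi), 0, None)   # daytime only
day_wind = 0.4 + 0.3 * np.sin((hours - 9) / 12 * np.pi)      # windier by day
night_wind = 0.4 - 0.3 * np.sin((hours - 9) / 12 * np.pi)    # windier at night
candidates = {"solar": solar, "day_wind": day_wind, "night_wind": night_wind}

# Invented demand: a flat base plus a daytime peak.
demand = 0.4 + np.clip(np.sin((hours - 6) / 12 * np.pi), 0, None)

def mismatch(selected):
    """Sum of squared hourly gaps between demand and combined generation."""
    supply = sum(candidates[name] for name in selected)
    return float(np.sum((demand - supply) ** 2))

# Naive siting: pick the two sites with the highest average output.
naive = sorted(candidates, key=lambda n: candidates[n].mean(), reverse=True)[:2]

# Complementarity-aware siting: pick the pair that minimizes the mismatch.
best_pair = min(combinations(candidates, 2), key=mismatch)

print("naive:", naive, round(mismatch(naive), 2))
print("complementarity-aware:", list(best_pair), round(mismatch(best_pair), 2))
```

In this toy setup, the naive rule picks the two wind sites (highest mean output) and leaves a large daytime shortfall, while the mismatch-aware rule pairs solar with a wind site that covers the base load. The study solves an analogous but far richer problem, over up to 138,271 candidate locations, with a full energy system model.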

  •

    A new biodegradable material to replace certain microplastics

    Microplastics are an environmental hazard found nearly everywhere on Earth, released by the breakdown of tires, clothing, and plastic packaging. Another significant source of microplastics is tiny beads that are added to some cleansers, cosmetics, and other beauty products.

    In an effort to cut off some of these microplastics at their source, MIT researchers have developed a class of biodegradable materials that could replace the plastic beads now used in beauty products. These polymers break down into harmless sugars and amino acids.

    “One way to mitigate the microplastics problem is to figure out how to clean up existing pollution. But it’s equally important to look ahead and focus on creating materials that won’t generate microplastics in the first place,” says Ana Jaklenec, a principal investigator at MIT’s Koch Institute for Integrative Cancer Research.

    These particles could also find other applications. In the new study, Jaklenec and her colleagues showed that the particles could be used to encapsulate nutrients such as vitamin A. Fortifying foods with encapsulated vitamin A and other nutrients could help some of the 2 billion people around the world who suffer from nutrient deficiencies.

    Jaklenec and Robert Langer, an MIT Institute Professor and member of the Koch Institute, are the senior authors of the paper, which appears today in Nature Chemical Engineering. The paper’s lead author is Linzixuan (Rhoda) Zhang, an MIT graduate student in chemical engineering.

    Biodegradable plastics

    In 2019, Jaklenec, Langer, and others reported a polymer material that they showed could be used to encapsulate vitamin A and other essential nutrients. They also found that people who consumed bread made from flour fortified with encapsulated iron showed increased iron levels.

    However, since then, the European Union has classified this polymer, known as BMC, as a microplastic and included it in a ban that went into effect in 2023.
    As a result, the Bill and Melinda Gates Foundation, which funded the original research, asked the MIT team if they could design an alternative that would be more environmentally friendly.

    The researchers, led by Zhang, turned to a type of polymer that Langer’s lab had previously developed, known as poly(beta-amino esters). These polymers, which have shown promise as vehicles for gene delivery and other medical applications, are biodegradable and break down into sugars and amino acids.

    By changing the composition of the material’s building blocks, researchers can tune properties such as hydrophobicity (ability to repel water), mechanical strength, and pH sensitivity. After creating five different candidate materials, the MIT team tested them and identified one that appeared to have the optimal composition for microplastic applications, including the ability to dissolve when exposed to acidic environments such as the stomach.

    The researchers showed that they could use these particles to encapsulate vitamin A, as well as vitamin D, vitamin E, vitamin C, zinc, and iron. Many of these nutrients are susceptible to heat and light degradation, but when encased in the particles, the researchers found that the nutrients could withstand exposure to boiling water for two hours.

    They also showed that even after being stored for six months at high temperature and high humidity, more than half of the encapsulated vitamins were undamaged.

    To demonstrate their potential for fortifying food, the researchers incorporated the particles into bouillon cubes, which are commonly consumed in many African countries.
    They found that when incorporated into bouillon, the nutrients remained intact after being boiled for two hours.

    “Bouillon is a staple ingredient in sub-Saharan Africa, and offers a significant opportunity to improve the nutritional status of many billions of people in those regions,” Jaklenec says.

    In this study, the researchers also tested the particles’ safety by exposing them to cultured human intestinal cells and measuring their effects on the cells. At the doses that would be used for food fortification, they found no damage to the cells.

    Better cleansing

    To explore the particles’ ability to replace the microbeads that are often added to cleansers, the researchers mixed the particles with soap foam. This mixture, they found, could remove permanent marker and waterproof eyeliner from skin much more effectively than soap alone.

    Soap mixed with the new biodegradable particles was also more effective than a cleanser that includes polyethylene microbeads, the researchers found. They also discovered that the new particles did a better job of absorbing potentially toxic elements such as heavy metals.

    “We wanted to use this as a first step to demonstrate how it’s possible to develop a new class of materials, to expand from existing material categories, and then to apply it to different applications,” Zhang says.

    With a grant from Estée Lauder, the researchers are now working on further testing the microbeads as a cleanser and potentially other applications, and they plan to run a small human trial later this year. They are also gathering safety data that could be used to apply for GRAS (generally regarded as safe) classification from the U.S.
    Food and Drug Administration and are planning a clinical trial of foods fortified with the particles.

    The researchers hope their work could help to significantly reduce the amount of microplastic released into the environment from health and beauty products.

    “This is just one small part of the broader microplastics issue, but as a society we’re beginning to acknowledge the seriousness of the problem. This work offers a step forward in addressing it,” Jaklenec says. “Polymers are incredibly useful and essential in countless applications in our daily lives, but they come with downsides. This is an example of how we can reduce some of those negative aspects.”

    The research was funded by the Gates Foundation and the U.S. National Science Foundation.

  •

    Liquid on Mars was not necessarily all water

    Dry river channels and lake beds on Mars point to the long-ago presence of a liquid on the planet’s surface, and the minerals observed from orbit and from landers seem to many to prove that the liquid was ordinary water.

    Not so fast, the authors of a new Perspectives article in Nature Geoscience suggest. Water is only one of two possible liquids under what are thought to be the conditions present on ancient Mars. The other is liquid carbon dioxide (CO2), and it may actually have been easier for CO2 in the atmosphere to condense into a liquid under those conditions than for water ice to melt.

    While others have suggested that liquid CO2 (LCO2) might be the source of some of the river channels seen on Mars, the mineral evidence has seemed to point uniquely to water. However, the new paper cites recent studies of carbon sequestration, the process of burying liquefied CO2 recovered from Earth’s atmosphere deep in underground caverns, which show that similar mineral alteration can occur in liquid CO2 as in water, sometimes even more rapidly.

    The new paper is led by Michael Hecht, principal investigator of the MOXIE instrument aboard the NASA Mars rover Perseverance. Hecht, a research scientist at MIT’s Haystack Observatory and a former associate director, says, “Understanding how sufficient liquid water was able to flow on early Mars to explain the morphology and mineralogy we see today is probably the greatest unsettled question of Mars science. There is likely no one right answer, and we are merely suggesting another possible piece of the puzzle.”

    In the paper, the authors discuss the compatibility of their proposal with current knowledge of Martian atmospheric content and implications for Mars surface mineralogy.
    They also explore the latest carbon sequestration research and conclude that “LCO2–mineral reactions are consistent with the predominant Mars alteration products: carbonates, phyllosilicates, and sulfates.” The argument for the probable existence of liquid CO2 on the Martian surface is not an all-or-nothing scenario; either liquid CO2, liquid water, or a combination may have brought about such geomorphological and mineralogical evidence for a liquid Mars.

    Three plausible cases for liquid CO2 on the Martian surface are proposed and discussed: stable surface liquid, basal melting under CO2 ice, and subsurface reservoirs. The likelihood of each depends on the actual inventory of CO2 at the time, as well as the temperature conditions on the surface.

    The authors acknowledge that the tested sequestration conditions, where the liquid CO2 is above room temperature at pressures of tens of atmospheres, are very different from the cold, relatively low-pressure conditions that might have produced liquid CO2 on early Mars. They call for further laboratory investigations under more realistic conditions to test whether the same chemical reactions occur.

    Hecht explains, “It’s difficult to say how likely it is that this speculation about early Mars is actually true. What we can say, and we are saying, is that the likelihood is high enough that the possibility should not be ignored.”
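The phase argument rests on standard reference data: liquid CO2 cannot exist below its triple-point pressure of roughly 5.2 bar, whereas liquid water requires temperatures above 273 K. A minimal sketch of that window check, using textbook triple- and critical-point values and a Clausius-Clapeyron estimate of the CO2 saturation curve (the ~15.5 kJ/mol latent heat and the sample conditions are illustrative assumptions, not figures from the paper):

```python
import math

R = 8.314                                   # gas constant, J/(mol*K)
CO2_TRIPLE_T, CO2_TRIPLE_P = 216.6, 5.2     # K, bar (textbook values)
CO2_CRITICAL_T = 304.1                      # K
H2O_MELT_T, H2O_TRIPLE_P = 273.15, 0.006    # K, bar
L_VAP_CO2 = 15.5e3   # J/mol, approximate latent heat near the triple point

def co2_saturation_pressure(T):
    """Clausius-Clapeyron estimate of CO2 vapor pressure (bar), anchored at the triple point."""
    return CO2_TRIPLE_P * math.exp(-L_VAP_CO2 / R * (1 / T - 1 / CO2_TRIPLE_T))

def liquid_co2_possible(T, P):
    """Liquid CO2 needs T between the triple and critical points, and P above the saturation curve."""
    return CO2_TRIPLE_T <= T <= CO2_CRITICAL_T and P >= co2_saturation_pressure(T)

def liquid_water_possible(T, P):
    """Liquid water needs above-freezing temperatures and more than the (tiny) triple-point pressure."""
    return T >= H2O_MELT_T and P >= H2O_TRIPLE_P

# Present-day Mars (~210 K, ~0.006 bar): neither liquid is stable.
# A hypothetical dense early CO2 atmosphere (220 K, 6 bar): liquid CO2 is
# possible while water ice still cannot melt.
for T, P in [(210, 0.006), (220, 6.0), (280, 1.0)]:
    print(f"T={T} K, P={P} bar: "
          f"LCO2={liquid_co2_possible(T, P)}, H2O={liquid_water_possible(T, P)}")
```

This is the sense in which liquid CO2 could have been "easier" to obtain than liquid water on a cold early Mars: a few bars of CO2 overpressure put the surface inside the CO2 liquid window at temperatures tens of degrees below water's melting point.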

  •

    A new catalyst can turn methane into something useful

    Although it is less abundant than carbon dioxide, methane gas contributes disproportionately to global warming because, due to its molecular structure, it traps more heat in the atmosphere than carbon dioxide.

    MIT chemical engineers have now designed a new catalyst that can convert methane into useful polymers, which could help reduce greenhouse gas emissions.

    “What to do with methane has been a longstanding problem,” says Michael Strano, the Carbon P. Dubbs Professor of Chemical Engineering at MIT and the senior author of the study. “It’s a source of carbon, and we want to keep it out of the atmosphere but also turn it into something useful.”

    The new catalyst works at room temperature and atmospheric pressure, which could make it easier and more economical to deploy at sites of methane production, such as power plants and cattle barns.

    Daniel Lundberg PhD ’24 and MIT postdoc Jimin Kim are the lead authors of the study, which appears today in Nature Catalysis. Former postdoc Yu-Ming Tu and postdoc Cody Ritt are also authors of the paper.

    Capturing methane

    Methane is produced by bacteria known as methanogens, which are often highly concentrated in landfills, swamps, and other sites of decaying biomass. Agriculture is a major source of methane, and methane gas is also generated as a byproduct of transporting, storing, and burning natural gas. Overall, it is believed to account for about 15 percent of global temperature increases.

    At the molecular level, methane is made of a single carbon atom bound to four hydrogen atoms. In theory, this molecule should be a good building block for making useful products such as polymers. However, converting methane to other compounds has proven difficult because getting it to react with other molecules usually requires high temperatures and high pressures.

    To achieve methane conversion without that input of energy, the MIT team designed a hybrid catalyst with two components: a zeolite and a naturally occurring enzyme.
    Zeolites are abundant, inexpensive clay-like minerals, and previous work has found that they can be used to catalyze the conversion of methane to carbon dioxide.

    In this study, the researchers used a zeolite called iron-modified aluminum silicate, paired with an enzyme called alcohol oxidase. Bacteria, fungi, and plants use this enzyme to oxidize alcohols.

    This hybrid catalyst performs a two-step reaction in which the zeolite converts methane to methanol, and then the enzyme converts methanol to formaldehyde. That reaction also generates hydrogen peroxide, which is fed back into the zeolite to provide a source of oxygen for the conversion of methane to methanol.

    This series of reactions can occur at room temperature and doesn’t require high pressure. The catalyst particles are suspended in water, which can absorb methane from the surrounding air. For future applications, the researchers envision that it could be painted onto surfaces.

    “Other systems operate at high temperature and high pressure, and they use hydrogen peroxide, which is an expensive chemical, to drive the methane oxidation. But our enzyme produces hydrogen peroxide from oxygen, so I think our system could be very cost-effective and scalable,” Kim says.

    Creating a system that incorporates both enzymes and artificial catalysts is a “smart strategy,” says Damien Debecker, a professor at the Institute of Condensed Matter and Nanosciences at the University of Louvain, Belgium.

    “Combining these two families of catalysts is challenging, as they tend to operate in rather distinct operation conditions.
    By unlocking this constraint and mastering the art of chemo-enzymatic cooperation, hybrid catalysis becomes key-enabling: It opens new perspectives to run complex reaction systems in an intensified way,” says Debecker, who was not involved in the research.

    Building polymers

    Once formaldehyde is produced, the researchers showed they could use that molecule to generate polymers by adding urea, a nitrogen-containing molecule found in urine. This resin-like polymer, known as urea-formaldehyde, is now used in particle board, textiles, and other products.

    The researchers envision that this catalyst could be incorporated into pipes used to transport natural gas. Within those pipes, the catalyst could generate a polymer that could act as a sealant to heal cracks in the pipes, which are a common source of methane leakage. The catalyst could also be applied as a film to coat surfaces that are exposed to methane gas, producing polymers that could be collected for use in manufacturing, the researchers say.

    Strano’s lab is now working on catalysts that could be used to remove carbon dioxide from the atmosphere and combine it with nitrate to produce urea. That urea could then be mixed with the formaldehyde produced by the zeolite-enzyme catalyst to produce urea-formaldehyde.

    The research was funded by the U.S. Department of Energy.
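The two-step cycle described in the article (zeolite: CH4 + H2O2 -> CH3OH + H2O; enzyme: CH3OH + O2 -> HCHO + H2O2) can be sketched as a toy kinetic simulation. The rate constants, rate laws, and concentrations below are invented for illustration (the article reports no kinetics), and oxygen is assumed to be in excess:

```python
# Toy forward-Euler simulation of the coupled zeolite/enzyme cycle.
# Net reaction: CH4 + O2 -> HCHO + H2O, with H2O2 shuttling between steps.
k1, k2 = 0.5, 1.0        # invented rate constants for the two steps
dt, steps = 0.01, 5000   # time step and horizon (total t = 50, arbitrary units)

ch4, h2o2, meoh, hcho = 1.0, 0.05, 0.0, 0.0   # initial concentrations

for _ in range(steps):
    r1 = k1 * ch4 * h2o2   # zeolite step: CH4 + H2O2 -> CH3OH + H2O
    r2 = k2 * meoh         # enzyme step:  CH3OH + O2 -> HCHO + H2O2
    ch4 += -r1 * dt
    meoh += (r1 - r2) * dt
    hcho += r2 * dt
    h2o2 += (r2 - r1) * dt  # consumed by step 1, regenerated by step 2

print(f"CH4 left: {ch4:.3f}, HCHO made: {hcho:.3f}, H2O2 in play: {h2o2:.4f}")
```

Two properties of the cycle show up directly in the simulation: carbon is conserved (CH4 + CH3OH + HCHO stays constant), and the peroxide pool is never consumed on net (H2O2 + CH3OH stays at its small priming value), which is why the system can run on oxygen from air rather than a continuous feed of purchased hydrogen peroxide.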