More stories

  • Small eddies play a big role in feeding ocean microbes

    Subtropical gyres are enormous rotating ocean currents that generate sustained circulations in the Earth’s subtropical regions just to the north and south of the equator. These gyres are slow-moving whirlpools that circulate within massive basins around the world, gathering up nutrients, organisms, and sometimes trash, as the currents rotate from coast to coast.

    For years, oceanographers have puzzled over conflicting observations within subtropical gyres. At the surface, these massive currents appear to host healthy populations of phytoplankton — microbes that feed the rest of the ocean food chain and are responsible for sucking up a significant portion of the atmosphere’s carbon dioxide.

    But judging from what scientists know about the dynamics of gyres, they estimated the currents themselves wouldn’t be able to maintain enough nutrients to sustain the phytoplankton they were seeing. How, then, were the microbes able to thrive?

    Now, MIT researchers have found that phytoplankton may receive deliveries of nutrients from outside the gyres, and that the delivery vehicles are eddies — much smaller currents that swirl at the edges of a gyre. These eddies pull nutrients in from high-nutrient equatorial regions and push them into the center of a gyre, where the nutrients are then taken up by other currents and pumped to the surface to feed phytoplankton.

    Ocean eddies, the team found, appear to be an important source of nutrients in subtropical gyres. Their replenishing effect, which the researchers call a “nutrient relay,” helps maintain populations of phytoplankton, which play a central role in the ocean’s ability to sequester carbon from the atmosphere. While climate models tend to project a decline in the ocean’s ability to sequester carbon over the coming decades, this “nutrient relay” could help sustain carbon storage over the subtropical oceans.

    “There’s a lot of uncertainty about how the carbon cycle of the ocean will evolve as climate continues to change,” says Mukund Gupta, a postdoc at Caltech who led the study as a graduate student at MIT. “As our paper shows, getting the carbon distribution right is not straightforward, and depends on understanding the role of eddies and other fine-scale motions in the ocean.”

    Gupta and his colleagues report their findings this week in the Proceedings of the National Academy of Sciences. The study’s co-authors are Jonathan Lauderdale, Oliver Jahn, Christopher Hill, Stephanie Dutkiewicz, and Michael Follows at MIT, and Richard Williams at the University of Liverpool.

    A snowy puzzle

    A cross-section of an ocean gyre resembles a stack of nesting bowls that is stratified by density: Warmer, lighter layers lie at the surface, while colder, denser waters make up deeper layers. Phytoplankton live within the ocean’s top sunlit layers, where the microbes require sunlight, warm temperatures, and nutrients to grow.

    When phytoplankton die, they sink through the ocean’s layers as “marine snow.” Some of this snow releases nutrients back into the current, where they are pumped back up to feed new microbes. The rest of the snow sinks out of the gyre, down to the deepest layers of the ocean. The deeper the snow sinks, the more difficult it is for it to be pumped back to the surface. The snow is then trapped, or sequestered, along with any unreleased carbon and nutrients.

    Oceanographers thought that the main source of nutrients in subtropical gyres came from recirculating marine snow. But as a portion of this snow inevitably sinks to the bottom, there must be another source of nutrients to explain the healthy populations of phytoplankton at the surface. Exactly what that source is “has left the oceanography community a little puzzled for some time,” Gupta says.

    Swirls at the edge

    In their new study, the team sought to simulate a subtropical gyre to see what other dynamics may be at work. They focused on the North Pacific gyre, one of the Earth’s five major gyres, which circulates over most of the North Pacific Ocean, and spans more than 20 million square kilometers. 

    The team started with the MITgcm, a general circulation model that simulates the physical circulation patterns in the atmosphere and oceans. To reproduce the North Pacific gyre’s dynamics as realistically as possible, the team used an MITgcm algorithm, previously developed at NASA and MIT, which tunes the model to match actual observations of the ocean, such as ocean currents recorded by satellites, and temperature and salinity measurements taken by ships and drifters.  

    “We use a simulation of the physical ocean that is as realistic as we can get, given the machinery of the model and the available observations,” Lauderdale says.

    Video: An animation of the North Pacific Ocean shows phosphate nutrient concentrations at 500 meters below the ocean surface. The swirls represent small eddies transporting phosphate from the nutrient-rich equator (lighter colors) northward toward the nutrient-depleted subtropics (darker colors). This nutrient relay mechanism helps sustain biological activity and carbon sequestration in the subtropical ocean. Credit: Oliver Jahn

    The realistic model captured finer details, at a resolution of less than 20 kilometers per pixel, compared to other models that have a more limited resolution. The team combined the simulation of the ocean’s physical behavior with the Darwin model — a simulation of microbe communities such as phytoplankton, and how they grow and evolve with ocean conditions.

    The team ran the combined simulation of the North Pacific gyre over a decade, and created animations to visualize the pattern of currents and the nutrients they carried, in and around the gyre. What emerged were small eddies that ran along the edges of the enormous gyre and appeared to be rich in nutrients.

    “We were picking up on little eddy motions, basically like weather systems in the ocean,” Lauderdale says. “These eddies were carrying packets of high-nutrient waters, from the equator, north into the center of the gyre and downwards along the sides of the bowls. We wondered if these eddy transfers made for an important delivery mechanism.”

    Surprisingly, the nutrients first move deeper, away from the sunlight, before being returned upwards where the phytoplankton live. The team found that ocean eddies could supply up to 50 percent of the nutrients in subtropical gyres.
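
    For readers curious how such a contribution is quantified, the standard approach is a Reynolds decomposition: split the velocity and nutrient fields into a time mean plus an anomaly, and measure the flux carried by the correlated anomalies. The sketch below is a minimal illustration on synthetic arrays, not the team’s analysis code; every number in it is invented.

    ```python
    import numpy as np

    # Hypothetical model output: time x lat x lon snapshots of northward
    # velocity v (m/s) and phosphate concentration N (mmol/m^3).
    rng = np.random.default_rng(0)
    nt, nlat, nlon = 365, 80, 120
    v = rng.normal(0.05, 0.2, size=(nt, nlat, nlon))
    # Let nutrient anomalies partly track velocity anomalies, as when eddies
    # carry high-nutrient equatorial water northward (purely illustrative):
    N = 1.0 + 2.0 * (v - 0.05) + rng.normal(0.0, 0.3, size=(nt, nlat, nlon))

    v_mean, N_mean = v.mean(axis=0), N.mean(axis=0)   # time means
    v_anom, N_anom = v - v_mean, N - N_mean           # transient (eddy) part

    mean_flux = v_mean * N_mean                  # carried by the mean gyre flow
    eddy_flux = (v_anom * N_anom).mean(axis=0)   # carried by transient eddies

    share = np.abs(eddy_flux) / (np.abs(mean_flux) + np.abs(eddy_flux))
    print(f"eddy share of northward nutrient transport: {share.mean():.0%}")
    ```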

    “That is very significant,” Gupta says. “The vertical process that recycles nutrients from marine snow is only half the story. The other half is the replenishing effect of these eddies. As subtropical gyres make up a significant part of the world’s oceans, we think this nutrient relay is of global importance.”

    This research was supported, in part, by the Simons Foundation and NASA.

  • Cracking the carbon removal challenge

    By most measures, MIT chemical engineering spinoff Verdox has been enjoying an exceptional year. The carbon capture and removal startup, launched in 2019, announced $80 million in funding in February from a group of investors that included Bill Gates’ Breakthrough Energy Ventures. Then, in April — after recognition as one of the year’s top energy pioneers by Bloomberg New Energy Finance — the company and partner Carbfix won a $1 million XPRIZE Carbon Removal milestone award. This was the first round in the Musk Foundation’s four-year, $100 million competition, the largest incentive prize in history.

    “While our core technology has been validated by the significant improvement of performance metrics, this external recognition further verifies our vision,” says Sahag Voskian SM ’15, PhD ’19, co-founder and chief technology officer at Verdox. “It shows that the path we’ve chosen is the right one.”

    The search for viable carbon capture technologies has intensified in recent years, as scientific models show with increasing certainty that any hope of avoiding catastrophic climate change means limiting CO2 concentrations below 450 parts per million by 2100. Alternative energies will only get humankind so far; removing vast amounts of CO2 from the atmosphere will also be an important tool.

    Voskian began developing the company’s cost-effective and scalable technology for carbon capture in the lab of T. Alan Hatton, the Ralph Landau Professor of Chemical Engineering at MIT. “It feels exciting to see ideas move from the lab to potential commercial production,” says Hatton, a co-founder of the company and scientific advisor, adding that Verdox has speedily overcome the initial technical hiccups encountered by many early phase companies. “This recognition enhances the credibility of what we’re doing, and really validates our approach.”

    At the heart of this approach is technology Voskian describes as “elegant and efficient.” Most attempts to grab carbon from an exhaust flow or from air itself require a great deal of energy. Voskian and Hatton came up with a design whose electrochemistry makes carbon capture appear nearly effortless. Their invention is a kind of battery: conductive electrodes coated with a compound called polyanthraquinone, which has a natural chemical attraction to carbon dioxide under certain conditions, and no affinity for CO2 when these conditions are relaxed. When activated by a low-level electrical current, the battery charges, reacting with passing molecules of CO2 and pulling them onto its surface. Once the battery becomes saturated, the CO2 can be released with a flip of voltage as a pure gas stream.
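
    As a rough sense of how the cycle arithmetic scales up, here is a toy mass-and-energy balance for such a device; every parameter below (bed capacity, cycle rate, energy per mole) is a hypothetical placeholder, not a Verdox figure.

    ```python
    # Toy mass-and-energy balance for an electro-swing capture unit. Every
    # number here is an illustrative placeholder, not a Verdox specification.
    MOLAR_MASS_CO2 = 0.044        # kg per mol
    mol_per_bed_cycle = 50.0      # mol CO2 adsorbed per charge (assumed)
    cycles_per_day = 48           # charge/release cycles per day (assumed)
    n_beds = 100                  # parallel electrode stacks (assumed)
    energy_per_mol = 100e3        # J of electricity per mol CO2 (assumed)

    tonnes_per_day = (mol_per_bed_cycle * cycles_per_day * n_beds
                      * MOLAR_MASS_CO2 / 1000.0)
    kwh_per_tonne = (energy_per_mol / MOLAR_MASS_CO2) * 1000.0 / 3.6e6

    print(f"captured: {tonnes_per_day:.1f} t CO2/day")   # ~10.6 t/day
    print(f"energy:   {kwh_per_tonne:.0f} kWh/t CO2")    # ~631 kWh/t
    ```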

    “We showed that our technology works in a wide range of CO2 concentrations, from the 20 percent or higher found in cement and steel industry exhaust streams, down to the very diffuse 0.04 percent in air itself,” says Hatton. Climate change science suggests that removing CO2 directly from air “is an important component of the whole mitigation strategy,” he adds.

    “This was an academic breakthrough,” says Brian Baynes PhD ’04, CEO and co-founder of Verdox. Baynes, a chemical engineering alumnus and a former associate of Hatton’s, has many startups to his name, and a history as a venture capitalist and mentor to young entrepreneurs. When he first encountered Hatton and Voskian’s research in 2018, he was “impressed that their technology showed it could reduce energy consumption for certain kinds of carbon capture by 70 percent compared to other technologies,” he says. “I was encouraged and impressed by this low-energy footprint, and recommended that they start a company.”

    Neither Hatton nor Voskian had commercialized a product before, so they asked Baynes to help them get going. “I normally decline these requests, because the costs are generally greater than the upside,” Baynes says. “But this innovation had the potential to move the needle on climate change, and I saw it as a rare opportunity.”

    The Verdox team has no illusions about the challenge ahead. “The scale of the problem is enormous,” says Voskian. “Our technology must be in a position to capture mega- and gigatons of CO2 from air and emission sources.” Indeed, the Intergovernmental Panel on Climate Change estimates the world must remove 10 gigatons of CO2 per year by 2050 in order to keep global temperature rise under 2 degrees Celsius.

    To scale up successfully and at a pace that could meet the world’s climate challenge, Verdox must become “a business that works in a technoeconomic sense,” as Baynes puts it. This means, for instance, ensuring its carbon capture system offers clear and competitive cost benefits when deployed. Not a problem, says Voskian: “Our technology, because it uses electric energy, can be easily integrated into the grid, working with solar and wind on a plug-and-play basis.” The Verdox team believes their carbon footprint will beat that of competitors by orders of magnitude.

    The company is pushing past a series of technical obstacles as it ramps up: enabling the carbon capture battery to run hundreds of thousands of cycles before its performance wanes, and enhancing the polyanthraquinone chemistry so that the device is even more selective for CO2.

    After hurtling past critical milestones, Verdox is now working with its first announced commercial client: Norwegian aluminum company Hydro, which aims to eliminate CO2 from the exhaust of its smelters as it transitions to zero-carbon production.

    Verdox is also developing systems that can efficiently pull CO2 out of ambient air. “We’re designing units that would look like rows and rows of big fans that bring the air into boxes containing our batteries,” says Voskian. Such approaches might prove especially useful in locations such as airfields, where there are higher-than-normal concentrations of CO2 emissions present.

    All this captured carbon needs to go somewhere. With XPRIZE partner Carbfix, which has a decade-old, proven method for mineralizing captured CO2 and depositing it in deep underground caverns, Verdox will have a final resting place for CO2 that cannot immediately be reused for industrial applications such as new fuels or construction materials.

    With its clients and partners, the team appears well-positioned for the next round of the carbon removal XPRIZE competition, which will award up to $50 million to the group that best demonstrates a working solution at a scale of at least 1,000 tons removed per year, and can present a viable blueprint for scaling to gigatons of removal per year.

    Can Verdox meaningfully reduce the planet’s growing CO2 burden? Voskian is sure of it. “Going at our current momentum, and seeing the world embrace carbon capture, this is the right path forward,” he says. “With our partners, deploying manufacturing facilities on a global scale, we will make a dent in the problem in our lifetime.”

  • 3Q: How MIT is working to reduce carbon emissions on our campus

    Fast Forward: MIT’s Climate Action Plan for the Decade, launched in May 2021, charges MIT to eliminate its direct carbon emissions by 2050. Setting an interim goal of net zero emissions by 2026 is an important step to getting there. Joe Higgins, vice president for campus services and stewardship, speaks here about the coordinated, multi-team effort underway to address the Institute’s carbon-reduction goals, the challenges and opportunities in getting there, and creating a blueprint for a carbon-free campus in 2050.

    Q: The Fast Forward plan laid out specific goals for MIT to address its own carbon footprint. What has been the strategy to tackle these priorities?

    A: The launch of the Fast Forward Climate Action Plan empowered teams at MIT to expand the scope of our carbon reduction tasks beyond the work we’ve been doing to date. The on-campus activities called for in the plan range from substantially expanding our electric vehicle infrastructure on campus, to increasing our rooftop solar installations, to setting impact goals for food, water, and waste systems. Another strategy utilizes artificial intelligence to further reduce energy consumption and emissions from our buildings. When fully implemented, these systems will adjust a building’s temperature setpoints throughout the day while maintaining occupant comfort, and will use occupancy data, weather forecasts, and carbon intensity projections from the grid to make more efficient use of energy. 
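
    As a minimal sketch of the kind of rule such a system might apply (with invented thresholds, not MIT’s actual building-management software), consider a setpoint chooser driven by occupancy and grid carbon intensity:

    ```python
    def choose_setpoint_c(occupied: bool, grid_gco2_per_kwh: float) -> float:
        """Pick a heating setpoint from occupancy and grid carbon intensity.

        Illustrative only: a real system would optimize over occupancy data,
        weather forecasts, and comfort models rather than fixed thresholds.
        """
        if not occupied:
            return 17.0                    # deep setback for an empty zone
        if grid_gco2_per_kwh > 400.0:      # carbon-intensive grid hour
            return 20.0                    # trim demand, stay within comfort
        return 21.5                        # comfortable default

    print(choose_setpoint_c(occupied=True, grid_gco2_per_kwh=450.0))  # 20.0
    ```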

    We have tremendous momentum right now thanks to the progress made over the past decade by our teams — which include planners, designers, engineers, construction managers, and sustainability and operations experts. Since 2014, our efforts to advance energy efficiency and incorporate renewable energy have reduced net emissions on campus by 20% (from a 2014 baseline) despite significant campus growth. One of our current goals is to further reduce energy use in high-intensity research buildings — 20 of our campus buildings consume more than 50% of our energy. To reduce energy usage in these buildings we have major energy retrofit projects in design or in planning for buildings 32, 46, 68, 76, E14, and E25, and we expect this work will reduce overall MIT emissions by an additional 10 to 15%.

    Q: The Fast Forward plan acknowledges the challenges we face in our efforts to reach our campus emission reduction goals, in part due to the current state of New England’s electrical grid. How does MIT’s district energy system factor into our approach? 

    A: MIT’s district energy system is a network of underground pipes and power lines that moves energy from the Central Utilities Plant (CUP) around to the vast majority of Institute buildings to provide electricity, heating, and air conditioning. Using a closed-loop, central-source system like this enables MIT to operate more efficiently by using less energy to heat and cool its buildings and labs, and by maintaining better load control to accommodate seasonal variations in peak demand.

    When the new MIT campus was built in Cambridge in 1916, it included a centralized state-of-the-art steam and electrical power plant that would service the campus buildings. This central district energy approach allowed MIT to avoid having individual furnaces in each building and to easily incorporate progressively cleaner fuel sources campus-wide over the years. After starting with coal as a primary energy source, MIT transitioned to fuel oil, then to natural gas, and then to cogeneration in 1995 — and each step has made the campus more energy efficient. Our continuous investment in a centralized infrastructure has facilitated our ability to improve energy efficiency while adding capacity; as new technologies become available, we can implement them across the entire campus. Our district energy system is very adaptable to seasonal variations in demand for cooling, heating and electricity, and builds upon decades of centralized investments in energy-efficient infrastructure.

    This past year, MIT completed a major upgrade of the district energy system whereby the majority of buildings on campus now benefit from the most advanced cogeneration technology for combined heating, cooling, and power delivery. This system generates electrical power that produces 15 to 25% less carbon than the current New England grid. We also have the ability to export power during times when the grid is most stressed, which contributes to the resiliency of local energy systems. On the flip side, any time the grid is a cleaner option, MIT is able to import a higher amount of electricity from the utility by distributing this energy through our centralized system. In fact, it’s important to note that we have the ability to import 100% of our electrical energy from the grid as it becomes cleaner. We anticipate that this will happen as the next major wave of technology innovation unfolds and the abundance of offshore wind and other renewable resources increases as anticipated by the end of this decade. As the grid gets greener, our adaptable district energy system will bring us closer to meeting our decarbonization goals.
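
    The import/export logic described here amounts to a running comparison of carbon intensities. A hedged sketch, with invented numbers rather than MIT’s actual dispatch software:

    ```python
    def next_kwh_source(grid_gco2_per_kwh: float,
                        cup_gco2_per_kwh: float,
                        grid_stressed: bool) -> str:
        """Illustrative dispatch rule for a district energy plant vs. the grid.

        Mirrors the logic described above: export when the grid is stressed,
        import whenever the grid is the cleaner option. Values are invented.
        """
        if grid_stressed:
            return "run the CUP and export surplus power to the grid"
        if grid_gco2_per_kwh < cup_gco2_per_kwh:
            return "import from the grid (currently the cleaner option)"
        return "generate with on-campus cogeneration"

    print(next_kwh_source(250.0, 320.0, grid_stressed=False))
    ```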

    MIT’s ability to adapt its system and use new technologies is crucial right now as we work in collaboration with faculty, students, industry experts, peer institutions, and the cities of Cambridge and Boston to evaluate various strategies, opportunities, and constraints. In terms of evolving into a next-generation district energy system, we are reviewing options such as electric steam boilers and industrial-scale heat pumps, thermal batteries, geothermal exchange, micro-reactors, bio-based fuels, and green hydrogen produced from renewable energy. We are preparing to incorporate the most beneficial technologies into a blueprint that will get us to our 2050 goal.

    Q: What is MIT doing in the near term to reach the carbon-reduction goals of the climate action plan?

    A: In the near term, we are exploring several options, including enabling large-scale renewable energy projects and investing in verified carbon offset projects that reduce, avoid, or sequester carbon. In 2016, MIT joined a power purchase agreement (PPA) partnership that enabled the construction of a 650-acre solar farm in North Carolina and resulted in the early retirement of a nearby coal plant. We’ve documented a huge emissions savings from this, and we’re exploring how to do something similar on a much larger scale with a broader group of partners. As we seek out collaborative opportunities that enable the development of new renewable energy sources, we hope to provide a model for other institutions and organizations, as the original PPA did. Because PPAs accelerate the decarbonization of regional electricity grids, they can have an enormous and far-reaching impact. We see these partnerships as an important component both of achieving net zero emissions on campus and of driving the grid-level transformation that must take place to reach zero emissions by 2050.

    Other near-term initiatives include enabling community solar power projects in Massachusetts to support the state’s renewable energy goals and provide opportunities for more property owners (municipalities, businesses, homeowners, etc.) to purchase affordable renewable energy. MIT is engaged with three of these projects; one of them is in operation today in Middleton, and the two others are scheduled to be built soon on Cape Cod.

    We’re joining the commonwealth and its cities, its organizations and utility providers on an unprecedented journey — the global transition to a clean energy system. Along the way, everything is going to change as technologies and the grid continue to evolve. Our focus is on both the near term and the future, as we plan a path into the next energy era.

  • Using nature’s structures in wooden buildings

    Concern about climate change has focused significant attention on the buildings sector, in particular on the extraction and processing of construction materials. The concrete and steel industries together are responsible for as much as 15 percent of global carbon dioxide emissions. In contrast, wood provides a natural form of carbon sequestration, so there’s a move to use timber instead. Indeed, some countries are calling for public buildings to be made at least partly from timber, and large-scale timber buildings have been appearing around the world.

    Observing those trends, Caitlin Mueller ’07, SM ’14, PhD ’14, an associate professor of architecture and of civil and environmental engineering in the Building Technology Program at MIT, sees an opportunity for further sustainability gains. As the timber industry seeks to produce wooden replacements for traditional concrete and steel elements, the focus is on harvesting the straight sections of trees. Irregular sections such as knots and forks are turned into pellets and burned, or ground up to make garden mulch, which will decompose within a few years; both approaches release the carbon trapped in the wood to the atmosphere.

    For the past four years, Mueller and her Digital Structures research group have been developing a strategy for “upcycling” those waste materials by using them in construction — not as cladding or finishes aimed at improving appearance, but as structural components. “The greatest value you can give to a material is to give it a load-bearing role in a structure,” she says. But when builders use virgin materials, those structural components are the most emissions-intensive parts of buildings due to their large volume of high-strength materials. Using upcycled materials in place of those high-carbon systems is therefore especially impactful in reducing emissions.

    Mueller and her team focus on tree forks — that is, spots where the trunk or branch of a tree divides in two, forming a Y-shaped piece. In architectural drawings, there are many similar Y-shaped nodes where straight elements come together. In such cases, those units must be strong enough to support critical loads.

    “Tree forks are naturally engineered structural connections that work as cantilevers in trees, which means that they have the potential to transfer force very efficiently thanks to their internal fiber structure,” says Mueller. “If you take a tree fork and slice it down the middle, you see an unbelievable network of fibers that are intertwining to create these often three-dimensional load transfer points in a tree. We’re starting to do the same thing using 3D printing, but we’re nowhere near what nature does in terms of complex fiber orientation and geometry.”

    She and her team have developed a five-step “design-to-fabrication workflow” that combines natural structures such as tree forks with the digital and computational tools now used in architectural design. While there’s long been a “craft” movement to use natural wood in railings and decorative features, the use of computational tools makes it possible to use wood in structural roles — without excessive cutting, which is costly and may compromise the natural geometry and internal grain structure of the wood.

    Given the wide use of digital tools by today’s architects, Mueller believes that her approach is “at least potentially scalable and potentially achievable within our industrialized materials processing systems.” In addition, by combining tree forks with digital design tools, the novel approach can also support the trend among architects to explore new forms. “Many iconic buildings built in the past two decades have unexpected shapes,” says Mueller. “Tree branches have a very specific geometry that sometimes lends itself to an irregular or nonstandard architectural form — driven not by some arbitrary algorithm but by the material itself.”

    Step 0: Find a source, set goals

    Before starting their design-to-fabrication process, the researchers needed to locate a source of tree forks. Mueller found help in the Urban Forestry Division of the City of Somerville, Massachusetts, which maintains a digital inventory of more than 2,000 street trees — including more than 20 species — and records information about the location, approximate trunk diameter, and condition of each tree.

    With permission from the forestry division, the team was on hand in 2018 when a large group of trees was cut down near the site of the new Somerville High School. Among the heavy equipment on site was a chipper, poised to turn all the waste wood into mulch. Instead, the workers obligingly put the waste wood into the researchers’ truck to be brought to MIT.

    In their project, the MIT team sought not only to upcycle that waste material but also to use it to create a structure that would be valued by the public. “Where I live, the city has had to take down a lot of trees due to damage from an invasive species of beetle,” Mueller explains. “People get really upset — understandably. Trees are an important part of the urban fabric, providing shade and beauty.” She and her team hoped to reduce that animosity by “reinstalling the removed trees in the form of a new functional structure that would recreate the atmosphere and spatial experience previously provided by the felled trees.”

    With their source and goals identified, the researchers were ready to demonstrate the five steps in their design-to-fabrication workflow for making spatial structures using an inventory of tree forks.

    Step 1: Create a digital material library

    The first task was to turn their collection of tree forks into a digital library. They began by cutting off excess material to produce isolated tree forks. They then created a 3D scan of each fork. Mueller notes that as a result of recent progress in photogrammetry (measuring objects using photographs) and 3D scanning, they could create high-resolution digital representations of the individual tree forks with relatively inexpensive equipment, even using apps that run on a typical smartphone.

    In the digital library, each fork is represented by a “skeletonized” version showing three straight bars coming together at a point. The relative geometry and orientation of the branches are of particular interest because they determine the internal fiber orientation that gives the component its strength.

    Step 2: Find the best match between the initial design and the material library

    Like a tree, a typical architectural design is filled with Y-shaped nodes where three straight elements meet up to support a critical load. The goal was therefore to match the tree forks in the material library with the nodes in a sample architectural design.

    First, the researchers developed a “mismatch metric” for quantifying how well the geometries of a particular tree fork aligned with a given design node. “We’re trying to line up the straight elements in the structure with where the branches originally were in the tree,” explains Mueller. “That gives us the optimal orientation for load transfer and maximizes use of the inherent strength of the wood fiber.” The poorer the alignment, the higher the mismatch metric.
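
    The article does not spell out the metric’s exact form; one plausible formulation, sketched below, pairs a fork’s three branch directions with a node’s three member directions and sums the angular misalignment (a real implementation would also optimize over the fork’s rigid-body rotation):

    ```python
    import numpy as np
    from itertools import permutations

    def mismatch(fork_dirs: np.ndarray, node_dirs: np.ndarray) -> float:
        """Angular mismatch between a tree fork and a design node.

        fork_dirs, node_dirs: (3, 3) arrays, one unit vector per branch or
        member. Returns the smallest sum of pairing angles (radians) over
        the six ways of matching three branches to three members.
        """
        best = float("inf")
        for perm in permutations(range(3)):
            total = sum(
                np.arccos(np.clip(fork_dirs[i] @ node_dirs[j], -1.0, 1.0))
                for i, j in enumerate(perm)
            )
            best = min(best, total)
        return best

    # A fork whose branches align perfectly with the node scores 0.0:
    dirs = np.array([[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]])
    print(mismatch(dirs, dirs))  # 0.0
    ```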

    The goal was to get the best overall distribution of all the tree forks among the nodes in the target design. Therefore, the researchers needed to try different fork-to-node distributions and, for each distribution, add up the individual fork-to-node mismatch errors to generate an overall, or global, matching score. The distribution with the best matching score would produce the most structurally efficient use of the total tree fork inventory.

    Since performing that process manually would take far too long to be practical, they turned to the “Hungarian algorithm,” a technique developed in 1955 for solving such problems. “The brilliance of the algorithm is solving that [matching] problem very quickly,” Mueller says. She notes that it’s a very general-use algorithm. “It’s used for things like marriage match-making. It can be used any time you have two collections of things that you’re trying to find unique matches between. So, we definitely didn’t invent the algorithm, but we were the first to identify that it could be used for this problem.”
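
    The Hungarian algorithm is available off the shelf, for example as scipy.optimize.linear_sum_assignment. The sketch below solves a fork-to-node assignment using a random stand-in for the real mismatch metrics, with roughly three forks per node (the inventory ratio the team found works well, as discussed below):

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    rng = np.random.default_rng(1)
    n_nodes, n_forks = 12, 36                # ~3x forks per node, per the study
    cost = rng.random((n_nodes, n_forks))    # stand-in for mismatch metrics

    # Hungarian algorithm: assigns a unique fork to every node so that the
    # summed mismatch (the global matching score) is minimized.
    node_idx, fork_idx = linear_sum_assignment(cost)
    print("global matching score:", cost[node_idx, fork_idx].sum())
    print("node -> fork:", dict(zip(node_idx.tolist(), fork_idx.tolist())))
    ```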

    The researchers performed repeated tests to show possible distributions of the tree forks in their inventory and found that the matching score improved as the number of forks available in the material library increased — up to a point. In general, the researchers concluded that the mismatch score was lowest, and thus best, when there were about three times as many forks in the material library as there were nodes in the target design.

    Step 3: Balance designer intention with structural performance

    The next step in the process was to incorporate the intention or preference of the designer. To permit that flexibility, each design includes a limited number of critical parameters, such as bar length and bending strain. Using those parameters, the designer can manually change the overall shape, or geometry, of the design or can use an algorithm that automatically changes, or “morphs,” the geometry. And every time the design geometry changes, the Hungarian algorithm recalculates the optimal fork-to-node matching.

    “Because the Hungarian algorithm is extremely fast, all the morphing and the design updating can be really fluid,” notes Mueller. In addition, any change to a new geometry is followed by a structural analysis that checks the deflections, strain energy, and other performance measures of the structure. On occasion, the automatically generated design that yields the best matching score may deviate far from the designer’s initial intention. In such cases, an alternative solution can be found that satisfactorily balances the design intention with a low matching score.

    Step 4: Automatically generate the machine code for fast cutting

    When the structural geometry and distribution of tree forks have been finalized, it’s time to think about actually building the structure. To simplify assembly and maintenance, the researchers prepare the tree forks by recutting their end faces to better match adjoining straight timbers and cutting off any remaining bark to reduce susceptibility to rot and fire.

    To guide that process, they developed a custom algorithm that automatically computes the cuts needed to make a given tree fork fit into its assigned node and to strip off the bark. The goal is to remove as little material as possible but also to avoid a complex, time-consuming machining process. “If we make too few cuts, we’ll cut off too much of the critical structural material. But we don’t want to make a million tiny cuts because it will take forever,” Mueller explains.

    The team uses facilities at the Autodesk Boston Technology Center Build Space, where the robots are far larger than any at MIT and the processing is all automated. To prepare each tree fork, they mount it on a robotic arm that pushes the joint through a traditional band saw in different orientations, guided by computer-generated instructions. The robot also mills all the holes for the structural connections. “That’s helpful because it ensures that everything is aligned the way you expect it to be,” says Mueller.

    Step 5: Assemble the available forks and linear elements to build the structure

    The final step is to assemble the structure. The tree-fork-based joints are all irregular, and combining them with the precut, straight wooden elements could be difficult. However, they’re all labeled. “All the information for the geometry is embedded in the joint, so the assembly process is really low-tech,” says Mueller. “It’s like a child’s toy set. You just follow the instructions on the joints to put all the pieces together.”

    They installed their final structure temporarily on the MIT campus, but Mueller notes that it was only a portion of the structure they plan to eventually build. “It had 12 nodes that we designed and fabricated using our process,” she says, adding that the team’s work was “a little interrupted by the pandemic.” As activity on campus resumes, the researchers plan to finish designing and building the complete structure, which will include about 40 nodes and will be installed as an outdoor pavilion on the site of the felled trees in Somerville.

    In addition, they will continue their research. Plans include working with larger material libraries, some with multibranch forks, and replacing their 3D-scanning technique with computerized tomography scanning technologies that can automatically generate a detailed geometric representation of a tree fork, including its precise fiber orientation and density. And in a parallel project, they’ve been exploring using their process with other sources of materials, with one case study focusing on using material from a demolished wood-framed house to construct more than a dozen geodesic domes.

    To Mueller, the work to date already provides new guidance for the architectural design process. With digital tools, it has become easy for architects to analyze the embodied carbon or future energy use of a design option. “Now we have a new metric of performance: How well am I using available resources?” she says. “With the Hungarian algorithm, we can compute that metric basically in real time, so we can work rapidly and creatively with that as another input to the design process.”

    This research was supported by MIT’s School of Architecture and Planning via the HASS Award.

    This article appears in the Autumn 2021 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • MIT entrepreneurs think globally, act locally

    Born and raised amid the natural beauty of the Dominican Republic, Andrés Bisonó León feels a deep motivation to help solve a problem that has been threatening the Caribbean island nation’s tourism industry, its economy, and its people.

    As Bisonó León discussed with his long-time friend and mentor, the Walter M. May and A. Hazel May Professor of Mechanical Engineering (MechE) Alexander Slocum Sr., ugly mats of toxic sargassum seaweed have been encroaching on the Dominican Republic’s pristine beaches and other beaches in the Caribbean region, and public and private organizations have fought a losing battle using expensive, environmentally damaging methods to clean it up. Slocum, who was on the U.S. Department of Energy’s Deepwater Horizon team, has extensive experience with systems that operate in the ocean.

    “In the last 10 years,” says Bisonó León, now an MBA candidate in the MIT Sloan School of Management, “sargassum, a toxic seaweed invasion, has cost the Caribbean as much as $120 million a year in cleanup and has meant a 30 to 35 percent tourism reduction, affecting not only the tourism industry, but also the environment, marine life, local economies, and human health.”

    One of Bisonó León’s discussions with Slocum took place within earshot of MechE alumnus Luke Gray ’18, SM ’20, who had worked with Slocum on other projects and was at the time about to begin his master’s program.

    “Professor Slocum and Andrés happened to be discussing the sargassum problem in Andrés’ home country,” Gray says. “A week later I was on a plane to the DR to collect sargassum samples and survey the problem in Punta Cana. When I returned, my master’s program was underway, and I already had my thesis project!”

    Gray also had started a working partnership with Bisonó León, which both say proceeded seamlessly right from the first moment.

    “I feel that Luke right away understood the magnitude of the problem and the value we could create in the Dominican Republic and across the Caribbean by teaming up,” Bisonó León says.

    Both Bisonó León and Gray also say they felt a responsibility to work toward helping the global environment.

    “All of my major projects up until now have involved machines for climate restoration and/or adaptation,” says Gray.

    The technologies Bisonó León and Gray arrived at after 18 months of R&D were designed to provide solutions both locally and globally.

    Their Littoral Collection Module (LCM) skims sargassum seaweed off the surface of the water with nets that can be mounted on any boat. The device sits across the boat, with two large hoops holding the nets open, one on each side. As the boat travels forward, it cuts through the seaweed, which flows to the sides of the vessel and through the hoops into the nets. Effective at sweeping the seaweed from the water, the device can be employed by anyone with a boat, including local fishermen whose livelihoods have been disrupted by the seaweed’s damaging effect on tourism and the local economy.

    The sargassum can then be towed out to sea, where Bisonó León’s and Gray’s second technology can come into play. By pumping the seaweed into very deep water, where it then sinks to the bottom of the ocean, the carbon in the seaweed can be sequestered. Other methods for disposing of the seaweed generally involve putting it into landfills, where it emits greenhouse gases such as methane and carbon dioxide as it breaks down. Although some seaweed can be put to other uses, including as fertilizer, sargassum has been found to contain hard-to-remove toxic substances such as arsenic and heavy metals.

    In spring 2020, Bisonó León and Gray formed a company, SOS (Sargassum Ocean Sequestration) Carbon.

    Bisonó León says he comes from a long line of entrepreneurs with a deep commitment to social impact. His family has been involved in several different industries, his grandfather and great uncles having opened the first cigar factory in the Dominican Republic in 1903.

    Gray says internships with startup companies and the undergraduate projects he did with Slocum developed his interest in entrepreneurship, and his involvement with the sargassum problem only reinforced that inclination. During his master’s program, he says he became “obsessed” with finding a solution.

    “Professor Slocum let me think extremely big, and so it was almost inevitable that the distillation of our two years of work would continue in some form, and starting a company happened to be the right path. My master’s experience of taking an essentially untouched problem like sargassum and then one year later designing, building, and sending 15,000 pounds of custom equipment to test for three months on a Dominican Navy ship made me realize I had discovered a recipe I could repeat — and machine design had become my core competency,” Gray says.

    During the initial research and development of their technologies, Bisonó León and Gray raised $258,000 from 20 different organizations. Between June and December 2021, they succeeded in removing 3.5 million pounds of sargassum and secured contracts with Grupo Puntacana, which operates several tourist resorts, and with other hotels such as Club Med in Punta Cana. The company subcontracts with the association of fishermen in Punta Cana, employing 15 fishermen who operate LCMs and training 35 others to join as the operation expands.

    Their success so far demonstrates “‘mens et manus’ at work,” says Slocum, referring to MIT’s motto, which is Latin for “mind and hand.” “Geeks hear about a very real problem that affects very real people who have no other option for their livelihoods, and they respond by inventing a solution so elegant that it can be readily deployed by those most hurt by the problem to address the problem.

    “The team was always focused on the numbers, from physics to finance, and did not let hype or doubts deter their determination to rationally solve this huge problem.”

    Slocum says he could predict Bisonó León and Gray would work well together “because they started out as good, smart people with complementary skills whose hearts and minds were in the right place.”

    “We are working on having a global impact to reduce millions of tons of CO2 per year,” says Bisonó León. “With training from Sloan and cross-disciplinary collaborative spirit, we will be able to further expand environmental and social impact platforms much needed in the Caribbean to be able to drive real change regionally and globally.”

    “I hope SOS Carbon can serve as a model and inspire similar entrepreneurial efforts,” Gray says.

  • A dirt cheap solution? Common clay materials may help curb methane emissions

    Methane is a far more potent greenhouse gas than carbon dioxide, and it has a pronounced effect within the first two decades of its presence in the atmosphere. In the recent international climate negotiations in Glasgow, abatement of methane emissions was identified as a major priority in attempts to curb global climate change quickly.

    Now, a team of researchers at MIT has come up with a promising approach to controlling methane emissions and removing it from the air, using an inexpensive and abundant type of clay called zeolite. The findings are described in the journal ACS Environment Au, in a paper by doctoral student Rebecca Brenneis, Associate Professor Desiree Plata, and two others.

    Although many people associate atmospheric methane with drilling and fracking for oil and natural gas, those sources only account for about 18 percent of global methane emissions, Plata says. The vast majority of emitted methane comes from such sources as slash-and-burn agriculture, dairy farming, coal and ore mining, wetlands, and melting permafrost. “A lot of the methane that comes into the atmosphere is from distributed and diffuse sources, so we started to think about how you could take that out of the atmosphere,” she says.

    The answer the researchers found was something dirt cheap — in fact, a special kind of “dirt,” or clay. They used zeolite clays, a material so inexpensive that it is currently used to make cat litter. Treating the zeolite with a small amount of copper, the team found, makes the material very effective at absorbing methane from the air, even at extremely low concentrations.

    The system is simple in concept, though much work remains on the engineering details. In their lab tests, tiny particles of the copper-enhanced zeolite material, similar to cat litter, were packed into a reaction tube, which was then heated from the outside as the stream of gas, with methane levels ranging from just 2 parts per million up to 2 percent concentration, flowed through the tube. That range covers everything that might exist in the atmosphere, down to subflammable levels that cannot be burned or flared directly.

    The process has several advantages over other approaches to removing methane from air, Plata says. Other methods tend to use expensive catalysts such as platinum or palladium, require high temperatures of at least 600 degrees Celsius, and tend to require complex cycling between methane-rich and oxygen-rich streams, making the devices both more complicated and more risky, as methane and oxygen are highly combustible on their own and in combination.

    “The 600 degrees where they run these reactors makes it almost dangerous to be around the methane,” as well as the pure oxygen, Brenneis says. “They’re solving the problem by just creating a situation where there’s going to be an explosion.” Other engineering complications also arise from the high operating temperatures. Unsurprisingly, such systems have not found much use.

    As for the new process, “I think we’re still surprised at how well it works,” says Plata, who is the Gilbert W. Winslow Associate Professor of Civil and Environmental Engineering. The process seems to have its peak effectiveness at about 300 degrees Celsius, which requires far less energy for heating than other methane capture processes. It also can work at concentrations of methane lower than other methods can address, even small fractions of 1 percent, which most methods cannot remove, and does so in air rather than pure oxygen, a major advantage for real-world deployment.

    The method converts the methane into carbon dioxide. That might sound like a bad thing, given the worldwide efforts to combat carbon dioxide emissions. “A lot of people hear ‘carbon dioxide’ and they panic; they say ‘that’s bad,’” Plata says. But she points out that carbon dioxide is much less impactful in the atmosphere than methane, which is about 80 times stronger as a greenhouse gas over the first 20 years, and about 25 times stronger for the first century. This effect arises from the fact that methane turns into carbon dioxide naturally over time in the atmosphere. By accelerating that process, this method would drastically reduce the near-term climate impact, she says. And, even converting half of the atmosphere’s methane to carbon dioxide would increase levels of the latter by less than 1 part per million (about 0.2 percent of today’s atmospheric carbon dioxide) while saving about 16 percent of total radiative warming.
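
    Those closing numbers can be checked with one-to-one molar accounting; the mole fractions below are approximate present-day values:

    ```python
    ch4_ppm = 1.9      # approximate atmospheric methane, ppm by mole
    co2_ppm = 417.0    # approximate atmospheric carbon dioxide, ppm

    # Oxidizing CH4 yields CO2 one-to-one by mole, so converting half of all
    # atmospheric methane would add:
    added = ch4_ppm / 2.0
    print(f"added CO2: {added:.2f} ppm")               # ~0.95 ppm, i.e. < 1 ppm
    print(f"relative increase: {added / co2_ppm:.2%}") # ~0.23%, about 0.2%
    ```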

    The ideal location for such systems, the team concluded, would be in places where there is a relatively concentrated source of methane, such as dairy barns and coal mines. These sources already tend to have powerful air-handling systems in place, since a buildup of methane can be a fire, health, and explosion hazard. To surmount the outstanding engineering details, the team has just been awarded a $2 million grant from the U.S. Department of Energy to continue to develop specific equipment for methane removal in these types of locations.

    “The key advantage of mining air is that we move a lot of it,” she says. “You have to pull fresh air in to enable miners to breathe, and to reduce explosion risks from enriched methane pockets. So, the volumes of air that are moved in mines are enormous.” The concentration of methane is too low to ignite, but it’s in the catalysts’ sweet spot, she says.

    Adapting the technology to specific sites should be relatively straightforward. The lab setup the team used in their tests consisted of  “only a few components, and the technology you would put in a cow barn could be pretty simple as well,” Plata says. However, large volumes of gas do not flow that easily through clay, so the next phase of the research will focus on ways of structuring the clay material in a multiscale, hierarchical configuration that will aid air flow.

    “We need new technologies for oxidizing methane at concentrations below those used in flares and thermal oxidizers,” says Rob Jackson, a professor of earth systems science at Stanford University, who was not involved in this work. “There isn’t a cost-effective technology today for oxidizing methane at concentrations below about 2,000 parts per million.”

    Jackson adds, “Many questions remain for scaling this and all similar work: How quickly will the catalyst foul under field conditions? Can we get the required temperatures closer to ambient conditions? How scalable will such technologies be when processing large volumes of air?”

    One potential major advantage of the new system is that the chemical process involved releases heat. By catalytically oxidizing the methane, in effect the process is a flame-free form of combustion. If the methane concentration is above 0.5 percent, the heat released is greater than the heat used to get the process started, and this heat could be used to generate electricity.

    The team’s calculations show that “at coal mines, you could potentially generate enough heat to generate electricity at the power plant scale, which is remarkable because it means that the device could pay for itself,” Plata says. “Most air-capture solutions cost a lot of money and would never be profitable. Our technology may one day be a counterexample.”

    Using the new grant money, she says, “over the next 18 months we’re aiming to demonstrate a proof of concept that this can work in the field,” where conditions can be more challenging than in the lab. Ultimately, they hope to be able to make devices that would be compatible with existing air-handling systems and could simply be an extra component added in place. “The coal mining application is meant to be at a stage that you could hand to a commercial builder or user three years from now,” Plata says.

    In addition to Plata and Brenneis, the team included Yale University PhD student Eric Johnson and former MIT postdoc Wenbo Shi. The work was supported by the Gerstner Philanthropies, Vanguard Charitable Trust, the Betty Moore Inventor Fellows Program, and MIT’s Research Support Committee.

  • Global warming begets more warming, new paleoclimate study finds

    It is increasingly clear that the prolonged drought conditions, record-breaking heat, sustained wildfires, and frequent, more extreme storms experienced in recent years are a direct result of rising global temperatures brought on by humans’ addition of carbon dioxide to the atmosphere. And a new MIT study on extreme climate events in Earth’s ancient history suggests that today’s planet may become more volatile as it continues to warm.

    The study, appearing today in Science Advances, examines the paleoclimate record of the last 66 million years, during the Cenozoic era, which began shortly after the extinction of the dinosaurs. The scientists found that during this period, fluctuations in the Earth’s climate experienced a surprising “warming bias.” In other words, there were far more warming events — periods of prolonged global warming, lasting thousands to tens of thousands of years — than cooling events. What’s more, warming events tended to be more extreme, with greater shifts in temperature, than cooling events.

    The researchers say a possible explanation for this warming bias may lie in a “multiplier effect,” whereby a modest degree of warming — for instance from volcanoes releasing carbon dioxide into the atmosphere — naturally speeds up certain biological and chemical processes that enhance these fluctuations, leading, on average, to still more warming.

    Interestingly, the team observed that this warming bias disappeared about 5 million years ago, around the time when ice sheets started forming in the Northern Hemisphere. It’s unclear what effect the ice has had on the Earth’s response to climate shifts. But as today’s Arctic ice recedes, the new study suggests that a multiplier effect may kick back in, and the result may be a further amplification of human-induced global warming.

    “The Northern Hemisphere’s ice sheets are shrinking, and could potentially disappear as a long-term consequence of human actions,” says the study’s lead author Constantin Arnscheidt, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “Our research suggests that this may make the Earth’s climate fundamentally more susceptible to extreme, long-term global warming events such as those seen in the geologic past.”

    Arnscheidt’s study co-author is Daniel Rothman, professor of geophysics at MIT and co-founder and co-director of MIT’s Lorenz Center.

    A volatile push

    For their analysis, the team consulted large databases of sediments containing deep-sea benthic foraminifera — single-celled organisms that have been around for hundreds of millions of years and whose hard shells are preserved in sediments. The composition of these shells is affected by the ocean temperatures as organisms are growing; the shells are therefore considered a reliable proxy for the Earth’s ancient temperatures.

    For decades, scientists have analyzed the composition of these shells, collected from all over the world and dated to various time periods, to track how the Earth’s temperature has fluctuated over millions of years. 

    “When using these data to study extreme climate events, most studies have focused on individual large spikes in temperature, typically of a few degrees Celsius warming,” Arnscheidt says. “Instead, we tried to look at the overall statistics and consider all the fluctuations involved, rather than picking out the big ones.”

    The team first carried out a statistical analysis of the data and observed that, over the last 66 million years, the distribution of global temperature fluctuations didn’t resemble a standard bell curve, with symmetric tails representing an equal probability of extreme warm and extreme cool fluctuations. Instead, the curve was noticeably lopsided, skewed toward more warm than cool events. The curve also exhibited a noticeably longer tail, representing warm events that were more extreme, or of higher temperature, than the most extreme cold events.

    “This indicates there’s some sort of amplification relative to what you would otherwise have expected,” Arnscheidt says. “Everything’s pointing to something fundamental that’s causing this push, or bias toward warming events.”

    “It’s fair to say that the Earth system becomes more volatile, in a warming sense,” Rothman adds.

    A warming multiplier

    The team wondered whether this warming bias might have been a result of “multiplicative noise” in the climate-carbon cycle. Scientists have long understood that higher temperatures, up to a point, tend to speed up biological and chemical processes. Because the carbon cycle, which is a key driver of long-term climate fluctuations, is itself composed of such processes, increases in temperature may lead to larger fluctuations, biasing the system towards extreme warming events.

    In mathematics, there exists a set of equations that describes such general amplifying, or multiplicative effects. The researchers applied this multiplicative theory to their analysis to see whether the equations could predict the asymmetrical distribution, including the degree of its skew and the length of its tails.
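
    As a minimal sketch of the idea (not the paper’s actual equations), the toy model below relaxes temperature back toward a baseline while random noise kicks it around; when the noise amplitude grows with temperature, the fluctuation distribution skews warm, while purely additive noise stays symmetric. All parameters are illustrative.

    ```python
    import numpy as np
    from scipy.stats import skew

    rng = np.random.default_rng(42)
    lam, dt, nsteps = 0.1, 1.0, 100_000
    sqdt = dt ** 0.5

    def simulate(multiplicative: bool) -> np.ndarray:
        """Langevin model dT = -lam*T dt + sigma(T) dW, with an optional
        state-dependent ("multiplicative") noise amplitude."""
        T = np.zeros(nsteps)
        for i in range(1, nsteps):
            sigma = 0.1
            if multiplicative:
                sigma *= 1.0 + 0.5 * max(T[i - 1], 0.0)  # warming boosts noise
            T[i] = T[i - 1] - lam * T[i - 1] * dt + sigma * sqdt * rng.normal()
        return T

    print("additive noise skew:      ", round(skew(simulate(False)), 2))  # ~0
    print("multiplicative noise skew:", round(skew(simulate(True)), 2))   # > 0
    ```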

    In the end, they found that the data, and the observed bias toward warming, could be explained by the multiplicative theory. In other words, it’s very likely that, over the last 66 million years, periods of modest warming were on average further enhanced by multiplier effects, such as the response of biological and chemical processes that further warmed the planet.

    As part of the study, the researchers also looked at the correlation between past warming events and changes in Earth’s orbit. Over hundreds of thousands of years, Earth’s orbit around the sun regularly becomes more or less elliptical. But scientists have wondered why many past warming events appeared to coincide with these changes, and why these events feature outsized warming compared with what the change in Earth’s orbit could have wrought on its own.

    So, Arnscheidt and Rothman incorporated the Earth’s orbital changes into the multiplicative model and their analysis of Earth’s temperature changes, and found that multiplier effects could predictably amplify, on average, the modest temperature rises due to changes in Earth’s orbit.

    “Climate warms and cools in synchrony with orbital changes, but the orbital cycles themselves would predict only modest changes in climate,” Rothman says. “But if we consider a multiplicative model, then modest warming, paired with this multiplier effect, can result in extreme events that tend to occur at the same time as these orbital changes.”

    “Humans are forcing the system in a new way,” Arnscheidt adds. “And this study is showing that, when we increase temperature, we’re likely going to interact with these natural, amplifying effects.”

    This research was supported, in part, by MIT’s School of Science.