More stories

  • Setting carbon management in stone

    Keeping global temperatures within limits deemed safe by the Intergovernmental Panel on Climate Change means doing more than slashing carbon emissions. It means reversing them.

    “If we want to be anywhere near those limits [of 1.5 or 2 C], then we have to be carbon neutral by 2050, and then carbon negative after that,” says Matěj Peč, a geoscientist and the Victor P. Starr Career Development Assistant Professor in the Department of Earth, Atmospheric, and Planetary Sciences (EAPS).

    Going negative will require finding ways to radically increase the world’s capacity to capture carbon from the atmosphere and put it somewhere it will not leak back out. Carbon capture and storage projects already suck in tens of millions of metric tons of carbon each year. But putting a dent in emissions will mean capturing many billions of metric tons more. Today, people emit around 40 billion tons of carbon dioxide each year globally, mainly by burning fossil fuels.

    Because of the need for new ideas when it comes to carbon storage, Peč has created a proposal for the MIT Climate Grand Challenges competition — a bold and sweeping effort by the Institute to support paradigm-shifting research and innovation to address the climate crisis. Called the Advanced Carbon Mineralization Initiative, his team’s proposal aims to bring geologists, chemists, and biologists together to make permanently storing carbon underground workable under different geological conditions. That means finding ways to speed up the process by which carbon pumped underground is turned into rock, or mineralized.

    “That’s what the geology has to offer,” says Peč, who is a lead on the project, along with Ed Boyden, professor of biological engineering, brain and cognitive sciences, and media arts and sciences, and Yogesh Surendranath, professor of chemistry. “You look for the places where you can safely and permanently store these huge volumes of CO2.”

    Peč’s proposal is one of 27 finalists selected from a pool of almost 100 Climate Grand Challenge proposals submitted by collaborators from across the Institute. Each finalist team received $100,000 to further develop their research proposals. A subset of finalists will be announced in April, making up a portfolio of multiyear “flagship” projects receiving additional funding and support.

    Building industries capable of going carbon negative presents huge technological, economic, environmental, and political challenges. For one, it’s expensive and energy-intensive to capture carbon from the air with existing technologies, which are “hellishly complicated,” says Peč. Much of the carbon capture underway today focuses on more concentrated sources like coal- or gas-burning power plants.

    It’s also difficult to find geologically suitable sites for storage. To keep it in the ground after it has been captured, carbon must either be trapped in airtight reservoirs or turned to stone.

    One of the best places for carbon capture and storage (CCS) is Iceland, where a number of CCS projects are up and running. The island’s volcanic geology helps speed up the mineralization process, as carbon pumped underground interacts with basalt rock at high temperatures. In that ideal setting, says Peč, 95 percent of carbon injected underground is mineralized after just two years — a geological flash.
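    The quoted figure implies strikingly fast kinetics by geological standards. As a rough illustration (the first-order rate law below is an assumption made for the sake of arithmetic, not a model from the project), 95 percent mineralization in two years pins down a rate constant:

    ```python
    import math

    # Hypothetical first-order mineralization model: fraction mineralized
    # after t years is f(t) = 1 - exp(-k * t). The Icelandic figure of
    # 95 percent in two years then determines the rate constant k:
    mineralized_fraction = 0.95
    years = 2.0
    k = -math.log(1.0 - mineralized_fraction) / years  # ~1.5 per year

    # Half-life of injected CO2 before it is locked into rock:
    half_life = math.log(2.0) / k  # ~0.46 years

    print(f"k = {k:.2f} per year, half-life = {half_life:.2f} years")
    ```

    Under this toy model, about half of the injected CO2 turns to stone within roughly six months, consistent with Peč’s description of a geological flash.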

    But Iceland’s geology is unusual. Elsewhere, reaching suitable rocks at suitable temperatures requires deeper drilling, which adds cost to already expensive projects. Further, says Peč, there’s not a complete understanding of how different factors influence the speed of mineralization.

    Peč’s Climate Grand Challenge proposal would study how carbon mineralizes under different conditions, as well as explore ways to make mineralization happen more rapidly by mixing the carbon dioxide with different fluids before injecting it underground. Another idea — and the reason why there are biologists on the team — is to learn from various organisms adept at turning carbon into calcite shells, the same stuff that makes up limestone.

    Two other carbon management proposals, led by EAPS Cecil and Ida Green Professor Bradford Hager, were also selected as Climate Grand Challenge finalists. They focus both on the technologies necessary for capturing and storing gigatons of carbon and on the logistical challenges involved in such an enormous undertaking.

    That involves everything from choosing suitable storage sites to regulatory and environmental issues to bringing disparate technologies together to improve the whole pipeline. The proposals emphasize CCS systems that can be powered by renewable sources and can respond dynamically to the needs of different hard-to-decarbonize industries, like concrete and steel production.

    “We need to have an industry that is on the scale of the current oil industry that will not be doing anything but pumping CO2 into storage reservoirs,” says Peč.

    For a problem that involves capturing enormous amounts of gas from the atmosphere and storing it underground, it’s no surprise EAPS researchers are so involved. The Earth sciences have “everything” to offer, says Peč, including the good news that the Earth has more than enough places where carbon might be stored.

    “Basically, the Earth is really, really large,” says Peč. “The reasonably accessible places, which are close to the continents, store somewhere on the order of tens of thousands to hundreds of thousands of gigatons of carbon. That’s orders of magnitude more than we need to put back in.”

  • Microbes and minerals may have set off Earth’s oxygenation

    For the first 2 billion years of Earth’s history, there was barely any oxygen in the air. While some microbes were photosynthesizing by the latter part of this period, oxygen had not yet accumulated at levels that would impact the global biosphere.

    But somewhere around 2.3 billion years ago, this stable, low-oxygen equilibrium shifted, and oxygen began building up in the atmosphere, eventually reaching the life-sustaining levels we breathe today. This rapid infusion is known as the Great Oxygenation Event, or GOE. What triggered the event and pulled the planet out of its low-oxygen funk is one of the great mysteries of science.

    A new hypothesis, proposed by MIT scientists, suggests that oxygen finally started accumulating in the atmosphere thanks to interactions between certain marine microbes and minerals in ocean sediments. These interactions helped prevent oxygen from being consumed, setting off a self-amplifying process where more and more oxygen was made available to accumulate in the atmosphere.

    The scientists have laid out their hypothesis using mathematical and evolutionary analyses, showing that there were indeed microbes that existed before the GOE and evolved the ability to interact with sediment in the way that the researchers have proposed.

    Their study, appearing today in Nature Communications, is the first to connect the co-evolution of microbes and minerals to Earth’s oxygenation.

    “Probably the most important biogeochemical change in the history of the planet was oxygenation of the atmosphere,” says study author Daniel Rothman, professor of geophysics in MIT’s Department of Earth, Atmospheric, and Planetary Sciences (EAPS). “We show how the interactions of microbes, minerals, and the geochemical environment acted in concert to increase oxygen in the atmosphere.”

    The study’s co-authors include lead author Haitao Shang, a former MIT graduate student, and Gregory Fournier, associate professor of geobiology in EAPS.

    A step up

    Today’s oxygen levels in the atmosphere reflect a stable balance between processes that produce oxygen and those that consume it. Prior to the GOE, the atmosphere maintained a different kind of equilibrium, with producers and consumers of oxygen in balance, but in a way that didn’t leave much extra oxygen for the atmosphere.

    What could have pushed the planet out of one stable, oxygen-deficient state to another stable, oxygen-rich state?

    “If you look at Earth’s history, it appears there were two jumps, where you went from a steady state of low oxygen to a steady state of much higher oxygen, once in the Paleoproterozoic, once in the Neoproterozoic,” Fournier notes. “These jumps couldn’t have been because of a gradual increase in excess oxygen. There had to have been some feedback loop that caused this step-change in stability.”

    He and his colleagues wondered whether such a positive feedback loop could have come from a process in the ocean that made some organic carbon unavailable to its consumers. Organic carbon is mainly consumed through oxidation, usually accompanied by the consumption of oxygen — a process by which microbes in the ocean use oxygen to break down organic matter, such as detritus that has settled in sediment. The team wondered: Could there have been some process by which the presence of oxygen stimulated its further accumulation?

    Shang and Rothman worked out a mathematical model that made the following prediction: If microbes possessed the ability to only partially oxidize organic matter, the partially-oxidized matter, or “POOM,” would effectively become “sticky,” and chemically bind to minerals in sediment in a way that would protect the material from further oxidation. The oxygen that would otherwise have been consumed to fully degrade the material would instead be free to build up in the atmosphere. This process, they found, could serve as a positive feedback, providing a natural pump to push the atmosphere into a new, high-oxygen equilibrium.
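    The feedback at the heart of this idea can be sketched with a toy dynamical system. Everything below (the functional forms, the parameters, the protection term) is invented for illustration and is not the authors’ actual mathematics; it only shows how a sink that weakens as oxygen rises can produce two stable equilibria:

    ```python
    # Toy bistable model of the proposed POOM feedback. Oxygen O is
    # produced at a constant rate p. Its sink (oxidation of organic
    # carbon) weakens as O rises, because partial oxidation yields
    # "sticky" POOM that binds to minerals and escapes further
    # oxidation. A weak linear term represents other oxygen sinks.

    def dO_dt(O, p=1.0, c=4.0, K=1.0, e=0.05):
        """Net oxygen tendency: constant source minus a self-weakening sink."""
        protected = O**2 / (O**2 + K**2)  # fraction of carbon shielded as POOM
        return p - c * O * (1.0 - protected) - e * O

    def equilibrate(O0, dt=0.05, steps=10_000):
        """Forward-Euler integration until the system settles."""
        O = O0
        for _ in range(steps):
            O += dt * dO_dt(O)
        return O

    low = equilibrate(0.1)   # oxygen-poor start stays in the low-oxygen state
    high = equilibrate(6.0)  # a big enough push tips it to the high-oxygen state
    print(f"low state ~{low:.2f}, high state ~{high:.2f}")
    ```

    Starting oxygen-poor, the system settles into a low-oxygen steady state; a large enough perturbation tips it into a distinct high-oxygen steady state, mirroring the step-change in stability that Fournier describes.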

    “That led us to ask, is there a microbial metabolism out there that produced POOM?” Fournier says.

    In the genes

    To answer this, the team searched through the scientific literature and identified a group of microbes that partially oxidizes organic matter in the deep ocean today. These microbes belong to the bacterial group SAR202, and their partial oxidation is carried out through an enzyme, Baeyer-Villiger monooxygenase, or BVMO.

    The team carried out a phylogenetic analysis to see how far back the microbe, and the gene for the enzyme, could be traced. They found that the bacteria did indeed have ancestors dating back before the GOE, and that the gene for the enzyme could be traced across various microbial species, as far back as pre-GOE times.

    What’s more, they found that the gene’s diversification, or the number of species that acquired the gene, increased significantly during times when the atmosphere experienced spikes in oxygenation, including once during the GOE in the Paleoproterozoic, and again in the Neoproterozoic.

    “We found some temporal correlations between diversification of POOM-producing genes, and the oxygen levels in the atmosphere,” Shang says. “That supports our overall theory.”

    Confirming this hypothesis will require far more follow-up, from experiments in the lab to surveys in the field, and everything in between. With their new study, the team has introduced a new suspect in the age-old case of what oxygenated Earth’s atmosphere.

    “Proposing a novel method, and showing evidence for its plausibility, is the first but important step,” Fournier says. “We’ve identified this as a theory worthy of study.”

    This work was supported in part by the mTerra Catalyst Fund and the National Science Foundation.

  • Study: Ice flow is more sensitive to stress than previously thought

    The rate of glacier ice flow is more sensitive to stress than previously calculated, according to a new study by MIT researchers that upends a decades-old equation used to describe ice flow.

    Stress in this case refers to the forces acting on Antarctic glaciers, primarily gravity, which drags the ice down toward lower elevations. Viscous glacier ice flows “really similarly to honey,” explains Joanna Millstein, a PhD student in the Glacier Dynamics and Remote Sensing Group and lead author of the study. “If you squeeze honey in the center of a piece of toast, and it piles up there before oozing outward, that’s the exact same motion that’s happening for ice.”

    The revision to the equation proposed by Millstein and her colleagues should improve models used to predict glacier ice flow. This could help glaciologists predict how Antarctic ice flow might contribute to future sea level rise, although Millstein says the equation change is unlikely to raise estimates of sea level rise beyond the maximum levels already predicted under climate change models.

    “Almost all our uncertainties about sea level rise coming from Antarctica have to do with the physics of ice flow, though, so this will hopefully be a constraint on that uncertainty,” she says.

    Other authors on the paper, published in Nature Communications Earth and Environment, include Brent Minchew, the Cecil and Ida Green Career Development Professor in MIT’s Department of Earth, Atmospheric, and Planetary Sciences, and Samuel Pegler, a university academic fellow at the University of Leeds.

    Benefits of big data

    The equation in question, called Glen’s Flow Law, is the most widely used equation to describe viscous ice flow. It was developed in 1958 by British scientist J.W. Glen, one of the few glaciologists working on the physics of ice flow in the 1950s, according to Millstein.

    With relatively few scientists working in the field, and with most large glacier ice sheets remote and inaccessible, there were few attempts to calibrate Glen’s Flow Law outside the lab until recently. In the recent study, Millstein and her colleagues took advantage of a new wealth of satellite imagery over Antarctic ice shelves, the floating extensions of the continent’s ice sheet, to revise the stress exponent of the flow law.
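    Glen’s Flow Law states that strain rate grows as stress raised to a power n (classically n = 3). The sketch below shows why the stress exponent matters so much; the revised exponent of 4 used here is purely illustrative, since the article does not report the team’s value:

    ```python
    # Glen's Flow Law: strain_rate = A * tau**n, where tau is deviatoric
    # stress and A is a temperature-dependent softness prefactor.
    # The classical exponent is n = 3; n = 4 below is an illustrative
    # stand-in for a revised, more stress-sensitive exponent.

    def strain_rate(tau, A=1.0, n=3.0):
        """Strain rate for stress tau (arbitrary units)."""
        return A * tau**n

    # If the driving stress doubles, how much faster does the ice deform?
    ratio_classic = strain_rate(2.0, n=3.0) / strain_rate(1.0, n=3.0)  # 2**3 = 8x
    ratio_revised = strain_rate(2.0, n=4.0) / strain_rate(1.0, n=4.0)  # 2**4 = 16x
    print(f"n=3: {ratio_classic:.0f}x faster, n=4: {ratio_revised:.0f}x faster")
    ```

    A higher exponent means the same change in stress produces a much larger change in flow speed, which is what “more sensitive to stress” means in practice.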

    “In 2002, this major ice shelf [Larsen B] collapsed in Antarctica, and all we have from that collapse is two satellite images that are a month apart,” she says. “Now, over that same area we can get [imagery] every six days.”

    The new analysis shows that “the ice flow in the most dynamic, fastest-changing regions of Antarctica — the ice shelves, which basically hold back and hug the interior of the continental ice — is more sensitive to stress than commonly assumed,” Millstein says. She’s optimistic that the growing record of satellite data will help capture rapid changes on Antarctica in the future, providing insights into the underlying physical processes of glaciers.   

    But stress isn’t the only thing that affects ice flow, the researchers note. Other parts of the flow law equation represent differences in temperature, ice grain size and orientation, and impurities and water contained in the ice — all of which can alter flow velocity. Factors like temperature could be especially important in understanding how ice flow impacts sea level rise in the future, Millstein says.

    Cracking under strain

    Millstein and colleagues are also studying the mechanics of ice sheet collapse, which involves different physical models than those used to understand the ice flow problem. “The cracking and breaking of ice is what we’re working on now, using strain rate observations,” Millstein says.

    The researchers use InSAR, radar images of the Earth’s surface collected by satellites, to observe deformations of the ice sheets that can be used to make precise measurements of strain. By observing areas of ice with high strain rates, they hope to better understand the rate at which crevasses and rifts propagate to trigger collapse.

    The research was supported by the National Science Foundation.

  • Using soap to remove micropollutants from water

    Imagine millions of soapy sponges the size of human cells that can clean water by soaking up contaminants. This simple analogy describes technology that MIT chemical engineers have recently developed to remove micropollutants from water — a concerning, worldwide problem.

    Patrick S. Doyle, the Robert T. Haslam Professor of Chemical Engineering, PhD student Devashish Pratap Gokhale, and undergraduate Ian Chen recently published their research on micropollutant removal in the journal ACS Applied Polymer Materials. The work is funded by MIT’s Abdul Latif Jameel Water and Food Systems Lab (J-WAFS).

    In spite of their low concentrations (about 0.01–100 micrograms per liter), micropollutants can be hazardous to the ecosystem and to human health. They come from a variety of sources and have been detected in almost all bodies of water, says Gokhale. Pharmaceuticals passing through people and animals, for example, can end up as micropollutants in the water supply. Others, like the endocrine disruptor bisphenol A (BPA), can leach from plastics during industrial manufacturing. Pesticides, dyes, petrochemicals, and per- and polyfluoroalkyl substances, more commonly known as PFAS, are also examples of micropollutants, as are some heavy metals like lead and arsenic. These are just some of the kinds of micropollutants, all of which can be toxic to humans and animals over time, potentially causing cancer, organ damage, developmental defects, or other adverse effects.

    Micropollutants are numerous, but since their collective mass is small, they are difficult to remove from water. Currently, the most common practice for removing micropollutants from water is activated carbon adsorption. In this process, water passes through a carbon filter, which removes only about 30 percent of micropollutants. Producing and regenerating activated carbon requires high temperatures and specialized equipment, and consumes large amounts of energy. Reverse osmosis can also be used to remove micropollutants from water; however, “it doesn’t lead to good elimination of this class of molecules, because of both their concentration and their molecular structure,” explains Doyle.

    Inspired by soap

    When devising their solution for how to remove micropollutants from water, the MIT researchers were inspired by a common household cleaning supply — soap. Soap cleans everything from our hands and bodies to dirty dishes to clothes, so perhaps the chemistry of soap could also be applied to sanitizing water. Soap contains molecules called surfactants, which have both hydrophobic (water-hating) and hydrophilic (water-loving) components. When water comes in contact with soap, the hydrophobic parts of the surfactant stick together, assembling into spherical structures called micelles with the hydrophobic portions of the molecules in the interior. The hydrophobic micelle cores trap and help carry away oily substances like dirt.

    Doyle’s lab synthesized micelle-laden hydrogel particles to essentially cleanse water. Gokhale explains that they used microfluidics which “involve processing fluids on very small, micron-like scales” to generate uniform polymeric hydrogel particles continuously and reproducibly. These hydrogels, which are porous and absorbent, incorporate a surfactant, a photoinitiator (a molecule that creates reactive species), and a cross-linking agent known as PEGDA. The surfactant assembles into micelles that are chemically bonded to the hydrogel using ultraviolet light. When water flows through this micro-particle system, micropollutants latch onto the micelles and separate from the water. The physical interaction used in the system is strong enough to pull micropollutants from water, but weak enough that the hydrogel particles can be separated from the micropollutants, restabilized, and reused. Lab testing shows that both the speed and extent of pollutant removal increase when the amount of surfactant incorporated into the hydrogels is increased.
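    The reported trend, faster and more complete removal at higher surfactant loading, can be captured in a toy uptake model. The rate law, the scalings, and every parameter below are assumptions made for illustration, not the paper’s model:

    ```python
    import math

    # Toy uptake model: pollutant partitions from water into micelle
    # cores inside the hydrogel. Both the uptake rate constant and the
    # equilibrium partition fraction are assumed to grow with the
    # surfactant loading s (a made-up dimensionless loading, 0..1).

    def removed_fraction(t, s, k0=1.0, Kp0=10.0):
        """Fraction of pollutant removed after contact time t (arbitrary units)."""
        Kp = Kp0 * s              # partition coefficient scales with surfactant
        f_eq = Kp / (1.0 + Kp)    # equilibrium fraction held in the particles
        k = k0 * s                # more micelles -> faster uptake
        return f_eq * (1.0 - math.exp(-k * t))

    low = removed_fraction(t=2.0, s=0.2)   # lightly loaded particles
    high = removed_fraction(t=2.0, s=0.8)  # heavily loaded particles
    print(f"removed: {low:.2f} vs {high:.2f}")
    ```

    With these made-up numbers, quadrupling the surfactant loading more than triples the fraction of pollutant removed in the same contact time, matching the direction of the lab result.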

    “We’ve shown that in terms of rate of pullout, which is what really matters when you scale this up for industrial use, that with our initial format, we can already outperform the activated carbon,” says Doyle. “We can actually regenerate these particles very easily at room temperature. Nearly 10 regeneration cycles with minimal change in performance,” he adds.

    Regeneration of the particles occurs by soaking them in 90 percent ethanol, whereby “all the pollutants just come out of the particles and back into the ethanol,” says Gokhale. Ethanol is biosafe at low concentrations, inexpensive, and combustible, allowing for safe and economically feasible disposal. The recycling of the hydrogel particles makes this technology sustainable, which is a large advantage over activated carbon. The hydrogels can also be tuned to any hydrophobic micropollutant, making this system a novel, flexible approach to water purification.

    Scaling up

    The team experimented in the lab using 2-naphthol, an organic pollutant of concern that is known to be difficult to remove with conventional water filtration methods. They hope to continue testing with real water samples.

    “Right now, we spike one micropollutant into pure lab water. We’d like to get water samples from the natural environment, that we can study and look at experimentally,” says Doyle. 

    By using microfluidics to increase particle production, Doyle and his lab hope to make household-scale filters to be tested with real wastewater. They then anticipate scaling up to municipal water treatment or even industrial wastewater treatment. 

    The lab recently filed an international patent application for their hydrogel technology that uses immobilized micelles. They plan to continue this work by experimenting with different kinds of hydrogels for the removal of heavy metal contaminants like lead from water. 

    Societal impacts

    Funded by a 2019 J-WAFS seed grant that is currently ongoing, this research has the potential to improve the speed, precision, efficiency, and environmental sustainability of water purification systems across the world. 

    “I always wanted to do work which had a social impact, and I was also always interested in water, because I think it’s really cool,” says Gokhale. He notes, “it’s really interesting how water sort of fits into different kinds of fields … we have to consider the cultures of peoples, how we’re going to use this, and then just the equity of these water processes.” Originally from India, Gokhale says he’s seen places that have barely any water at all and others that have floods year after year. “There’s a lot of interesting work to be done, and I think it’s work in this area that’s really going to impact a lot of people’s lives in years to come,” Gokhale says.

    Doyle adds, “Water is the most important thing, perhaps for the next decades to come, so it’s very fulfilling to work on something that is so important to the whole world.”

  • Using nature’s structures in wooden buildings

    Concern about climate change has focused significant attention on the buildings sector, in particular on the extraction and processing of construction materials. The concrete and steel industries together are responsible for as much as 15 percent of global carbon dioxide emissions. In contrast, wood provides a natural form of carbon sequestration, so there’s a move to use timber instead. Indeed, some countries are calling for public buildings to be made at least partly from timber, and large-scale timber buildings have been appearing around the world.

    Observing those trends, Caitlin Mueller ’07, SM ’14, PhD ’14, an associate professor of architecture and of civil and environmental engineering in the Building Technology Program at MIT, sees an opportunity for further sustainability gains. As the timber industry seeks to produce wooden replacements for traditional concrete and steel elements, the focus is on harvesting the straight sections of trees. Irregular sections such as knots and forks are turned into pellets and burned, or ground up to make garden mulch, which will decompose within a few years; both approaches release the carbon trapped in the wood to the atmosphere.

    For the past four years, Mueller and her Digital Structures research group have been developing a strategy for “upcycling” those waste materials by using them in construction — not as cladding or finishes aimed at improving appearance, but as structural components. “The greatest value you can give to a material is to give it a load-bearing role in a structure,” she says. But when builders use virgin materials, those structural components are the most emissions-intensive parts of buildings due to their large volume of high-strength materials. Using upcycled materials in place of those high-carbon systems is therefore especially impactful in reducing emissions.

    Mueller and her team focus on tree forks — that is, spots where the trunk or branch of a tree divides in two, forming a Y-shaped piece. In architectural drawings, there are many similar Y-shaped nodes where straight elements come together. In such cases, those units must be strong enough to support critical loads.

    “Tree forks are naturally engineered structural connections that work as cantilevers in trees, which means that they have the potential to transfer force very efficiently thanks to their internal fiber structure,” says Mueller. “If you take a tree fork and slice it down the middle, you see an unbelievable network of fibers that are intertwining to create these often three-dimensional load transfer points in a tree. We’re starting to do the same thing using 3D printing, but we’re nowhere near what nature does in terms of complex fiber orientation and geometry.”

    She and her team have developed a five-step “design-to-fabrication workflow” that combines natural structures such as tree forks with the digital and computational tools now used in architectural design. While there’s long been a “craft” movement to use natural wood in railings and decorative features, the use of computational tools makes it possible to use wood in structural roles — without excessive cutting, which is costly and may compromise the natural geometry and internal grain structure of the wood.

    Given the wide use of digital tools by today’s architects, Mueller believes that her approach is “at least potentially scalable and potentially achievable within our industrialized materials processing systems.” In addition, by combining tree forks with digital design tools, the novel approach can also support the trend among architects to explore new forms. “Many iconic buildings built in the past two decades have unexpected shapes,” says Mueller. “Tree branches have a very specific geometry that sometimes lends itself to an irregular or nonstandard architectural form — driven not by some arbitrary algorithm but by the material itself.”

    Step 0: Find a source, set goals

    Before starting their design-to-fabrication process, the researchers needed to locate a source of tree forks. Mueller found help in the Urban Forestry Division of the City of Somerville, Massachusetts, which maintains a digital inventory of more than 2,000 street trees — including more than 20 species — and records information about the location, approximate trunk diameter, and condition of each tree.

    With permission from the forestry division, the team was on hand in 2018 when a large group of trees was cut down near the site of the new Somerville High School. Among the heavy equipment on site was a chipper, poised to turn all the waste wood into mulch. Instead, the workers obligingly put the waste wood into the researchers’ truck to be brought to MIT.

    In their project, the MIT team sought not only to upcycle that waste material but also to use it to create a structure that would be valued by the public. “Where I live, the city has had to take down a lot of trees due to damage from an invasive species of beetle,” Mueller explains. “People get really upset — understandably. Trees are an important part of the urban fabric, providing shade and beauty.” She and her team hoped to reduce that animosity by “reinstalling the removed trees in the form of a new functional structure that would recreate the atmosphere and spatial experience previously provided by the felled trees.”

    With their source and goals identified, the researchers were ready to demonstrate the five steps in their design-to-fabrication workflow for making spatial structures using an inventory of tree forks.

    Step 1: Create a digital material library

    The first task was to turn their collection of tree forks into a digital library. They began by cutting off excess material to produce isolated tree forks. They then created a 3D scan of each fork. Mueller notes that as a result of recent progress in photogrammetry (measuring objects using photographs) and 3D scanning, they could create high-resolution digital representations of the individual tree forks with relatively inexpensive equipment, even using apps that run on a typical smartphone.

    In the digital library, each fork is represented by a “skeletonized” version showing three straight bars coming together at a point. The relative geometry and orientation of the branches are of particular interest because they determine the internal fiber orientation that gives the component its strength.

    Step 2: Find the best match between the initial design and the material library

    Like a tree, a typical architectural design is filled with Y-shaped nodes where three straight elements meet up to support a critical load. The goal was therefore to match the tree forks in the material library with the nodes in a sample architectural design.

    First, the researchers developed a “mismatch metric” for quantifying how well the geometries of a particular tree fork aligned with a given design node. “We’re trying to line up the straight elements in the structure with where the branches originally were in the tree,” explains Mueller. “That gives us the optimal orientation for load transfer and maximizes use of the inherent strength of the wood fiber.” The poorer the alignment, the higher the mismatch metric.

    The goal was to get the best overall distribution of all the tree forks among the nodes in the target design. Therefore, the researchers needed to try different fork-to-node distributions and, for each distribution, add up the individual fork-to-node mismatch errors to generate an overall, or global, matching score. The distribution with the best matching score would produce the most structurally efficient use of the total tree fork inventory.

    Since performing that process manually would take far too long to be practical, they turned to the “Hungarian algorithm,” a technique developed in 1955 for solving such problems. “The brilliance of the algorithm is solving that [matching] problem very quickly,” Mueller says. She notes that it’s a very general-use algorithm. “It’s used for things like marriage match-making. It can be used any time you have two collections of things that you’re trying to find unique matches between. So, we definitely didn’t invent the algorithm, but we were the first to identify that it could be used for this problem.”
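    In miniature, the matching step can be sketched as below. The mismatch metric (comparing each unit’s three inter-branch angles) is a hypothetical stand-in, since the article does not define the real one, and the brute-force search over assignments stands in for the Hungarian algorithm, which a real-sized inventory would require (for example via scipy.optimize.linear_sum_assignment):

    ```python
    from itertools import permutations

    # Each fork and each design node is described here by its three
    # branch angles in degrees (a hypothetical simplification of the
    # skeletonized geometry in the digital material library).

    def mismatch(fork, node):
        """Summed difference between the sorted inter-branch angles."""
        return sum(abs(a - b) for a, b in zip(sorted(fork), sorted(node)))

    def best_assignment(forks, nodes):
        """Assign a distinct fork to each node, minimizing total mismatch.

        Brute force is only viable for a handful of nodes; larger
        inventories need the Hungarian algorithm."""
        best_score, best_map = float("inf"), None
        for perm in permutations(range(len(forks)), len(nodes)):
            score = sum(mismatch(forks[f], nodes[n]) for n, f in enumerate(perm))
            if score < best_score:
                best_score, best_map = score, perm
        return best_score, best_map

    forks = [(110, 120, 130), (90, 130, 140), (100, 100, 160), (95, 125, 140)]
    nodes = [(112, 118, 130), (96, 124, 140)]
    score, assignment = best_assignment(forks, nodes)
    print(score, assignment)  # fork index chosen for each design node
    ```

    The search pairs each design node with the fork whose branch angles align best while keeping every assignment unique, which is exactly the global matching problem the Hungarian algorithm solves in polynomial time.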

    The researchers performed repeated tests to show possible distributions of the tree forks in their inventory and found that the matching score improved as the number of forks available in the material library increased — up to a point. In general, the researchers concluded that the mismatch score was lowest, and thus best, when there were about three times as many forks in the material library as there were nodes in the target design.

    Step 3: Balance designer intention with structural performance

    The next step in the process was to incorporate the intention or preference of the designer. To permit that flexibility, each design includes a limited number of critical parameters, such as bar length and bending strain. Using those parameters, the designer can manually change the overall shape, or geometry, of the design or can use an algorithm that automatically changes, or “morphs,” the geometry. And every time the design geometry changes, the Hungarian algorithm recalculates the optimal fork-to-node matching.

    “Because the Hungarian algorithm is extremely fast, all the morphing and the design updating can be really fluid,” notes Mueller. In addition, any change to a new geometry is followed by a structural analysis that checks the deflections, strain energy, and other performance measures of the structure. On occasion, the automatically generated design that yields the best matching score may deviate far from the designer’s initial intention. In such cases, an alternative solution can be found that satisfactorily balances the design intention with a low matching score.
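    The interplay of morphing, re-matching, and structural checking amounts to a search loop over candidate geometries. The sketch below is a heavily simplified stand-in under stated assumptions: `score_match` and `structural_ok` are hypothetical placeholders (the real system runs the Hungarian matching and a full deflection/strain-energy analysis), and the one-dimensional "geometries" are toy data. Note the placeholder matcher greedily reuses forks, unlike the real one-to-one assignment.

```python
# Hypothetical sketch of the morph-and-rematch design loop described above.
# All names and numbers are illustrative assumptions, not the team's API.

def score_match(geometry, inventory):
    # Placeholder matching score: for each node angle, charge the distance
    # to the nearest fork angle. (Simplified: allows reusing a fork.)
    return sum(abs(node - min(inventory, key=lambda f: abs(f - node)))
               for node in geometry)

def structural_ok(geometry, max_angle=90.0):
    # Placeholder structural check, standing in for deflection and
    # strain-energy limits.
    return all(node <= max_angle for node in geometry)

def best_design(candidates, inventory):
    # Keep only structurally feasible variants, then pick the one whose
    # fork-to-node matching score is lowest.
    feasible = [g for g in candidates if structural_ok(g)]
    return min(feasible, key=lambda g: score_match(g, inventory))

# Toy 1-D stand-in: "geometries" are lists of node angles, and the
# inventory is a list of fork angles.
inventory = [30.0, 45.0, 80.0]
candidates = [[32.0, 50.0], [29.0, 46.0], [29.0, 120.0]]
print(best_design(candidates, inventory))  # → [29.0, 46.0]
```

    The third candidate is rejected by the structural check even though it might match a fork well, which mirrors the trade-off in the text: the best-matching geometry is not always the one the designer, or the structure, can accept.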

    Step 4: Automatically generate the machine code for fast cutting

    When the structural geometry and distribution of tree forks have been finalized, it’s time to think about actually building the structure. To simplify assembly and maintenance, the researchers prepare the tree forks by recutting their end faces to better match adjoining straight timbers and cutting off any remaining bark to reduce susceptibility to rot and fire.

    To guide that process, they developed a custom algorithm that automatically computes the cuts needed to make a given tree fork fit into its assigned node and to strip off the bark. The goal is to remove as little material as possible but also to avoid a complex, time-consuming machining process. “If we make too few cuts, we’ll cut off too much of the critical structural material. But we don’t want to make a million tiny cuts because it will take forever,” Mueller explains.
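    The trade-off Mueller describes — too few cuts wastes structural material, too many wastes machining time — can be framed as minimizing a combined cost. The model below is a toy assumption, not the team's algorithm: the removed-material curve and per-cut cost are invented purely to show the shape of the optimization.

```python
# Hypothetical cost model for choosing how many planar cuts to make on a
# tree fork. All numbers are illustrative assumptions.

MACHINING_COST_PER_CUT = 3.0  # assumed time penalty per cut

def material_removed(num_cuts):
    # Toy model: more cuts approximate the fork's surface more closely,
    # so less sound material is lost.
    return 24.0 / num_cuts

def total_cost(num_cuts):
    # Balance lost material against machining time.
    return material_removed(num_cuts) + MACHINING_COST_PER_CUT * num_cuts

best = min(range(1, 11), key=total_cost)
print(best, total_cost(best))  # → 3 17.0
```

    With these made-up numbers the sweet spot is three cuts; the real algorithm makes the same kind of compromise against actual fork geometry.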

    The team uses facilities at the Autodesk Boston Technology Center Build Space, where the robots are far larger than any at MIT and the processing is all automated. To prepare each tree fork, they mount it on a robotic arm that pushes the joint through a traditional band saw in different orientations, guided by computer-generated instructions. The robot also mills all the holes for the structural connections. “That’s helpful because it ensures that everything is aligned the way you expect it to be,” says Mueller.

    Step 5: Assemble the available forks and linear elements to build the structure

    The final step is to assemble the structure. The tree-fork-based joints are all irregular, and combining them with the precut, straight wooden elements could be difficult. However, they’re all labeled. “All the information for the geometry is embedded in the joint, so the assembly process is really low-tech,” says Mueller. “It’s like a child’s toy set. You just follow the instructions on the joints to put all the pieces together.”

    They installed their final structure temporarily on the MIT campus, but Mueller notes that it was only a portion of the structure they plan to eventually build. “It had 12 nodes that we designed and fabricated using our process,” she says, adding that the team’s work was “a little interrupted by the pandemic.” As activity on campus resumes, the researchers plan to finish designing and building the complete structure, which will include about 40 nodes and will be installed as an outdoor pavilion on the site of the felled trees in Somerville.

    In addition, they will continue their research. Plans include working with larger material libraries, some with multibranch forks, and replacing their 3D-scanning technique with computerized tomography scanning technologies that can automatically generate a detailed geometric representation of a tree fork, including its precise fiber orientation and density. And in a parallel project, they’ve been exploring using their process with other sources of materials, with one case study focusing on using material from a demolished wood-framed house to construct more than a dozen geodesic domes.

    To Mueller, the work to date already provides new guidance for the architectural design process. With digital tools, it has become easy for architects to analyze the embodied carbon or future energy use of a design option. “Now we have a new metric of performance: How well am I using available resources?” she says. “With the Hungarian algorithm, we can compute that metric basically in real time, so we can work rapidly and creatively with that as another input to the design process.”

    This research was supported by MIT’s School of Architecture and Planning via the HASS Award.

    This article appears in the Autumn 2021 issue of Energy Futures, the magazine of the MIT Energy Initiative.


    MIT ReACT welcomes first Afghan cohort to its largest-yet certificate program

    Through the championing support of the faculty and leadership of the MIT Afghan Working Group, convened last September by Provost Martin Schmidt and chaired by Associate Provost for International Activities Richard Lester, MIT has come together to support displaced Afghan learners and scholars in a time of crisis. The MIT Refugee Action Hub (ReACT) has opened opportunities for 25 talented Afghan learners to participate in the hub’s certificate program in computer and data science (CDS). Now in its fourth year, the program welcomes its largest and most diverse cohort to date — 136 learners from 29 countries.

    “Even in the face of extreme disruption, education and scholarship must continue, and MIT is committed to providing resources and safe forums for displaced scholars,” says Lester. “We greatly appreciate MIT ReACT’s work to create learning opportunities for Afghan students whose lives have been upended by the crisis in their homeland.”

    Currently, more than 3.5 million Afghans are internally displaced, while 2.5 million are registered refugees residing in other parts of the world. With millions in Afghanistan facing famine, poverty, and civil unrest in what has become the world’s largest humanitarian crisis, the United Nations predicts the number of Afghans forced to flee their homes will continue to rise. 

    “Forced displacement is on the rise, fueled not only by constant political, economic, and social turmoil worldwide, but also by the ongoing climate change crisis, which threatens costly disruptions to society and has potential to create unprecedented displacement internationally,” says associate professor of civil and environmental engineering and ReACT’s faculty founder Admir Masic. During the orientation for the new CDS cohort in January, Masic emphasized the great need for educational programs like ReACT’s that address the specific challenges refugees and displaced learners face.

    A former Bosnian refugee, Masic spent his teenage years in Croatia, where educational opportunities were limited for young people with refugee status. His experience motivated him to found ReACT, which launched in 2017. Housed within Open Learning, ReACT is an MIT-wide effort to deliver global education and professional development programs to underserved communities, including refugees and migrants. ReACT’s signature program, CDS, is a year-long, online program that combines MITx courses in programming and data science, personal and professional development workshops including MIT Bootcamps, and opportunities for practical experience.

    ReACT’s group of 25 learners from Afghanistan, 52 percent of whom are women, joins the larger CDS cohort in the program. They will receive support from their new colleagues as well as members of ReACT’s mentor and alumni network. While the majority of the group are residing around the world, including in Europe, North America, and neighboring countries, several still remain in Afghanistan. With the support of the Afghan Working Group, ReACT is working to connect with communities from the region to provide safe and inclusive learning environments for the cohort.

    Building community and confidence

    Selected from more than 1,000 applicants, the new CDS cohort reflected on their personal and professional goals during a weeklong orientation.

    “I am here because I want to change my career and learn basics in this field to then obtain networks that I wouldn’t have got if it weren’t for this program,” said Samiullah Ajmal, who is joining the program from Afghanistan.

    Interactive workshops on topics such as leadership development and virtual networking rounded out the week’s events. Members of ReACT’s greater community — which has grown in recent years to include a network of external collaborators including nonprofits, philanthropic supporters, universities, and alumni — helped facilitate these workshops and other orientation activities.

    For instance, Na’amal, a social enterprise that connects refugees to remote work opportunities, introduced the CDS learners to strategies for making career connections remotely. “We build confidence while doing,” says Susan Mulholland, a leadership and development coach with Na’amal who led the networking workshop.

    Along with the CDS program’s cohort-based model, ReACT also uses platforms that encourage regular communication between participants and with the larger ReACT network — making connections a critical component of the program.

    “I not only want to meet new people and make connections for my professional career, but I also want to test my communication and social skills,” says Pablo Andrés Uribe, a learner who lives in Colombia, describing ReACT’s emphasis on community-building. 

    Over the last two years, ReACT has expanded its geographic presence, growing from a hub in Jordan into a robust global community of many hubs, including in Colombia and Uganda. These regional sites connect talented refugees and displaced learners to internships and employment, startup networks and accelerators, and pathways to formal undergraduate and graduate education.

    This expansion is thanks to generous internal support from the MIT Office of the Provost and Associate Provost Richard Lester, and from external organizations including the Western Union Foundation. ReACT will build new hubs this year in Greece, Uruguay, and Afghanistan, as a result of gifts from the Hatsopoulos family and the Pfeffer family.

    Holding space to learn from each other

    In addition to establishing new global hubs, ReACT plans to expand its network of internship and experiential learning opportunities, increasing outreach to new collaborators such as nongovernmental organizations (NGOs), companies, and universities. Jointly with Na’amal and Paper Airplanes, a nonprofit that connects conflict-affected individuals with personal language tutors, ReACT will host the first Migration Summit. Scheduled for April 2022, the month-long global convening invites a broad range of participants, including displaced learners, universities, companies, nonprofits and NGOs, social enterprises, foundations, philanthropists, researchers, policymakers, employers, and governments, to address the key challenges and opportunities for refugee and migrant communities. The theme of the summit is “Education and Workforce Development in Displacement.”

    “The MIT Migration Summit offers a platform to discuss how new educational models, such as those employed in ReACT, can help solve emerging challenges in providing quality education and career opportunities to forcibly displaced and marginalized people around the world,” says Masic. 

    A key goal of the convening is to center the voices of those most directly impacted by displacement, such as ReACT’s learners from Afghanistan and elsewhere, in solution-making.


    MIT Center for Real Estate launches the Asia Real Estate Initiative

    To appreciate the explosive urbanization taking place in Asia, consider this analogy: Every 40 days, a city the equivalent size of Boston is built in Asia. Of the $24.7 trillion in real estate investment opportunities predicted by 2030 in emerging cities, $17.8 trillion (72 percent) will be in Asia. While this growth is exciting to the real estate industry, it brings with it attendant social and environmental issues.

    To promote a sustainable and innovative approach to this growth, leadership at the MIT Center for Real Estate (MIT CRE) recently established the Asia Real Estate Initiative (AREI), which aims to become a platform for industry leaders, entrepreneurs, and the academic community to find solutions to the practical concerns of real estate development across these countries.

    “Behind the creation of this initiative is the understanding that Asia is a living lab for the study of future global urban development,” says Hashim Sarkis, dean of the MIT School of Architecture and Planning.

    An investment in cities of the future

    One of the areas in AREI’s scope of focus is connecting sustainability and technology in real estate.

    “We believe the real estate sector should work cooperatively with the energy, science, and technology sectors to solve the climate challenges,” says Richard Lester, the Institute’s associate provost for international activities. “AREI will engage academics and industry leaders, nongovernment organizations, and civic leaders globally and in Asia, to advance sharing knowledge and research.”

    In its effort to understand how trends and new technologies will impact the future of real estate, AREI has received initial support from a prominent alumnus of MIT CRE who wishes to remain anonymous. The gift will support a cohort of researchers working on innovative technologies applicable to advancing real estate sustainability goals, with a special focus on the global and Asia markets. The call for applications is already under way, with AREI seeking to collaborate with scholars who have backgrounds in economics, finance, urban planning, technology, engineering, and other disciplines.

    “The research on real estate sustainability and technology could transform this industry and help invent global real estate of the future,” says Professor Siqi Zheng, faculty director of MIT CRE and AREI faculty chair. “The pairing of real estate and technology often leads to innovative and differential real estate development strategies such as buildings that are green, smart, and healthy.”

    The initiative arrives at a key time to make a significant impact and cement a leadership role in real estate development across Asia. MIT CRE is positioned to help the industry increase its efficiency and social responsibility, with nearly 40 years of pioneering research in the field. Zheng, an established scholar with expertise on urban growth in fast-urbanizing regions, is the former president of the Asia Real Estate Society and sits on the board of the American Real Estate and Urban Economics Association. Her research has been supported by international institutions including the World Bank, the Asian Development Bank, and the Lincoln Institute of Land Policy.

    “The researchers in AREI are now working on three interrelated themes: the future of real estate and live-work-play dynamics; connecting sustainability and technology in real estate; and innovations in real estate finance and business,” says Zheng.

    The first theme has already yielded a book — “Toward Urban Economic Vibrancy: Patterns and Practices in Asia’s New Cities” — recently published by SA+P Press.

    Engaging thought leaders and global stakeholders

    AREI also plans to collaborate with counterparts in Asia to contribute to research, education, and industry dialogue to meet the challenges of sustainable city-making across the continent and identify areas for innovation. Traditionally, real estate has been a very local business with a lengthy value chain, according to Zhengzhen Tan, director of AREI. Most developers focused their career on one particular product type in one particular regional market. AREI is working to change that dynamic.

    “We want to create a cross-border dialogue within Asia and among Asia, North America, and European leaders to exchange knowledge and practices,” says Tan. “The real estate industry’s learning costs are very high compared to other sectors. Collective learning will reduce the cost of failure and have a significant impact on these global issues.”

    The 2021 United Nations Climate Change Conference in Glasgow shed additional light on environmental commitments being made by governments in Asia. With real estate representing 40 percent of global greenhouse gas emissions, the Asian real estate market is undergoing an urgent transformation to deliver on this commitment.

    “One of the most pressing calls is to get to net-zero emissions for real estate development and operation,” says Tan. “Real estate investors and developers are making short- and long-term choices that are locking in environmental footprints for the ‘decisive decade.’ We hope to inspire developers and investors to think differently and get out of their comfort zone.”


    New maps show airplane contrails over the U.S. dropped steeply in 2020

    As Covid-19’s initial wave crested around the world, travel restrictions and a drop in passengers led to a record number of grounded flights in 2020. The air travel reduction cleared the skies of not just jets but also the fluffy white contrails they produce high in the atmosphere.

    MIT engineers have mapped the contrails that were generated over the United States in 2020, and compared the results to prepandemic years. They found that on any given day in 2018, and again in 2019, contrails covered a total area equal to Massachusetts and Connecticut combined. In 2020, this contrail coverage shrank by about 20 percent, mirroring a similar drop in U.S. flights.  

    While 2020’s contrail dip may not be surprising, the findings are proof that the team’s mapping technique works. Their study marks the first time researchers have captured the fine and ephemeral details of contrails over a large continental scale.

    Now, the researchers are applying the technique to predict where in the atmosphere contrails are likely to form. The cloud-like formations are known to play a significant role in aviation-related global warming. The team is working with major airlines to forecast regions in the atmosphere where contrails may form, and to reroute planes around these regions to minimize contrail production.

    “This kind of technology can help divert planes to prevent contrails, in real time,” says Steven Barrett, professor and associate head of MIT’s Department of Aeronautics and Astronautics. “There’s an unusual opportunity to halve aviation’s climate impact by eliminating most of the contrails produced today.”

    Barrett and his colleagues have published their results today in the journal Environmental Research Letters. His co-authors at MIT include graduate student Vincent Meijer, former graduate student Luke Kulik, research scientists Sebastian Eastham, Florian Allroggen, and Raymond Speth, and LIDS Director and professor Sertac Karaman.

    Trail training

    About half of the aviation industry’s contribution to global warming comes directly from planes’ carbon dioxide emissions. The other half is thought to be a consequence of their contrails. The signature white tails are produced when a plane’s hot, humid exhaust mixes with cool humid air high in the atmosphere. Emitted in thin lines, contrails quickly spread out and can act as blankets that trap the Earth’s outgoing heat.

    While a single contrail may not have much of a warming effect, taken together contrails have a significant impact. But the estimates of this effect are uncertain and based on computer modeling as well as limited satellite data. What’s more, traditional computer vision algorithms that analyze contrail data have a hard time discerning the wispy tails from natural clouds.

    To precisely pick out and track contrails over a large scale, the MIT team looked to images taken by NASA’s GOES-16, a geostationary satellite that hovers over the same swath of the Earth, including the United States, taking continuous, high-resolution images.

    The team first obtained about 100 images taken by the satellite, and trained a set of people to interpret remote sensing data and label each image’s pixel as either part of a contrail or not. They used this labeled dataset to train a computer-vision algorithm to discern a contrail from a cloud or other image feature.

    The researchers then ran the algorithm on about 100,000 satellite images, amounting to nearly 6 trillion pixels, each pixel representing an area of about 2 square kilometers. The images covered the contiguous U.S., along with parts of Canada and Mexico, and were taken about every 15 minutes, between Jan. 1, 2018, and Dec. 31, 2020.

    The algorithm automatically classified each pixel as either a contrail or not a contrail, and generated daily maps of contrails over the United States. These maps mirrored the major flight paths of most U.S. airlines, with some notable differences. For instance, contrail “holes” appeared around major airports, which reflects the fact that planes landing and taking off around airports are generally not high enough in the atmosphere for contrails to form.

    “The algorithm knows nothing about where planes fly, and yet when processing the satellite imagery, it resulted in recognizable flight routes,” Barrett says. “That’s one piece of evidence that says this method really does capture contrails over a large scale.”

    Cloudy patterns

    Based on the algorithm’s maps, the researchers calculated the total area covered each day by contrails in the U.S. On an average day in 2018 and in 2019, U.S. contrails took up about 43,000 square kilometers. This coverage dropped by 20 percent in March of 2020 as the pandemic set in. From then on, contrails slowly reappeared as air travel resumed through the year.
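    Once every pixel is classified, the daily coverage figure is just a scaled pixel count. A minimal sketch, assuming a binary contrail mask and the roughly 2-square-kilometer pixel footprint mentioned above (the tiny mask here is synthetic; the study's real masks span nearly 6 trillion pixels):

```python
# Illustrative per-day area bookkeeping: after the computer-vision model
# labels each satellite pixel as contrail (1) or not (0), daily coverage
# is the flagged-pixel count times the per-pixel area.

PIXEL_AREA_KM2 = 2.0  # approximate area represented by one pixel

# Synthetic 3x4 daily mask standing in for a full GOES-16 scene.
daily_mask = [
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 0],
]

contrail_pixels = sum(sum(row) for row in daily_mask)
coverage_km2 = contrail_pixels * PIXEL_AREA_KM2
print(coverage_km2)  # → 6.0 km^2 for this toy mask
```

    Summing such daily totals over each year, and comparing 2020 against 2018–2019, is what yields the roughly 20 percent drop the team reports.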

    The team also observed daily and seasonal patterns. In general, contrails appeared to peak in the morning and decline in the afternoon. This may be a training artifact: As natural cirrus clouds are more likely to form in the afternoon, the algorithm may have trouble discerning contrails amid the clouds later in the day. But it might also be an important indication of when contrails form most. Contrails also peaked in late winter and early spring, when more of the air is naturally colder and more conducive to contrail formation.

    The team has now adapted the technique to predict where contrails are likely to form in real time. Avoiding these regions, Barrett says, could take a significant, almost immediate chunk out of aviation’s global warming contribution.  

    “Most measures to make aviation sustainable take a long time,” Barrett says. “[Contrail avoidance] could be accomplished in a few years, because it requires small changes to how aircraft are flown, with existing airplanes and observational technology. It’s a near-term way of reducing aviation’s warming by about half.”

    The team is now working toward this objective of large-scale contrail avoidance using real-time satellite observations.

    This research was supported in part by NASA and the MIT Environmental Solutions Initiative.