More stories

  • Building communities, founding a startup with people in mind

    MIT postdoc Francesco Benedetti admits he wasn’t always a star student. But the people he met along his educational journey inspired him to strive, which led him to conduct research at MIT, launch a startup, and even lead the team that won the 2021 MIT $100K Entrepreneurship Competition. Now he is determined to make sure his company, Osmoses, succeeds in boosting the energy efficiency of traditional and renewable natural gas processing, hydrogen production, and carbon capture — thus helping to address climate change.

    “I can’t be grateful enough to MIT for bringing together a community of people who want to change the world,” Benedetti says. “Now we have a technology that can solve one of the big problems of our society.”

    Benedetti and his team have developed an innovative way to separate molecules using a membrane fine enough to extract impurities such as carbon dioxide or hydrogen sulfide from raw natural gas to obtain higher-quality fuel, fulfilling a crucial need in the energy industry. “Natural gas now provides about 40 percent of the energy used to power homes and industry in the United States,” Benedetti says. Using his team’s technology to upgrade natural gas more efficiently could reduce emissions of greenhouse gases while saving enough energy to power the equivalent of 7 million additional U.S. homes for a year, he adds.

    The MIT community

    Benedetti first came to MIT in 2017 as a visiting student from the University of Bologna in Italy, where he was working on membranes for gas separation for his PhD in chemical engineering. Having completed a master’s thesis on water desalination at the University of Texas (UT) at Austin, he connected with UT alumnus Zachary P. Smith, the Robert N. Noyce Career Development Professor of Chemical Engineering at MIT, and the two discovered they shared a vision. “We found ourselves very much aligned on the need for new technology in industry to lower the energy consumption of separating components,” Benedetti says.

    Although Benedetti had always been interested in making a positive impact on the world, particularly the environment, he says it was his university studies that first sparked his interest in more efficient separation technologies. “When you study chemical engineering, you understand hundreds of ways the field can have a positive impact in the world. But we learn very early that 15 percent of the world’s energy is wasted because of inefficient chemical separation — because we still rely on centuries-old technology,” he says. Most separation processes still use heat or toxic solvents to separate components, he explains.

    Still, Benedetti says, his main drive comes from the joy of working with terrific mentors and colleagues. “It’s the people I’ve met that really inspired me to tackle the biggest challenges and find that intrinsic motivation,” he says.

    To help build his community at MIT and provide support for international students, Benedetti co-founded the MIT Visiting Student Association (VISTA) in September 2017. By February 2018, the organization had hundreds of members and official Institute recognition. In May 2018, the group won two Institute awards, including the Golden Beaver Award for enhancing the campus environment. “VISTA gave me a sense of belonging; I loved it,” Benedetti says.

    Membrane technology

    Benedetti also published two papers on membrane research during his stint as a visiting student at MIT, so he was delighted to return in 2019 for postdoctoral work through the MIT Energy Initiative, where he was a 2019-20 ExxonMobil-MIT Energy Fellow. “I came back because the research was extremely exciting, but also because I got extremely passionate about the energy I found on campus and with the people,” he says.

    Returning to MIT enabled Benedetti to continue his work with Smith and Holden Lai, both of whom helped co-found Osmoses. Lai, a recent Stanford PhD in chemistry who was also a visiting student at MIT in 2018, is now the chief technology officer at Osmoses. Co-founder Katherine Mizrahi Rodriguez ’17, an MIT PhD candidate, joined the team more recently.

    Together, the Osmoses team has developed polymer membranes with microporosities capable of filtering gases by separating out molecules that differ by as little as a fraction of an angstrom — a unit of length equal to one hundred-millionth of a centimeter. “We can get up to five times higher selectivity than commercially available technology for methane upgrading, and this has been observed operating the membranes in industrially relevant environments,” Benedetti says.

    Today, methane upgrading — removing carbon dioxide (CO2) from raw natural gas to obtain a higher-grade fuel — is often accomplished using amine absorption, a process that uses toxic solvents to capture CO2 and burns methane to fuel the regeneration of those solvents for reuse. Using Osmoses’ filters would eliminate the need for such solvents while reducing CO2 emissions by up to 16 million metric tons per year in the United States alone, Benedetti says.

    The technology has a wide range of applications — in oxygen and nitrogen generation, hydrogen purification, and carbon capture, for example — but Osmoses plans to start with the $5 billion market for natural gas upgrading because the need to bring innovation and sustainability to that space is urgent, says Benedetti, who received guidance in bringing technology to market from MIT’s Deshpande Center for Technological Innovation. The Osmoses team has also received support from the MIT Sandbox Innovation Fund Program.

    The next step for the startup is to build an industrial-scale prototype, and Benedetti says the company got a huge boost toward that goal in May when it won the MIT $100K Entrepreneurship Competition, a student-run contest that has launched more than 160 companies since it began in 1990. Ninety teams began the competition by pitching their startup ideas; 20 received mentorship and development funding; then eight finalists presented business plans to compete for the $100,000 prize. “Because of this, we’re getting a lot of interest from venture capital firms, investors, companies, corporate funds, et cetera, that want to partner with us or to use our product,” he says. In June, the Osmoses team received a two-year Activate Fellowship, which will support moving its research to market; in October, it won the Northeast Regional and Carbon Sequestration Prizes at the Cleantech Open Accelerator; and in November, the team closed a $3 million pre-seed round of financing.

    FAIL!

    Naturally, Benedetti hopes Osmoses is on the path to success, but he wants everyone to know that there is no shame in failures that come from best efforts. He admits it took him three years longer than usual to finish his undergraduate and master’s degrees, and he says, “I have experienced the pressure you feel when society judges you like a book by its cover and how much a lack of inspired leaders and a supportive environment can kill creativity and the will to try.”

    That’s why in 2018 he, along with other MIT students and VISTA members, started FAIL!–Inspiring Resilience, an organization that provides a platform for sharing unfiltered stories and the lessons leaders have gleaned from failure. “We wanted to help de-stigmatize failure, appreciate vulnerabilities, and inspire humble leadership, eventually creating better communities,” Benedetti says. “If we can make failures, big and small, less intimidating and all-consuming, individuals with great potential will be more willing to take risks, think outside the box, and try things that may push new boundaries. In this way, more breakthrough discoveries are likely to follow, without compromising anyone’s mental health.”

    Benedetti says he will strive to create a supportive culture at Osmoses, because people are central to success. “What drives me every day is the people. I would have no story without the people around me,” he says. “The moment you lose touch with people, you lose the opportunity to create something special.”

    This article appears in the Autumn 2021 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • How to clean solar panels without water

    Solar power is expected to reach 10 percent of global power generation by the year 2030, and much of that is likely to be located in desert areas, where sunlight is abundant. But the accumulation of dust on solar panels or mirrors is already a significant issue — it can reduce the output of photovoltaic panels by as much as 30 percent in just one month — so regular cleaning is essential for such installations.

    But cleaning solar panels currently is estimated to use about 10 billion gallons of water per year — enough to supply drinking water for up to 2 million people. Attempts at waterless cleaning are labor intensive and tend to cause irreversible scratching of the surfaces, which also reduces efficiency. Now, a team of researchers at MIT has devised a way of automatically cleaning solar panels, or the mirrors of solar thermal plants, in a waterless, no-contact system that could significantly reduce the dust problem, they say.

    The new system uses electrostatic repulsion to cause dust particles to detach and virtually leap off the panel’s surface, without the need for water or brushes. To activate the system, a simple electrode passes just above the solar panel’s surface, imparting an electrical charge to the dust particles, which are then repelled by a charge applied to the panel itself. The system can be operated automatically using a simple electric motor and guide rails along the side of the panel. The research is described today in the journal Science Advances, in a paper by MIT graduate student Sreedath Panat and professor of mechanical engineering Kripa Varanasi.


    Despite concerted efforts worldwide to develop ever more efficient solar panels, Varanasi says, “a mundane problem like dust can actually put a serious dent in the whole thing.” Lab tests conducted by Panat and Varanasi showed that the drop-off in energy output from the panels happens steeply at the very beginning of the process of dust accumulation and can easily reach a 30 percent reduction after just one month without cleaning. Even a 1 percent reduction in power, for a 150-megawatt solar installation, they calculated, could result in a $200,000 loss in annual revenue. The researchers say that globally, a 3 to 4 percent reduction in power output from solar plants would amount to a loss of between $3.3 billion and $5.5 billion.
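    As a rough check on that revenue arithmetic, here is a back-of-the-envelope sketch. The 25 percent capacity factor and $60-per-megawatt-hour electricity price are illustrative assumptions, not figures from the study.

    ```python
    # Back-of-the-envelope check of the revenue-loss figures quoted above.
    # The capacity factor and electricity price are illustrative assumptions,
    # not numbers from the MIT study.
    def annual_revenue_loss(capacity_mw, loss_fraction,
                            capacity_factor=0.25, price_per_mwh=60.0):
        """Estimate annual revenue lost to a fractional drop in output."""
        annual_mwh = capacity_mw * 8760 * capacity_factor  # energy generated per year
        return annual_mwh * price_per_mwh * loss_fraction  # dollars lost per year

    print(annual_revenue_loss(150, 0.01))  # ~ $197,000 for a 1 percent loss on a 150 MW plant
    print(annual_revenue_loss(150, 0.30))  # ~ $5.9 million if output drops 30 percent
    ```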

    “There is so much work going on in solar materials,” Varanasi says. “They’re pushing the boundaries, trying to gain a few percent here and there in improving the efficiency, and here you have something that can obliterate all of that right away.”

    Many of the largest solar power installations in the world, including ones in China, India, the U.A.E., and the U.S., are located in desert regions. The water used to clean these solar panels with pressurized water jets has to be trucked in from a distance, and it has to be very pure to avoid leaving behind deposits on the surfaces. Dry scrubbing is sometimes used but is less effective at cleaning the surfaces and can cause permanent scratching that also reduces light transmission.

    Water cleaning makes up about 10 percent of the operating costs of solar installations. The new system could potentially reduce these costs while improving the overall power output by allowing for more frequent automated cleanings, the researchers say.

    “The water footprint of the solar industry is mind boggling,” Varanasi says, and it will be increasing as these installations continue to expand worldwide. “So, the industry has to be very careful and thoughtful about how to make this a sustainable solution.”

    Other groups have tried to develop electrostatic-based solutions, but these have relied on a layer called an electrodynamic screen, which uses interdigitated electrodes. These screens can have defects that allow moisture in and cause them to fail, Varanasi says. While they might be useful in a place like Mars, he says, where moisture is not an issue, even in desert environments on Earth this can be a serious problem.

    The new system they developed requires only an electrode, which can be a simple metal bar, to pass over the panel, producing an electric field that imparts a charge to the dust particles as it goes. An opposite charge applied to a transparent conductive layer just a few nanometers thick, deposited on the glass covering of the solar panel, then repels the particles. By calculating the right voltage to apply, the researchers were able to find a voltage range sufficient to overcome the pull of gravity and adhesion forces and cause the dust to lift away.
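    The lift-off condition is, at its core, a force balance: the electrostatic force on a charged particle must exceed its weight plus the adhesion holding it to the glass. The sketch below illustrates that balance with made-up particle properties; it is an assumption-laden simplification, not the model from the Science Advances paper.

    ```python
    import math

    # Simplified force balance for electrostatic dust removal: the field must
    # satisfy q*E > weight + adhesion before a particle can leave the surface.
    # All numerical values here are illustrative assumptions.
    def minimum_field(diameter_m, density_kg_m3, charge_coulombs, adhesion_newtons):
        """Smallest applied field E (V/m) that can lift a charged dust particle."""
        volume = math.pi / 6 * diameter_m ** 3      # spherical particle volume, m^3
        weight = density_kg_m3 * volume * 9.81      # gravitational force on it, N
        return (weight + adhesion_newtons) / charge_coulombs

    # Example: a 10-micron mineral-dust grain with an assumed charge of 1e-15 C
    # and an assumed adhesion force of 1e-10 N.
    print(f"{minimum_field(10e-6, 2000.0, 1e-15, 1e-10):.2e} V/m")
    ```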

    Using specially prepared laboratory samples of dust with a range of particle sizes, experiments proved that the process works effectively on a laboratory-scale test installation, Panat says. The tests showed that humidity in the air provided a thin coating of water on the particles, which turned out to be crucial to making the effect work. “We performed experiments at varying humidities from 5 percent to 95 percent,” Panat says. “As long as the ambient humidity is greater than 30 percent, you can remove almost all of the particles from the surface, but as humidity decreases, it becomes harder.”

    Varanasi says that “the good news is that when you get to 30 percent humidity, most deserts actually fall in this regime.” And even those that are typically drier than that tend to have higher humidity in the early morning hours, leading to dew formation, so the cleaning could be timed accordingly.

    “Moreover, unlike some of the prior work on electrodynamic screens, which actually do not work at high or even moderate humidity, our system can work at humidity even as high as 95 percent, indefinitely,” Panat says.

    In practice, at scale, each solar panel could be fitted with railings on each side, with an electrode spanning across the panel. A small electric motor, perhaps using a tiny portion of the output from the panel itself, would drive a belt system to move the electrode from one end of the panel to the other, causing all the dust to fall away. The whole process could be automated or controlled remotely. Alternatively, thin strips of conductive transparent material could be permanently arranged above the panel, eliminating the need for moving parts.

    By eliminating the dependency on trucked-in water, by eliminating the buildup of dust that can contain corrosive compounds, and by lowering the overall operational costs, such systems have the potential to significantly improve the overall efficiency and reliability of solar installations, Varanasi says.

    The research was supported by Italian energy firm Eni S.p.A. through the MIT Energy Initiative.

  • Using nature’s structures in wooden buildings

    Concern about climate change has focused significant attention on the buildings sector, in particular on the extraction and processing of construction materials. The concrete and steel industries together are responsible for as much as 15 percent of global carbon dioxide emissions. In contrast, wood provides a natural form of carbon sequestration, so there’s a move to use timber instead. Indeed, some countries are calling for public buildings to be made at least partly from timber, and large-scale timber buildings have been appearing around the world.

    Observing those trends, Caitlin Mueller ’07, SM ’14, PhD ’14, an associate professor of architecture and of civil and environmental engineering in the Building Technology Program at MIT, sees an opportunity for further sustainability gains. As the timber industry seeks to produce wooden replacements for traditional concrete and steel elements, the focus is on harvesting the straight sections of trees. Irregular sections such as knots and forks are turned into pellets and burned, or ground up to make garden mulch, which will decompose within a few years; both approaches release the carbon trapped in the wood to the atmosphere.

    For the past four years, Mueller and her Digital Structures research group have been developing a strategy for “upcycling” those waste materials by using them in construction — not as cladding or finishes aimed at improving appearance, but as structural components. “The greatest value you can give to a material is to give it a load-bearing role in a structure,” she says. But when builders use virgin materials, those structural components are the most emissions-intensive parts of buildings due to their large volume of high-strength materials. Using upcycled materials in place of those high-carbon systems is therefore especially impactful in reducing emissions.

    Mueller and her team focus on tree forks — that is, spots where the trunk or branch of a tree divides in two, forming a Y-shaped piece. In architectural drawings, there are many similar Y-shaped nodes where straight elements come together. In such cases, those units must be strong enough to support critical loads.

    “Tree forks are naturally engineered structural connections that work as cantilevers in trees, which means that they have the potential to transfer force very efficiently thanks to their internal fiber structure,” says Mueller. “If you take a tree fork and slice it down the middle, you see an unbelievable network of fibers that are intertwining to create these often three-dimensional load transfer points in a tree. We’re starting to do the same thing using 3D printing, but we’re nowhere near what nature does in terms of complex fiber orientation and geometry.”

    She and her team have developed a five-step “design-to-fabrication workflow” that combines natural structures such as tree forks with the digital and computational tools now used in architectural design. While there’s long been a “craft” movement to use natural wood in railings and decorative features, the use of computational tools makes it possible to use wood in structural roles — without excessive cutting, which is costly and may compromise the natural geometry and internal grain structure of the wood.

    Given the wide use of digital tools by today’s architects, Mueller believes that her approach is “at least potentially scalable and potentially achievable within our industrialized materials processing systems.” In addition, by combining tree forks with digital design tools, the novel approach can also support the trend among architects to explore new forms. “Many iconic buildings built in the past two decades have unexpected shapes,” says Mueller. “Tree branches have a very specific geometry that sometimes lends itself to an irregular or nonstandard architectural form — driven not by some arbitrary algorithm but by the material itself.”

    Step 0: Find a source, set goals

    Before starting their design-to-fabrication process, the researchers needed to locate a source of tree forks. Mueller found help in the Urban Forestry Division of the City of Somerville, Massachusetts, which maintains a digital inventory of more than 2,000 street trees — including more than 20 species — and records information about the location, approximate trunk diameter, and condition of each tree.

    With permission from the forestry division, the team was on hand in 2018 when a large group of trees was cut down near the site of the new Somerville High School. Among the heavy equipment on site was a chipper, poised to turn all the waste wood into mulch. Instead, the workers obligingly put the waste wood into the researchers’ truck to be brought to MIT.

    In their project, the MIT team sought not only to upcycle that waste material but also to use it to create a structure that would be valued by the public. “Where I live, the city has had to take down a lot of trees due to damage from an invasive species of beetle,” Mueller explains. “People get really upset — understandably. Trees are an important part of the urban fabric, providing shade and beauty.” She and her team hoped to reduce that animosity by “reinstalling the removed trees in the form of a new functional structure that would recreate the atmosphere and spatial experience previously provided by the felled trees.”

    With their source and goals identified, the researchers were ready to demonstrate the five steps in their design-to-fabrication workflow for making spatial structures using an inventory of tree forks.

    Step 1: Create a digital material library

    The first task was to turn their collection of tree forks into a digital library. They began by cutting off excess material to produce isolated tree forks. They then created a 3D scan of each fork. Mueller notes that as a result of recent progress in photogrammetry (measuring objects using photographs) and 3D scanning, they could create high-resolution digital representations of the individual tree forks with relatively inexpensive equipment, even using apps that run on a typical smartphone.

    In the digital library, each fork is represented by a “skeletonized” version showing three straight bars coming together at a point. The relative geometry and orientation of the branches are of particular interest because they determine the internal fiber orientation that gives the component its strength.

    Step 2: Find the best match between the initial design and the material library

    Like a tree, a typical architectural design is filled with Y-shaped nodes where three straight elements meet up to support a critical load. The goal was therefore to match the tree forks in the material library with the nodes in a sample architectural design.

    First, the researchers developed a “mismatch metric” for quantifying how well the geometries of a particular tree fork aligned with a given design node. “We’re trying to line up the straight elements in the structure with where the branches originally were in the tree,” explains Mueller. “That gives us the optimal orientation for load transfer and maximizes use of the inherent strength of the wood fiber.” The poorer the alignment, the higher the mismatch metric.

    The goal was to get the best overall distribution of all the tree forks among the nodes in the target design. Therefore, the researchers needed to try different fork-to-node distributions and, for each distribution, add up the individual fork-to-node mismatch errors to generate an overall, or global, matching score. The distribution with the best matching score would produce the most structurally efficient use of the total tree fork inventory.

    Since performing that process manually would take far too long to be practical, they turned to the “Hungarian algorithm,” a technique developed in 1955 for solving such problems. “The brilliance of the algorithm is solving that [matching] problem very quickly,” Mueller says. She notes that it’s a very general-use algorithm. “It’s used for things like marriage match-making. It can be used any time you have two collections of things that you’re trying to find unique matches between. So, we definitely didn’t invent the algorithm, but we were the first to identify that it could be used for this problem.”
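    As a minimal sketch of that matching step, the snippet below applies SciPy's Hungarian-method solver to a small, made-up mismatch matrix; in the actual workflow the entries would come from the fork and node geometries in the digital library.

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment  # Hungarian-method solver

    # mismatch[i, j] quantifies how poorly fork i aligns with design node j
    # (lower is better). These values are made up for illustration.
    mismatch = np.array([
        [0.8, 0.2, 0.5],   # fork 0 versus nodes 0..2
        [0.3, 0.9, 0.4],   # fork 1
        [0.6, 0.7, 0.1],   # fork 2
        [0.2, 0.5, 0.9],   # fork 3 (surplus forks simply go unused)
    ])

    fork_idx, node_idx = linear_sum_assignment(mismatch)  # optimal fork-to-node pairing
    for f, n in zip(fork_idx, node_idx):
        print(f"fork {f} -> node {n} (mismatch {mismatch[f, n]:.1f})")
    print(f"global matching score: {mismatch[fork_idx, node_idx].sum():.1f}")
    ```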

    The researchers performed repeated tests to show possible distributions of the tree forks in their inventory and found that the matching score improved as the number of forks available in the material library increased — up to a point. In general, the researchers concluded that the mismatch score was lowest, and thus best, when there were about three times as many forks in the material library as there were nodes in the target design.

    Step 3: Balance designer intention with structural performance

    The next step in the process was to incorporate the intention or preference of the designer. To permit that flexibility, each design includes a limited number of critical parameters, such as bar length and bending strain. Using those parameters, the designer can manually change the overall shape, or geometry, of the design or can use an algorithm that automatically changes, or “morphs,” the geometry. And every time the design geometry changes, the Hungarian algorithm recalculates the optimal fork-to-node matching.

    “Because the Hungarian algorithm is extremely fast, all the morphing and the design updating can be really fluid,” notes Mueller. In addition, any change to a new geometry is followed by a structural analysis that checks the deflections, strain energy, and other performance measures of the structure. On occasion, the automatically generated design that yields the best matching score may deviate far from the designer’s initial intention. In such cases, an alternative solution can be found that satisfactorily balances the design intention with a low matching score.

    Step 4: Automatically generate the machine code for fast cutting

    When the structural geometry and distribution of tree forks have been finalized, it’s time to think about actually building the structure. To simplify assembly and maintenance, the researchers prepare the tree forks by recutting their end faces to better match adjoining straight timbers and cutting off any remaining bark to reduce susceptibility to rot and fire.

    To guide that process, they developed a custom algorithm that automatically computes the cuts needed to make a given tree fork fit into its assigned node and to strip off the bark. The goal is to remove as little material as possible but also to avoid a complex, time-consuming machining process. “If we make too few cuts, we’ll cut off too much of the critical structural material. But we don’t want to make a million tiny cuts because it will take forever,” Mueller explains.

    The team uses facilities at the Autodesk Boston Technology Center Build Space, where the robots are far larger than any at MIT and the processing is all automated. To prepare each tree fork, they mount it on a robotic arm that pushes the joint through a traditional band saw in different orientations, guided by computer-generated instructions. The robot also mills all the holes for the structural connections. “That’s helpful because it ensures that everything is aligned the way you expect it to be,” says Mueller.

    Step 5: Assemble the available forks and linear elements to build the structure

    The final step is to assemble the structure. The tree-fork-based joints are all irregular, and combining them with the precut, straight wooden elements could be difficult. However, they’re all labeled. “All the information for the geometry is embedded in the joint, so the assembly process is really low-tech,” says Mueller. “It’s like a child’s toy set. You just follow the instructions on the joints to put all the pieces together.”

    They installed their final structure temporarily on the MIT campus, but Mueller notes that it was only a portion of the structure they plan to eventually build. “It had 12 nodes that we designed and fabricated using our process,” she says, adding that the team’s work was “a little interrupted by the pandemic.” As activity on campus resumes, the researchers plan to finish designing and building the complete structure, which will include about 40 nodes and will be installed as an outdoor pavilion on the site of the felled trees in Somerville.

    In addition, they will continue their research. Plans include working with larger material libraries, some with multibranch forks, and replacing their 3D-scanning technique with computerized tomography scanning technologies that can automatically generate a detailed geometric representation of a tree fork, including its precise fiber orientation and density. And in a parallel project, they’ve been exploring using their process with other sources of materials, with one case study focusing on using material from a demolished wood-framed house to construct more than a dozen geodesic domes.

    To Mueller, the work to date already provides new guidance for the architectural design process. With digital tools, it has become easy for architects to analyze the embodied carbon or future energy use of a design option. “Now we have a new metric of performance: How well am I using available resources?” she says. “With the Hungarian algorithm, we can compute that metric basically in real time, so we can work rapidly and creatively with that as another input to the design process.”

    This research was supported by MIT’s School of Architecture and Planning via the HASS Award.

    This article appears in the Autumn 2021 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • New maps show airplane contrails over the U.S. dropped steeply in 2020

    As Covid-19’s initial wave crested around the world, travel restrictions and a drop in passengers led to a record number of grounded flights in 2020. The air travel reduction cleared the skies of not just jets but also the fluffy white contrails they produce high in the atmosphere.

    MIT engineers have mapped the contrails that were generated over the United States in 2020, and compared the results to prepandemic years. They found that on any given day in 2018, and again in 2019, contrails covered a total area equal to Massachusetts and Connecticut combined. In 2020, this contrail coverage shrank by about 20 percent, mirroring a similar drop in U.S. flights.  

    While 2020’s contrail dip may not be surprising, the findings are proof that the team’s mapping technique works. Their study marks the first time researchers have captured the fine and ephemeral details of contrails over a large continental scale.

    Now, the researchers are applying the technique to predict where in the atmosphere contrails are likely to form. The cloud-like formations are known to play a significant role in aviation-related global warming. The team is working with major airlines to forecast regions in the atmosphere where contrails may form, and to reroute planes around these regions to minimize contrail production.

    “This kind of technology can help divert planes to prevent contrails, in real time,” says Steven Barrett, professor and associate head of MIT’s Department of Aeronautics and Astronautics. “There’s an unusual opportunity to halve aviation’s climate impact by eliminating most of the contrails produced today.”

    Barrett and his colleagues have published their results today in the journal Environmental Research Letters. His co-authors at MIT include graduate student Vincent Meijer, former graduate student Luke Kulik, research scientists Sebastian Eastham, Florian Allroggen, and Raymond Speth, and LIDS Director and professor Sertac Karaman.

    Trail training

    About half of the aviation industry’s contribution to global warming comes directly from planes’ carbon dioxide emissions. The other half is thought to be a consequence of their contrails. The signature white tails are produced when a plane’s hot, humid exhaust mixes with cool humid air high in the atmosphere. Emitted in thin lines, contrails quickly spread out and can act as blankets that trap the Earth’s outgoing heat.

    While a single contrail may not have much of a warming effect, taken together contrails have a significant impact. But the estimates of this effect are uncertain and based on computer modeling as well as limited satellite data. What’s more, traditional computer vision algorithms that analyze contrail data have a hard time discerning the wispy tails from natural clouds.

    To precisely pick out and track contrails over a large scale, the MIT team looked to images taken by NASA’s GOES-16, a geostationary satellite that hovers over the same swath of the Earth, including the United States, taking continuous, high-resolution images.

    The team first obtained about 100 images taken by the satellite, and trained a set of people to interpret remote sensing data and label each image’s pixel as either part of a contrail or not. They used this labeled dataset to train a computer-vision algorithm to discern a contrail from a cloud or other image feature.

    The researchers then ran the algorithm on about 100,000 satellite images, amounting to nearly 6 trillion pixels, each pixel representing an area of about 2 square kilometers. The images covered the contiguous U.S., along with parts of Canada and Mexico, and were taken about every 15 minutes, between Jan. 1, 2018, and Dec. 31, 2020.

    The algorithm automatically classified each pixel as either a contrail or not a contrail, and generated daily maps of contrails over the United States. These maps mirrored the major flight paths of most U.S. airlines, with some notable differences. For instance, contrail “holes” appeared around major airports, which reflects the fact that planes landing and taking off around airports are generally not high enough in the atmosphere for contrails to form.

    “The algorithm knows nothing about where planes fly, and yet when processing the satellite imagery, it resulted in recognizable flight routes,” Barrett says. “That’s one piece of evidence that says this method really does capture contrails over a large scale.”

    Cloudy patterns

    Based on the algorithm’s maps, the researchers calculated the total area covered each day by contrails in the U.S. On an average day in 2018 and in 2019, U.S. contrails took up about 43,000 square kilometers. This coverage dropped by 20 percent in March of 2020 as the pandemic set in. From then on, contrails slowly reappeared as air travel resumed through the year.
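    As a sketch of that coverage calculation, the snippet below assumes the detection algorithm produces one boolean contrail mask per satellite image and uses the roughly 2-square-kilometer pixel area mentioned above. Averaging the instantaneous coverage over a day's images is one simple convention; the paper's exact aggregation may differ.

    ```python
    import numpy as np

    PIXEL_AREA_KM2 = 2.0  # approximate area represented by one GOES-16 pixel (from the article)

    def daily_contrail_area(masks_for_day):
        """Mean contrail-covered area (km^2) across one day's classified images."""
        per_image_km2 = [mask.sum() * PIXEL_AREA_KM2 for mask in masks_for_day]
        return float(np.mean(per_image_km2))

    # Illustrative use with random masks standing in for real classifier output
    # (about 96 images per day at a 15-minute cadence).
    fake_masks = [np.random.rand(500, 500) > 0.999 for _ in range(96)]
    print(f"{daily_contrail_area(fake_masks):,.0f} km^2 of contrail coverage")
    ```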

    The team also observed daily and seasonal patterns. In general, contrails appeared to peak in the morning and decline in the afternoon. This may be a training artifact: As natural cirrus clouds are more likely to form in the afternoon, the algorithm may have trouble discerning contrails amid the clouds later in the day. But it might also be an important indication about when contrails form most. Contrails also peaked in late winter and early spring, when more of the air is naturally colder and more conducive for contrail formation.

    The team has now adapted the technique to predict where contrails are likely to form in real time. Avoiding these regions, Barrett says, could take a significant, almost immediate chunk out of aviation’s global warming contribution.  

    “Most measures to make aviation sustainable take a long time,” Barrett says. “(Contrail avoidance) could be accomplished in a few years, because it requires small changes to how aircraft are flown, with existing airplanes and observational technology. It’s a near-term way of reducing aviation’s warming by about half.”

    The team is now working toward this objective of large-scale contrail avoidance using real-time satellite observations.

    This research was supported in part by NASA and the MIT Environmental Solutions Initiative.

  • Q&A: Climate Grand Challenges finalists on building equity and fairness into climate solutions

    Note: This is the first in a four-part interview series that will highlight the work of the Climate Grand Challenges finalists, ahead of the April announcement of several multiyear, flagship projects.

    The finalists in MIT’s first-ever Climate Grand Challenges competition each received $100,000 to develop bold, interdisciplinary research and innovation plans designed to attack some of the world’s most difficult and unresolved climate problems. The 27 teams are addressing four Grand Challenge problem areas: building equity and fairness into climate solutions; decarbonizing complex industries and processes; removing, managing, and storing greenhouse gases; and using data and science for improved climate risk forecasting.  

    In a conversation prepared for MIT News, faculty from three of the teams in the competition’s “Building equity and fairness into climate solutions” category share their thoughts on the need for inclusive solutions that prioritize disadvantaged and vulnerable populations, and discuss how they are working to accelerate their research to achieve the greatest impact. The following responses have been edited for length and clarity.

    The Equitable Resilience Framework

    Any effort to solve the most complex global climate problems must recognize the unequal burdens borne by different groups, communities, and societies — and should be equitable as well as effective. Janelle Knox-Hayes, associate professor in the Department of Urban Studies and Planning, leads a team that is developing processes and practices for equitable resilience, starting with a local pilot project in Boston over the next five years and extending to other cities and regions of the country. The Equitable Resilience Framework (ERF) is designed to create long-term economic, social, and environmental transformations by increasing the capacity of interconnected systems and communities to respond to a broad range of climate-related events. 

    Q: What is the problem you are trying to solve?

    A: Inequity is one of the severe impacts of climate change and resonates in both mitigation and adaptation efforts. It is important for climate strategies to address challenges of inequity and, if possible, to design strategies that enhance justice, equity, and inclusion, while also enhancing the efficacy of mitigation and adaptation efforts. Our framework offers a blueprint for how communities, cities, and regions can begin to undertake this work.

    Q: What are the most significant barriers that have impacted progress to date?

    A: There is considerable inertia in policymaking. Climate change requires a rethinking not only of directives but also of the pathways and techniques of policymaking. This is an obstacle and part of the reason our project was designed to scale up from local pilot projects. Another consideration is that the private sector can be more adaptive and nimble in its adoption of creative techniques. Working with the MIT Climate and Sustainability Consortium, there may be ways in which we could modify the ERF to help companies address similar internal adaptation and resilience challenges.

    Protecting and enhancing natural carbon sinks

    Deforestation and forest degradation of strategic ecosystems in the Amazon, Central Africa, and Southeast Asia continue to reduce capacity to capture and store carbon through natural systems and threaten even the most aggressive decarbonization plans. John Fernandez, professor in the Department of Architecture and director of the Environmental Solutions Initiative, reflects on his work with Daniela Rus, professor of electrical engineering and computer science and director of the Computer Science and Artificial Intelligence Laboratory, and Joann de Zegher, assistant professor of Operations Management at MIT Sloan, to protect tropical forests by deploying a three-part solution that integrates targeted technology breakthroughs, deep community engagement, and innovative bioeconomic opportunities. 

    Q: Why is the problem you seek to address a “grand challenge”?

    A: We are trying to bring the latest technology to monitoring, assessing, and protecting tropical forests, as well as other carbon-rich and highly biodiverse ecosystems. This is a grand challenge because natural sinks around the world are threatening to release enormous quantities of stored carbon that could lead to runaway global warming. When combined with deep community engagement, particularly with indigenous and afro-descendant communities, this integrated approach promises to deliver substantially enhanced efficacy in conservation coupled to robust and sustainable local development.

    Q: What is known about this problem and what questions remain unanswered?

    A: Satellites, drones, and other technologies are acquiring more data about natural carbon sinks than ever before. The problem is well described in certain locations, such as the eastern Amazon, which has shifted from a net carbon sink to a net carbon emitter. It is also well known that indigenous peoples are the most effective stewards of the ecosystems that store the greatest amounts of carbon. One of the key questions that remains to be answered is what bioeconomy opportunities are inherent in the natural wealth of tropical forests and other important ecosystems, opportunities that will be essential to their sustained protection and conservation.

    Reducing group-based disparities in climate adaptation

    Race, ethnicity, caste, religion, and nationality are often linked to vulnerability to the adverse effects of climate change, and if left unchecked, threaten to exacerbate long-standing inequities. A team led by Evan Lieberman, professor of political science and director of the MIT Global Diversity Lab and MIT International Science and Technology Initiatives, Danielle Wood, assistant professor in the Program in Media Arts and Sciences and the Department of Aeronautics and Astronautics, and Siqi Zheng, professor of urban and real estate sustainability in the Center for Real Estate and the Department of Urban Studies and Planning, is seeking to reduce ethnic and racial group-based disparities in the capacity of urban communities to adapt to the changing climate. Working with partners in nine coastal cities, they will measure the distribution of climate-related burdens and resiliency through satellites, a custom mobile app, and natural language processing of social media, to help design and test communication campaigns that provide accurate information about risks and remediation to impacted groups.

    Q: How has this problem evolved?

    A: Group-based disparities continue to intensify within and across countries, owing in part to some randomness in the location of adverse climate events, as well as deep legacies of unequal human development. In turn, economically and politically privileged groups routinely hoard resources for adaptation. In a few cases — notably the United States, Brazil, and with respect to climate-related migrancy, in South Asia — there has been a great deal of research documenting the extent of such disparities. However, we lack common metrics, and for the most part, such disparities are only understood where key actors have politicized the underlying problems. In much of the world, relatively vulnerable and excluded groups may not even be fully aware of the nature of the challenges they face or the resources they require.

    Q: Who will benefit most from your research? 

    A: The greatest beneficiaries will be members of those vulnerable groups who lack the resources and infrastructure to withstand adverse climate shocks. We believe that it will be important to develop solutions such that relatively privileged groups do not perceive them as punitive or zero-sum, but rather as long-term solutions for collective benefit that are both sound and just.

  • Can the world meet global climate targets without coordinated global action?

    Like many of its predecessors, the 2021 United Nations Climate Change Conference (COP26) in Glasgow, Scotland, concluded with bold promises on international climate action aimed at keeping global warming well below 2 degrees Celsius, but few concrete plans to ensure that those promises will be kept. While it’s not too late for the Paris Agreement’s nearly 200 signatory nations to take concerted action to cap global warming at 2 C — if not 1.5 C — there is simply no guarantee that they will do so. If they fail, how much warming is the Earth likely to see in the 21st century and beyond?

    A new study by researchers at the MIT Joint Program on the Science and Policy of Global Change and the Shell Scenarios Team projects that without a globally coordinated mitigation effort to reduce greenhouse gas emissions, the planet’s average surface temperature will reach 2.8 C, much higher than the “well below 2 C” level to which the Paris Agreement aspires, but a lot lower than what many widely used “business-as-usual” scenarios project.  

    Recognizing the limitations of such scenarios, which generally assume that historical trends in energy technology choices and climate policy inaction will persist for decades to come, the researchers have designed a “Growing Pressures” scenario that accounts for mounting social, technological, business, and political pressures that are driving a transition away from fossil-fuel use and toward a low-carbon future. Such pressures have already begun to expand low-carbon technology and policy options, which, in turn, have escalated demand to utilize those options — a trend that’s expected to self-reinforce. Under this scenario, an array of future actions and policies cause renewable energy and energy storage costs to decline; fossil fuels to be phased out; electrification to proliferate; and emissions from agriculture and industry to be sharply reduced.

    Incorporating these growing pressures in the MIT Joint Program’s integrated model of Earth and human systems, the study’s co-authors project future energy use, greenhouse gas emissions, and global average surface temperatures in a world that fails to implement coordinated, global climate mitigation policies, and instead pursues piecemeal actions at mostly local and national levels.

    “Few, if any, previous studies explore scenarios of how piecemeal climate policies might plausibly unfold into the future and impact global temperature,” says MIT Joint Program research scientist Jennifer Morris, the study’s lead author. “We offer such a scenario, considering a future in which the increasingly visible impacts of climate change drive growing pressure from voters, shareholders, consumers, and investors, which in turn drives piecemeal action by governments and businesses that steer investments away from fossil fuels and toward low-carbon alternatives.”

    In the study’s central case (representing the mid-range climate response to greenhouse gas emissions), fossil fuels persist in the global energy mix through 2060 and then slowly decline toward zero by 2130; global carbon dioxide emissions reach near-zero levels by 2130 (total greenhouse gas emissions decline to near-zero by 2150); and global surface temperatures stabilize at 2.8 C by 2150, 2.5 C lower than a widely used “business-as-usual” projection. The results appear in the journal Environmental Economics and Policy Studies.

    Such a transition could bring the global energy system to near-zero emissions, but more aggressive climate action would be needed to keep global temperatures well below 2 C in alignment with the Paris Agreement.

    “While we fully support the need to decarbonize as fast as possible, it is critical to assess realistic alternative scenarios of world development,” says Joint Program Deputy Director Sergey Paltsev, a co-author of the study. “We investigate plausible actions that could bring society closer to the long-term goals of the Paris Agreement. To actually meet those goals will require an accelerated transition away from fossil energy through a combination of R&D, technology deployment, infrastructure development, policy incentives, and business practices.”

    The study was funded by government, foundation, and industrial sponsors of the MIT Joint Program, including Shell International Ltd.

  • New power sources

    In the mid-1990s, a few energy activists in Massachusetts had a vision: What if citizens had choice about the energy they consumed? Instead of being force-fed electricity sources selected by a utility company, what if cities, towns, and groups of individuals could purchase power that was cleaner and cheaper?

    The small group of activists — including a journalist, the head of a small nonprofit, a local county official, and a legislative aide — drafted model legislation along these lines that reached the state Senate in 1995. The measure stalled out. In 1997, they tried again. Massachusetts legislators were busy passing a bill to reform the state power industry in other ways, and this time the activists got their low-profile policy idea included in it — as a provision so marginal it only got a brief mention in The Boston Globe’s coverage of the bill.

    Today, this idea, often known as Community Choice Aggregation (CCA), is used by roughly 36 million people in the U.S., or 11 percent of the population. Local residents, as a bloc, purchase energy with certain specifications attached, and over 1,800 communities have adopted CCA in six states, with others testing CCA pilot programs. From such modest beginnings, CCA has become a big deal.

    “It started small, then had a profound impact,” says David Hsu, an associate professor at MIT who studies energy policy issues. Indeed, the trajectory of CCA is so striking that Hsu has researched its origins, combing through a variety of archival sources and interviewing the principals. He has now written a journal article examining the lessons and implications of this episode.

    Hsu’s paper, “Straight out of Cape Cod: The origin of community choice aggregation and its spread to other states,” appears in advance online form in the journal Energy Research & Social Science, and in the April print edition of the publication.

    “I wanted to show people that a small idea could take off into something big,” Hsu says. “For me that’s a really hopeful democratic story, where people could do something without feeling they had to take on a whole giant system that wouldn’t immediately respond to only one person.”

    Local control

    Aggregating consumers to purchase energy was not a novelty in the 1990s. Companies within many industries have long joined forces to gain purchasing power for energy. And Rhode Island tried a form of CCA slightly earlier than Massachusetts did.

    However, it is the Massachusetts model that has been adopted widely: Cities or towns can require power purchases from, say, renewable sources, while individual citizens can opt out of those agreements. More state funding (for things like efficiency improvements) is redirected to cities and towns as well.

    In both ways, CCA policies provide more local control over energy delivery. They have been adopted in California, Illinois, New Jersey, New York, and Ohio. Meanwhile, Maryland, New Hampshire, and Virginia have recently passed similar legislation (also known as municipal or government aggregation, or community choice energy).

    For cities and towns, Hsu says, “Maybe you don’t own outright the whole energy system, but let’s take away one particular function of the utility, which is procurement.”

    That vision motivated a handful of Massachusetts activists and policy experts in the 1990s, including journalist Scott Ridley, who co-wrote a 1986 book, “Power Struggle,” with the University of Massachusetts historian Richard Rudolph and had spent years thinking about ways to reconfigure the energy system; Matt Patrick, chair of a local nonprofit focused on energy efficiency; Rob O’Leary, a local official in Barnstable County, on Cape Cod; and Paul Fenn, a staff aide to the state senator who chaired the legislature’s energy committee.

    “It started with these political activists,” Hsu says.

    Hsu’s research emphasizes several lessons to be learned from the fact the legislation first failed in 1995, before unexpectedly passing in 1997. Ridley remained an author and public figure; Patrick and O’Leary would each eventually be elected to the state legislature, but only after 2000; and Fenn had left his staff position by 1995 and worked with the group long-distance from California (where he became a long-term advocate about the issue). Thus, at the time CCA passed in 1997, none of its main advocates held an insider position in state politics. How did it succeed?

    Lessons of the legislation

    In the first place, Hsu believes, a legislative process resembles what the political theorist John Kingdon has called a “multiple streams framework,” in which “many elements of the policymaking process are separate, meandering, and uncertain.” Legislation isn’t entirely controlled by big donors or other interest groups, and “policy entrepreneurs” can find success in unpredictable windows of opportunity.

    “It’s the most true-to-life theory,” says Hsu.  

    Second, Hsu emphasizes, finding allies is crucial. In the case of CCA, that came about in a few ways. Many towns in Massachusetts have a town-level legislature known as Town Meeting; the activists got those bodies in about 20 towns to pass nonbinding resolutions in favor of community choice. O’Leary helped create a regional county commission in Barnstable County, while Patrick crafted an energy plan for it. High electricity rates were affecting all of Cape Cod at the time, so community choice also served as an economic benefit for Cape Cod’s working-class service-industry employees. The activists also found that adding an opt-out clause to the 1997 version appealed to legislators, who would support CCA if their constituents were not all bound to it.

    “You really have to stick with it, and you have to look for coalition partners,” Hsu says. “It’s fun to hear them [the activists] talk about going to Town Meetings, and how they tried to build grassroots support. If you look for allies, you can get things done. [I hope] the people can see [themselves] in other people’s activism even if they’re not exactly the same as you are.”

    By 1997, the CCA legislation had more geographic support, was understood as both an economic and environmental benefit for voters, and would not force membership upon anyone. The activists, while giving media interviews and holding conferences, had found additional traction in the principle of citizen choice.

    “It’s interesting to me how the rhetoric of [citizen] choice and the rhetoric of democracy proves to be effective,” Hsu says. “Legislators feel like they have to give everyone some choice. And it expresses a collective desire for a choice that the utilities take away by being monopolies.”

    He adds: “We need to set out principles that shape systems, rather than just taking the system as a given and trying to justify principles that are 150 years old.”

    One last element in CCA passage was good timing. The governor and legislature in Massachusetts were already seeking a “grand bargain” to restructure electricity delivery and loosen the grip of utilities; the CCA fit in as part of this larger reform movement. Still, CCA adoption has been gradual; about one-third of Massachusetts towns with CCA have only adopted it within the last five years.

    CCA’s growth does not mean it’s invulnerable to repeal or utility-funded opposition efforts — “In California there’s been pretty intense pushback,” Hsu notes. Still, Hsu concludes, the fact that a handful of activists could start a national energy-policy movement is a useful reminder that everyone’s actions can make a difference.

    “It wasn’t like they went charging through a barricade, they just found a way around it,” Hsu says. “I want my students to know you can organize and rethink the future. It takes some commitment and work over a long time.”

  • First-ever Climate Grand Challenges recognizes 27 finalists

    All-carbon buildings, climate-resilient crops, and new tools to improve the prediction of extreme weather events are just a few of the 27 bold, interdisciplinary research projects selected as finalists from a field of almost 100 proposals in the first MIT Climate Grand Challenges competition. Each of the finalist teams received $100,000 to develop a comprehensive research and innovation plan.

    A subset of the finalists will make up a portfolio of multiyear projects that will receive additional funding and other support to develop high-impact, science-based mitigation and adaptation solutions on an accelerated basis. These flagship projects, which will be announced later this spring, will augment the work of the many MIT units already pursuing climate-related research activities.

    “Climate change poses a suite of challenges of immense urgency, complexity and scale. At MIT, we are bringing our particular strengths to bear through our community — a rare concentration of ingenuity and determination, rooted in a vibrant innovation ecosystem,” President L. Rafael Reif says. “Through MIT’s Climate Grand Challenges, we are engaging hundreds of our brilliant faculty and researchers in the search for solutions with enormous potential for impact.”

    The Climate Grand Challenges launched in July 2020 with the goal of mobilizing the entire MIT research community around developing solutions to some of the most complex unsolved problems in emissions reduction, climate change adaptation and resilience, risk forecasting, carbon removal, and understanding the human impacts of climate change.

    An event in April will showcase the flagship projects, bringing together public and private sector partners with the MIT teams to begin assembling the necessary resources for developing, implementing, and scaling these solutions rapidly.

    A whole-of-MIT effort

    Part of a wide array of major climate programs outlined last year in “Fast Forward: MIT’s Climate Action Plan for the Decade,” the Climate Grand Challenges focuses on problems where progress depends on the application of forefront knowledge in the physical, life, and social sciences and the advancement of cutting-edge technologies.

    “We don’t have the luxury of time in responding to the intensifying climate crisis,” says Vice President for Research Maria Zuber, who oversees the implementation of MIT’s climate action plan. “The Climate Grand Challenges are about marshaling the wide and deep knowledge and methods of the MIT community around transformative research that can help accelerate our collective response to climate change.”

    If successful, the solutions will have tangible effects, changing the way people live and work. Examples of these new approaches range from developing cost-competitive long-term energy-storage systems to using drone technologies and artificial intelligence to study the role of the deep ocean in the climate crisis. Many projects also aim to deepen the humanistic understanding of these phenomena, recognizing that technological advances alone will not address the widespread impacts of climate change and that a comparable behavioral and cultural shift is needed to stave off future threats.

    “To achieve net-zero emissions later this century we must deploy the tools and technologies we already have,” says Richard Lester, associate provost for international activities. “But we’re still far from having everything needed to get there in ways that are equitable and affordable. Nor do we have the solutions in hand that will allow communities — especially the most vulnerable ones — to adapt to the disruptions that will occur even if the world does get to net-zero. Climate Grand Challenges is creating a new opportunity for the MIT research community to attack some of these hard, unsolved problems, and to engage with partners in industry, government, and the nonprofit sector to accelerate the whole cycle of activities needed to implement solutions at scale.” 

    Selecting the finalist projects

    A 24-person faculty committee convened by Lester and Zuber, with members from all five of MIT’s schools and the MIT Schwarzman College of Computing, led the planning and initial call for ideas. A smaller group of committee members was charged with evaluating nearly 100 letters of interest, representing 90 percent of MIT departments and involving almost 400 MIT faculty members and senior researchers as well as colleagues from other research institutions.

    “Effectively confronting the climate emergency requires risk taking and sustained investment over a period of many decades,” says Anantha Chandrakasan, dean of the School of Engineering. “We have a responsibility to use our incredible resources and expertise to tackle some of the most challenging problems in climate mitigation and adaptation, and the opportunity to make major advances globally.”

    Lester and Zuber charged a second faculty committee with organizing a rigorous and thorough evaluation of the plans developed by the 27 finalist teams. Drawing on an extensive review process involving international panels of prominent experts, MIT will announce a small group of flagship Grand Challenge projects in April. 

    Each of the 27 finalist teams is addressing one of four broad Grand Challenge problems:

    Building equity and fairness into climate solutions

    Policy innovation and experimentation for effective and equitable climate solutions, led by Abhijit Banerjee, Iqbal Dhaliwal, and Claire Walsh
    Protecting and enhancing natural carbon sinks – Natural Climate and Community Solutions (NCCS), led by John Fernandez, Daniela Rus, and Joann de Zegher
    Reducing group-based disparities in climate adaptation, led by Evan Lieberman, Danielle Wood, and Siqi Zheng
    Reinventing climate change adaptation – The Climate Resilience Early Warning System (CREWSnet), led by John Aldridge and Elfatih Eltahir
    The Deep Listening Project: Communication infrastructure for collaborative adaptation, led by Eric Gordon, Yihyun Lim, and James Paradis
    The Equitable Resilience Framework, led by Janelle Knox-Hayes

    Decarbonizing complex industries and processes

    Carbon >Building, led by Mark Goulthorpe
    Center for Electrification and Decarbonization of Industry, led by Yet-Ming Chiang and Bilge Yildiz
    Decarbonizing and strengthening the global energy infrastructure using nuclear batteries, led by Jacopo Buongiorno
    Emissions reduction through innovation in the textile industry, led by Yuly Fuentes-Medel and Greg Rutledge
    Rapid decarbonization of freight mobility, led by Yossi Sheffi and Matthias Winkenbach
    Revolutionizing agriculture with low-emissions, resilient crops, led by Christopher Voigt
    Solar fuels as a vector for climate change mitigation, led by Yuriy Román-Leshkov and Yogesh Surendranath
    The MIT Low-Carbon Co-Design Institute, led by Audun Botterud, Dharik Mallapragada, and Robert Stoner
    Tough to Decarbonize Transportation, led by Steven Barrett and William Green

    Removing, managing, and storing greenhouse gases

    Demonstrating safe, globally distributed geological CO2 storage at scale, led by Bradford Hager, Howard Herzog, and Ruben Juanes
    Deploying versatile carbon capture technologies and storage at scale, led by Betar Gallant, Bradford Hager, and T. Alan Hatton
    Directed Evolution of Biological Carbon Fixation Working Group at MIT (DEBC-MIT), led by Edward Boyden and Matthew Shoulders
    Managing sources and sinks of carbon in terrestrial and coastal ecosystems, led by Charles Harvey, Tami Lieberman, and Heidi Nepf
    Strategies to Reduce Atmospheric Methane, led by Desiree Plata
    The Advanced Carbon Mineralization Initiative, led by Edward Boyden, Matěj Peč, and Yogesh Surendranath

    Using data and science to forecast climate-related risk

    Bringing computation to the climate challenge, led by Noelle Eckley Selin and Raffaele Ferrari
    Ocean vital signs, led by Christopher Hill and Ryan Woosley
    Preparing for a new world of weather and climate extremes, led by Kerry Emanuel, Miho Mazereeuw, and Paul O’Gorman
    Quantifying and managing the risks of sea-level rise, led by Brent Minchew
    Stratospheric Airborne Climate Observatory System to initiate a climate risk forecasting revolution, led by R. John Hansman and Brent Minchew
    The future of coasts – Changing flood risk for coastal communities in the developing world, led by Dara Entekhabi, Miho Mazereeuw, and Danielle Wood

    To learn more about the MIT Climate Grand Challenges, visit climategrandchallenges.mit.edu.