More stories

  • A better way to separate gases

    Industrial processes for chemical separations, including natural gas purification and the production of oxygen and nitrogen for medical or industrial uses, are collectively responsible for about 15 percent of the world’s energy use. They also contribute a corresponding amount to the world’s greenhouse gas emissions. Now, researchers at MIT and Stanford University have developed a new kind of membrane for carrying out these separation processes with roughly 1/10 the energy use and emissions.

    Using membranes for separation of chemicals is known to be much more efficient than processes such as distillation or absorption, but there has always been a tradeoff between permeability — how fast gases can penetrate through the material — and selectivity — the ability to let the desired molecules pass through while blocking all others. The new family of membrane materials, based on “hydrocarbon ladder” polymers, overcomes that tradeoff, providing both high permeability and extremely good selectivity, the researchers say.

    The findings are reported today in the journal Science, in a paper by Yan Xia, an associate professor of chemistry at Stanford; Zachary Smith, an assistant professor of chemical engineering at MIT; Ingo Pinnau, a professor at King Abdullah University of Science and Technology; and five others.

    Gas separation is an important and widespread industrial process whose uses include removing impurities and undesired compounds from natural gas or biogas, separating oxygen and nitrogen from air for medical and industrial purposes, separating carbon dioxide from other gases for carbon capture, and producing hydrogen for use as a carbon-free transportation fuel. The new ladder polymer membranes show promise for drastically improving the performance of such separation processes. For example, in separating carbon dioxide from methane, these new membranes have five times the selectivity and 100 times the permeability of existing cellulosic membranes used for that purpose. Similarly, they are 100 times more permeable and three times as selective for separating hydrogen gas from methane.
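
    To make those two figures of merit concrete, here is a minimal sketch (in Python) of how permeability and selectivity improvements combine. The baseline numbers are hypothetical placeholders for a cellulosic membrane, not values reported in the paper, and ideal selectivity is taken as the conventional ratio of pure-gas permeabilities.

        # Illustrative only: the baseline values are hypothetical placeholders,
        # not measurements from the Science paper.

        def ideal_selectivity(perm_a: float, perm_b: float) -> float:
            """Ideal selectivity of gas A over gas B: ratio of pure-gas permeabilities."""
            return perm_a / perm_b

        # Hypothetical cellulosic baseline (units: Barrer), chosen only for illustration.
        baseline_co2 = 5.0                 # CO2 permeability
        baseline_ch4 = 0.2                 # CH4 permeability
        baseline_sel = ideal_selectivity(baseline_co2, baseline_ch4)   # = 25

        # Improvement factors quoted in the article: ~100x permeability, ~5x selectivity.
        ladder_co2 = 100 * baseline_co2
        ladder_sel = 5 * baseline_sel
        ladder_ch4 = ladder_co2 / ladder_sel    # implied CH4 permeability

        print(f"baseline: P_CO2 = {baseline_co2} Barrer, CO2/CH4 selectivity = {baseline_sel:.0f}")
        print(f"ladder:   P_CO2 = {ladder_co2} Barrer, CO2/CH4 selectivity = {ladder_sel:.0f}")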

    The new type of polymers, developed over the last several years by the Xia lab, are referred to as ladder polymers because they are formed from double strands connected by rung-like bonds, and these linkages provide a high degree of rigidity and stability to the polymer material. These ladder polymers are synthesized via an efficient and selective chemistry the Xia lab developed called CANAL, an acronym for catalytic arene-norbornene annulation, which stitches readily available chemicals into ladder structures with hundreds or even thousands of rungs. The polymers are synthesized in a solution, where they form rigid and kinked ribbon-like strands that can easily be made into a thin sheet with sub-nanometer-scale pores by using industrially available polymer casting processes. The sizes of the resulting pores can be tuned through the choice of the specific hydrocarbon starting compounds. “This chemistry and choice of chemical building blocks allowed us to make very rigid ladder polymers with different configurations,” Xia says.

    To apply the CANAL polymers as selective membranes, the collaboration made use of Xia’s expertise in polymers and Smith’s specialization in membrane research. Holden Lai, a former Stanford doctoral student, carried out much of the development and exploration of how their structures impact gas permeation properties. “It took us eight years from developing the new chemistry to finding the right polymer structures that bestow the high separation performance,” Xia says.

    The Xia lab spent the past several years varying the structures of CANAL polymers to understand how structure affects their separation performance. Surprisingly, they found that adding additional kinks to their original CANAL polymers significantly improved the mechanical robustness of their membranes and boosted their selectivity for molecules of similar sizes, such as oxygen and nitrogen gases, without losing permeability of the more permeable gas. The selectivity actually improves as the material ages. The combination of high selectivity and high permeability makes these materials outperform all other polymer materials in many gas separations, the researchers say.

    Today, 15 percent of global energy use goes into chemical separations, and these separation processes are “often based on century-old technologies,” Smith says. “They work well, but they have an enormous carbon footprint and consume massive amounts of energy. The key challenge today is trying to replace these nonsustainable processes.” Most of these processes require high temperatures for boiling and reboiling solutions, and these often are the hardest processes to electrify, he adds.

    For the separation of oxygen and nitrogen from air, the two molecules differ in size by only about 0.18 angstroms (ten-billionths of a meter), he says. To make a filter capable of separating them efficiently “is incredibly difficult to do without decreasing throughput.” But the new ladder polymers, when manufactured into membranes, produce tiny pores that achieve high selectivity, he says. In some cases, 10 oxygen molecules permeate for every nitrogen molecule, despite the razor-thin sieve needed to achieve this type of size selectivity. These new membrane materials have “the highest combination of permeability and selectivity of all known polymeric materials for many applications,” Smith says.

    “Because CANAL polymers are strong and ductile, and because they are soluble in certain solvents, they could be scaled for industrial deployment within a few years,” he adds. An MIT spinoff company called Osmoses, led by authors of this study, recently won the MIT $100K entrepreneurship competition and has been partly funded by The Engine to commercialize the technology.

    There are a variety of potential applications for these materials in the chemical processing industry, Smith says, including the separation of carbon dioxide from other gas mixtures as a form of emissions reduction. Another possibility is the purification of biogas fuel made from agricultural waste products in order to provide carbon-free transportation fuel. Hydrogen separation for producing a fuel or a chemical feedstock could also be carried out efficiently, helping with the transition to a hydrogen-based economy.

    The close-knit team of researchers is continuing to refine the process to facilitate the development from laboratory to industrial scale, and to better understand the details on how the macromolecular structures and packing result in the ultrahigh selectivity. Smith says he expects this platform technology to play a role in multiple decarbonization pathways, starting with hydrogen separation and carbon capture, because there is such a pressing need for these technologies in order to transition to a carbon-free economy.

    “These are impressive new structures that have outstanding gas separation performance,” says Ryan Lively, an associate professor of chemical and biomolecular engineering at Georgia Tech, who was not involved in this work. “Importantly, this performance is improved during membrane aging and when the membranes are challenged with concentrated gas mixtures. … If they can scale these materials and fabricate membrane modules, there is significant potential practical impact.”

    The research team also included Jun Myun Ahn and Ashley Robinson at Stanford, Francesco Benedetti at MIT, now the chief executive officer at Osmoses, and Yingge Wang at King Abdullah University of Science and Technology in Saudi Arabia. The work was supported by the Stanford Natural Gas Initiative, the Sloan Research Fellowship, the U.S. Department of Energy Office of Basic Energy Sciences, and the National Science Foundation.

  • Finding her way to fusion

    “I catch myself startling people in public.”

    Zoe Fisher’s animated hands carry part of the conversation as she describes how her naturally loud and expressive laughter turned heads in the streets of Yerevan. She was there during MIT’s Independent Activities Period (IAP), helping teach nuclear science at the American University of Armenia, before returning to MIT to pursue fusion research at the Plasma Science and Fusion Center (PSFC).

    Startling people may simply be in Fisher’s DNA. She admits that when she first arrived at MIT, knowing nothing about nuclear science and engineering (NSE), she chose to join that department’s Freshman Pre-Orientation Program (FPOP) “for the shock value.” It was a choice unexpected by family, friends, and mostly herself. Now in her senior year, a 2021 recipient of NSE’s Irving Kaplan Award for academic achievements by a junior and entering a fifth-year master of science program in nuclear fusion, Fisher credits that original spontaneous impulse for introducing her to a subject she found so compelling that, after exploring multiple possibilities, she had to return to it.

    Fisher’s venture to Armenia, under the guidance of NSE associate professor Areg Danagoulian, is not the only time she has taught overseas with MISTI’s Global Teaching Labs, though it is the first time she has taught nuclear science, not to mention thermodynamics and materials science. During IAP 2020 she was a student teacher at a German high school, teaching life sciences, mathematics, and even English to grades five through 12. And after her first year she explored the transportation industry with a mechanical engineering internship in Tuscany, Italy.

    By the time she was ready to declare her NSE major she had sampled the alternatives both overseas and at home, taking advantage of MIT’s Undergraduate Research Opportunities Program (UROP). Drawn to fusion’s potential as an endless source of carbon-free energy on earth, she decided to try research at the PSFC, to see if the study was a good fit. 

    Much fusion research at MIT has favored heating hydrogen fuel inside a donut-shaped device called a tokamak, creating plasma that is hot and dense enough for fusion to occur. Because plasma will follow magnetic field lines, these devices are wrapped with magnets to keep the hot fuel from damaging the chamber walls.

    Fisher was assigned to SPARC, the PSFC’s new tokamak collaboration with MIT startup Commonwealth Fusion Systems (CFS), which uses a game-changing high-temperature superconducting (HTS) tape to create fusion magnets that minimize tokamak size and maximize performance. Working on a database reference book for SPARC materials, she was finding purpose even in the most repetitive tasks. “Which is how I knew I wanted to stay in fusion,” she laughs.

    Fisher’s latest UROP assignment takes her — literally — deeper into SPARC research. She works in a basement laboratory in building NW13 nicknamed “The Vault,” on a proton accelerator whose name conjures an underworld: DANTE. Supervised by PSFC Director Dennis Whyte and postdoc David Fischer, she is exploring the effects of radiation damage on the thin HTS tape that is key to SPARC’s design, and ultimately to the success of ARC, a prototype working fusion power plant.

    Because repeated bombardment with neutrons produced during the fusion process can diminish the superconducting properties of the HTS, it is crucial to test the tape repeatedly. Fisher assists in assembling and testing the experimental setups for irradiating the HTS samples. She recalls that her first project was installing a “shutter” that would allow researchers to control exactly how much radiation reached the tape without having to turn off the entire experiment.

    “You could just push the button — block the radiation — then unblock it. It sounds super simple, but it took many trials. Because first I needed the right size solenoid, and then I couldn’t find a piece of metal that was small enough, and then we needed cryogenic glue…. To this day the actual final piece is made partially of paper towels.”

    She shrugs and laughs. “It worked, and it was the cheapest option.”

    Fisher is always ready to find the fun in fusion. Referring to DANTE as “A really cool dude,” she admits, “He’s perhaps a bit fickle. I may or may not have broken him once.” During a recent IAP seminar, she joined other PSFC UROP students to discuss her research, and expanded on how a mishap can become a gateway to understanding.

    “The grad student I work with and I got to repair almost the entire internal circuit when we blew the fuse — which originally was a really bad thing. But it ended up being great because we figured out exactly how it works.”

    Fisher’s upbeat spirit makes her ideal not only for the challenges of fusion research, but for serving the MIT community. As a student representative for NSE’s Diversity, Equity and Inclusion Committee, she meets monthly with the goal of growing and supporting diversity within the department.

    “This opportunity is impactful because I get my voice, and the voices of my peers, taken seriously,” she says. “Currently, we are spending most of our efforts trying to identify and eliminate hurdles based on race, ethnicity, gender, and income that prevent people from pursuing — and applying to — NSE.”

    To break from the lab and committees, she explores the Charles River as part of MIT’s varsity sailing team, refusing to miss a sunset. She also volunteers as an FPOP mentor, seeking to provide incoming first-years with the kind of experience that will make them want to return to the topic, as she did.

    She looks forward to continuing her studies on the HTS tapes she has been irradiating, proposing to send a current pulse above the critical current through the tape, to possibly anneal any defects from radiation, which would make repairs on future fusion power plants much easier.

    Fisher credits her current path to her UROP mentors and their infectious enthusiasm for the carbon-free potential of fusion energy.

    “UROPing around the PSFC showed me what I wanted to do with my life,” she says. “Who doesn’t want to save the world?”

  • Q&A: Latifah Hamzah ’12 on creating sustainable solutions in Malaysia and beyond

    Latifah Hamzah ’12 graduated from MIT with a BS in mechanical engineering and minors in energy studies and music. During their time at MIT, Latifah participated in various student organizations, including the MIT Symphony Orchestra, Alpha Phi Omega, and the MIT Design/Build/Fly team. They also participated in the MIT Energy Initiative’s Undergraduate Research Opportunities Program (UROP) in the lab of former professor of mechanical engineering Alexander Mitsos, examining solar-powered thermal and electrical co-generation systems.

    After graduating from MIT, Latifah worked as a subsea engineer at Shell Global Solutions and co-founded Engineers Without Borders – Malaysia, a nonprofit organization dedicated to finding sustainable and empowering solutions that impact disadvantaged populations in Malaysia. More recently, Latifah received a master of science in mechanical engineering from Stanford University, where they are currently pursuing a PhD in environmental engineering with a focus on water and sanitation in developing contexts.

    Q: What inspired you to pursue energy studies as an undergraduate student at MIT?

    A: I grew up in Malaysia, where I was at once aware of both the extent to which the oil and gas industry is a cornerstone of the economy and the need to transition to a lower-carbon future. The Energy Studies minor was therefore enticing because it gave me a broader view of the energy space, including technical, policy, economic, and other viewpoints. This was my first exposure to how things worked in the real world — in that many different fields and perspectives had to be considered cohesively in order to have a successful, positive, and sustained impact. Although the minor was predominantly grounded in classroom learning, what I learned drove me to want to discover for myself how the forces of technology, society, and policy interacted in the field in my subsequent endeavors.

    In addition to the breadth that the minor added to my education, it also provided a structure and focus for me to build on my technical fundamentals. This included taking graduate-level classes and participating in UROPs that had specific energy foci. These were my first forays into questions that, while still predominantly technical, were more open-ended, with as-yet-unknown answers that would be substantially shaped by the framing of the question. This shift in mindset from typical undergraduate classes and problem sets took a bit of adjusting to, but it ultimately gave me the confidence and belief that I could succeed in a more challenging environment.

    Q: How did these experiences with energy help shape your path forward, particularly in regard to your work with Engineers Without Borders – Malaysia and now at Stanford?

    A: When I returned home after graduation, I was keen to harness my engineering education and explore in practice what the Energy Studies minor curriculum had taught by theory and case studies: to consider context, nuance, and interdisciplinary and myriad perspectives to craft successful, sustainable solutions. Recognizing that there were many underserved communities in Malaysia, I co-founded Engineers Without Borders – Malaysia with some friends with the aim of working with these communities to bring simple and sustainable engineering solutions. Many of these projects did have an energy focus. For example, we designed, sized, and installed micro-hydro or solar-power systems for various indigenous communities, allowing them to continue living on their ancestral lands while reducing energy poverty. Many other projects incorporated other aspects of engineering, such as hydrotherapy pools for folks with special needs, and water and sanitation systems for stateless maritime communities.

    Through my work with Engineers Without Borders – Malaysia, I found a passion for the broader aspects of sustainability, development, and equity. By spending time with communities in the field and sharing in their experiences, I recognized gaps in my skill set that I could work on to be more effective in advocating for social and environmental justice. In particular, I wanted to better understand communities and their perspectives while being mindful of my positionality. In addition, I wanted to address the more systemic aspects of the problems they faced, which I felt in many cases would only be possible through a combination of research, evidence, and policy. To this end, I embarked on a PhD in environmental engineering with a minor in anthropology and pursued a Community-Based Research Fellowship with Stanford’s Haas Center for Public Service. I have also participated in the Rising Environmental Leaders Program (RELP), which helps graduate students “hone their leadership and communications skills to maximize the impact of their research.” RELP afforded me the opportunity to interact with representatives from government, NGOs [nongovernmental organizations], think tanks, and industry, from which I gained a better understanding of the policy and adjacent ecosystems at both the federal and state levels.

    Q: What are you currently studying, and how does it relate to your past work and educational experiences?

    A: My dissertation investigates waste management and monitoring for improved planetary health in three distinct projects. Suboptimal waste management can lead to poor outcomes, including environmental contamination, overuse of resources, and lost economic and environmental opportunities in resource recovery. My first project showed that three combinations of factors resulted in ruminant feces contaminating the stored drinking water supplies of households in rural Kenya, and the results were published in the International Journal of Environmental Research and Public Health. Consequently, water and sanitation interventions must also consider animal waste for communities to have safe drinking water.

    My second project seeks to establish a circular economy in the chocolate industry with indigenous Malaysian farmers and the Chocolate Concierge, a tree-to-bar social enterprise. Having designed and optimized apparatuses and processes to create biochar from cacao husk waste, we are now examining its impact on the growth of cacao saplings and their root systems. The hope is that biochar will increase the resilience of saplings for when they are transplanted from the nursery to the farm. As biochar can improve soil health and yield while reducing fertilizer inputs and sequestering carbon, farmers can accrue substantial economic and environmental benefits, especially if they produce, use, and sell it themselves.

    My third project investigates the gap in sanitation coverage worldwide and potential ways of reducing it. Globally, 46 percent of the population lacks access to safely managed sanitation, while the majority of the 54 percent who do have access use on-site sanitation facilities such as septic tanks and latrines. Given that on-site, decentralized systems typically have a lower space and resource footprint, are cheaper to build and maintain, and can be designed to suit various contexts, they could represent the best chance of reaching the sanitation Sustainable Development Goal. To this end, I am part of a team of researchers at the Criddle Group at Stanford working to develop a household-scale system as part of the Gates Reinvent the Toilet Challenge, an initiative aimed at developing new sanitation and toilet technologies for developing contexts.

    The thread connecting these projects is a commitment to investigating both the technical and socio-anthropological dimensions of an issue to develop sustainable, reliable, and environmentally sensitive solutions, especially in low- and middle-income countries (LMICs). I believe that an interdisciplinary approach can provide a better understanding of the problem space, which will hopefully lead to effective potential solutions that can have a greater community impact.

    Q: What do you plan to do once you obtain your PhD?

    A: I hope to continue working in the spheres of water and sanitation and/or sustainability post-PhD. It is a fascinating moment to be in this space as a person of color from an LMIC, especially as ideas such as community-based research and decolonizing fields and institutions are becoming more widespread and acknowledged. Even during my time at Stanford, I have noticed some shifts in the discourse, although we still have a long way to go to achieve substantive and lasting change. Folks like me are underrepresented in forums where the priorities, policies, and financing of aid and development are discussed at the international or global scale. I hope I’ll be able to use my qualifications, experience, and background to advocate for more just outcomes.

    This article appears in the Autumn 2021 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • Setting carbon management in stone

    Keeping global temperatures within limits deemed safe by the Intergovernmental Panel on Climate Change means doing more than slashing carbon emissions. It means reversing them.

    “If we want to be anywhere near those limits [of 1.5 or 2 C], then we have to be carbon neutral by 2050, and then carbon negative after that,” says Matěj Peč, a geoscientist and the Victor P. Starr Career Development Assistant Professor in the Department of Earth, Atmospheric, and Planetary Sciences (EAPS).

    Going negative will require finding ways to radically increase the world’s capacity to capture carbon from the atmosphere and put it somewhere where it will not leak back out. Carbon capture and storage projects already suck in tens of millions of metric tons of carbon each year. But putting a dent in emissions will mean capturing many billions of metric tons more. Today, people emit around 40 billion metric tons of carbon dioxide each year globally, mainly by burning fossil fuels.
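
    To see how far apart those two numbers are, the short sketch below (Python) puts the round figures from the paragraph above side by side; the “tens of millions of tons” of current capture is approximated here as 0.04 billion tons per year, an assumption for illustration only.

        # Round numbers from the text above; orders of magnitude, not precise figures.
        annual_emissions_gt = 40.0    # ~40 billion metric tons of CO2 emitted per year
        current_capture_gt = 0.04     # "tens of millions of tons" captured, taken as ~0.04 Gt/yr

        fraction = current_capture_gt / annual_emissions_gt
        scale_up = annual_emissions_gt / current_capture_gt
        print(f"Current capture covers roughly {fraction:.1%} of annual emissions;")
        print(f"capacity would need to grow ~{scale_up:,.0f}-fold just to match yearly emissions.")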

    Because of the need for new ideas when it comes to carbon storage, Peč has created a proposal for the MIT Climate Grand Challenges competition — a bold and sweeping effort by the Institute to support paradigm-shifting research and innovation to address the climate crisis. Called the Advanced Carbon Mineralization Initiative, his team’s proposal aims to bring geologists, chemists, and biologists together to make permanently storing carbon underground workable under different geological conditions. That means finding ways to speed up the process by which carbon pumped underground is turned into rock, or mineralized.

    “That’s what the geology has to offer,” says Peč, who is a lead on the project, along with Ed Boyden, professor of biological engineering, brain and cognitive sciences, and media arts and sciences, and Yogesh Surendranath, professor of chemistry. “You look for the places where you can safely and permanently store these huge volumes of CO2.”

    Peč’s proposal is one of 27 finalists selected from a pool of almost 100 Climate Grand Challenge proposals submitted by collaborators from across the Institute. Each finalist team received $100,000 to further develop their research proposals. A subset of finalists will be announced in April, making up a portfolio of multiyear “flagship” projects receiving additional funding and support.

    Building industries capable of going carbon negative presents huge technological, economic, environmental, and political challenges. For one, it’s expensive and energy-intensive to capture carbon from the air with existing technologies, which are “hellishly complicated,” says Peč. Much of the carbon capture underway today focuses on more concentrated sources like coal- or gas-burning power plants.

    It’s also difficult to find geologically suitable sites for storage. To keep it in the ground after it has been captured, carbon must either be trapped in airtight reservoirs or turned to stone.

    One of the best places for carbon capture and storage (CCS) is Iceland, where a number of CCS projects are up and running. The island’s volcanic geology helps speed up the mineralization process, as carbon pumped underground interacts with basalt rock at high temperatures. In that ideal setting, says Peč, 95 percent of carbon injected underground is mineralized after just two years — a geological flash.

    But Iceland’s geology is unusual. Elsewhere, reaching suitable rocks at suitable temperatures requires deeper drilling, which adds cost to already expensive projects. Further, says Peč, there’s not a complete understanding of how different factors influence the speed of mineralization.

    Peč’s Climate Grand Challenge proposal would study how carbon mineralizes under different conditions, as well as explore ways to make mineralization happen more rapidly by mixing the carbon dioxide with different fluids before injecting it underground. Another idea — and the reason why there are biologists on the team — is to learn from various organisms adept at turning carbon into calcite shells, the same stuff that makes up limestone.

    Two other carbon management proposals, led by EAPS Cecil and Ida Green Professor Bradford Hager, were also selected as Climate Grand Challenge finalists. They focus on both the technologies necessary for capturing and storing gigatons of carbon as well as the logistical challenges involved in such an enormous undertaking.

    That involves everything from choosing suitable sites for storage, to regulatory and environmental issues, as well as how to bring disparate technologies together to improve the whole pipeline. The proposals emphasize CCS systems that can be powered by renewable sources, and can respond dynamically to the needs of different hard-to-decarbonize industries, like concrete and steel production.

    “We need to have an industry that is on the scale of the current oil industry that will not be doing anything but pumping CO2 into storage reservoirs,” says Peč.

    For a problem that involves capturing enormous amounts of gas from the atmosphere and storing it underground, it’s no surprise EAPS researchers are so involved. The Earth sciences have “everything” to offer, says Peč, including the good news that the Earth has more than enough places where carbon might be stored.

    “Basically, the Earth is really, really large,” says Peč. “The reasonably accessible places, which are close to the continents, store somewhere on the order of tens of thousands to hundreds of thousands of gigatons of carbon. That’s orders of magnitude more than we need to put back in.”

  • Q&A: Climate Grand Challenges finalists on accelerating reductions in global greenhouse gas emissions

    This is the second article in a four-part interview series highlighting the work of the 27 MIT Climate Grand Challenges finalists, which received a total of $2.7 million in startup funding to advance their projects. In April, the Institute will name a subset of the finalists as multiyear flagship projects.

    Last month, the Intergovernmental Panel on Climate Change (IPCC), an expert body of the United Nations representing 195 governments, released its latest scientific report on the growing threats posed by climate change, and called for drastic reductions in greenhouse gas emissions to avert the most catastrophic outcomes for humanity and natural ecosystems.

    Bringing the global economy to net-zero carbon dioxide emissions by midcentury is complex and demands new ideas and novel approaches. The first-ever MIT Climate Grand Challenges competition focuses on four problem areas including removing greenhouse gases from the atmosphere and identifying effective, economic solutions for managing and storing these gases. The other Climate Grand Challenges research themes address using data and science to forecast climate-related risk, decarbonizing complex industries and processes, and building equity and fairness into climate solutions.

    In the following conversations prepared for MIT News, faculty from three of the teams working to solve “Removing, managing, and storing greenhouse gases” explain how they are drawing upon geological, biological, chemical, and oceanic processes to develop game-changing techniques for carbon removal, management, and storage. Their responses have been edited for length and clarity.

    Directed evolution of biological carbon fixation

    Agricultural demand is estimated to increase by 50 percent in the coming decades, while climate change is simultaneously projected to drastically reduce crop yield and predictability, requiring a dramatic acceleration of land clearing. Without immediate intervention, this will have dire impacts on wild habitat, rob hundreds of millions of subsistence farmers of their livelihoods, and create hundreds of gigatons of new emissions. Matthew Shoulders, associate professor in the Department of Chemistry, talks about the working group he is leading in partnership with Ed Boyden, the Y. Eva Tan professor of neurotechnology and Howard Hughes Medical Institute investigator at the McGovern Institute for Brain Research, that aims to massively reduce carbon emissions from agriculture by relieving core biochemical bottlenecks in the photosynthetic process using the most sophisticated synthetic biology available to science.

    Q: Describe the two pathways you have identified for improving agricultural productivity and climate resiliency.

    A: First, cyanobacteria grow millions of times faster than plants and dozens of times faster than microalgae. Engineering these cyanobacteria as a source of key food products using synthetic biology will enable food production using less land, in a fundamentally more climate-resilient manner. Second, carbon fixation, or the process by which carbon dioxide is incorporated into organic compounds, is the rate-limiting step of photosynthesis and becomes even less efficient under rising temperatures. Enhancements to Rubisco, the enzyme mediating this central process, will both improve crop yields and provide climate resilience to crops needed by 2050. Our team, led by Robbie Wilson and Max Schubert, has created new directed evolution methods tailored for both strategies, and we have already uncovered promising early results. Applying directed evolution to photosynthesis, carbon fixation, and food production has the potential to usher in a second green revolution.

    Q: What partners will you need to accelerate the development of your solutions?

    A: We have already partnered with leading agriculture institutes with deep experience in plant transformation and field trial capacity, enabling the integration of our improved carbon-dioxide-fixing enzymes into a wide range of crop plants. At the deployment stage, we will be positioned to partner with multiple industry groups to achieve improved agriculture at scale. Partnerships with major seed companies around the world will be key to leverage distribution channels in manufacturing supply chains and networks of farmers, agronomists, and licensed retailers. Support from local governments will also be critical where subsidies for seeds are necessary for farmers to earn a living, such as smallholder and subsistence farming communities. Additionally, our research provides an accessible platform that is capable of enabling and enhancing carbon dioxide sequestration in diverse organisms, extending our sphere of partnership to a wide range of companies interested in industrial microbial applications, including algal and cyanobacterial, and in carbon capture and storage.

    Strategies to reduce atmospheric methane

    One of the most potent greenhouse gases, methane is emitted by a range of human activities and natural processes that include agriculture and waste management, fossil fuel production, and changing land use practices — with no single dominant source. Together with a diverse group of faculty and researchers from the schools of Humanities, Arts, and Social Sciences; Architecture and Planning; Engineering; and Science; plus the MIT Schwarzman College of Computing, Desiree Plata, associate professor in the Department of Civil and Environmental Engineering, is spearheading the MIT Methane Network, an integrated approach to formulating scalable new technologies, business models, and policy solutions for driving down levels of atmospheric methane.

    Q: What is the problem you are trying to solve and why is it a “grand challenge”?

    A: Removing methane from the atmosphere, or stopping it from getting there in the first place, could change the rates of global warming in our lifetimes, saving as much as half a degree of warming by 2050. Methane sources are distributed in space and time and tend to be very dilute, making the removal of methane a challenge that pushes the boundaries of contemporary science and engineering capabilities. Because the primary sources of atmospheric methane are linked to our economy and culture — from clearing wetlands for cultivation to natural gas extraction and dairy and meat production — the social and economic implications of a fundamentally changed methane management system are far-reaching. Nevertheless, these problems are tractable and could significantly reduce the effects of climate change in the near term.

    Q: What is known about the rapid rise in atmospheric methane and what questions remain unanswered?

    A: Tracking atmospheric methane is a challenge in and of itself, but it has become clear that emissions are large, accelerated by human activity, and cause damage right away. While some progress has been made in satellite-based measurements of methane emissions, there is a need to translate that data into actionable solutions. Several key questions remain around improving sensor accuracy and sensor network design to optimize placement, improve response time, and stop leaks with autonomous controls on the ground. Additional questions involve deploying low-level methane oxidation systems and novel catalytic materials at coal mines, dairy barns, and other enriched sources; evaluating the policy strategies and the socioeconomic impacts of new technologies with an eye toward decarbonization pathways; and scaling technology with viable business models that stimulate the economy while reducing greenhouse gas emissions.

    Deploying versatile carbon capture technologies and storage at scale

    There is growing consensus that simply capturing current carbon dioxide emissions is no longer sufficient — it is equally important to target distributed sources such as the oceans and air where carbon dioxide has accumulated from past emissions. Betar Gallant, the American Bureau of Shipping Career Development Associate Professor of Mechanical Engineering, discusses her work with Bradford Hager, the Cecil and Ida Green Professor of Earth Sciences in the Department of Earth, Atmospheric and Planetary Sciences, and T. Alan Hatton, the Ralph Landau Professor of Chemical Engineering and director of the School of Chemical Engineering Practice, to dramatically advance the portfolio of technologies available for carbon capture and permanent storage at scale. (A team led by Assistant Professor Matěj Peč of EAPS is also addressing carbon capture and storage.)

    Q: Carbon capture and storage processes have been around for several decades. What advances are you seeking to make through this project?

    A: Today’s capture paradigms are costly, inefficient, and complex. We seek to address this challenge by developing a new generation of capture technologies that operate using renewable energy inputs, are sufficiently versatile to accommodate emerging industrial demands, are adaptive and responsive to varied societal needs, and can be readily deployed to a wider landscape.

    New approaches will require the redesign of the entire capture process, necessitating basic science and engineering efforts that are broadly interdisciplinary in nature. At the same time, incumbent technologies have been optimized largely for integration with coal- or natural gas-burning power plants. Future applications must shift away from legacy emitters in the power sector towards hard-to-mitigate sectors such as cement, iron and steel, chemical, and hydrogen production. It will become equally important to develop and optimize systems targeted for much lower concentrations of carbon dioxide, such as in oceans or air. Our effort will expand basic science studies as well as human impacts of storage, including how public engagement and education can alter attitudes toward greater acceptance of carbon dioxide geologic storage.

    Q: What are the expected impacts of your proposed solution, both positive and negative?

    A: Renewable energy cannot be deployed rapidly enough everywhere, nor can it supplant all emissions sources, nor can it account for past emissions. Carbon capture and storage (CCS) provides a demonstrated method to address emissions that will undoubtedly occur before the transition to low-carbon energy is completed. CCS can succeed even if other strategies fail. It also allows for developing nations, which may need to adopt renewables over longer timescales, to see equitable economic development while avoiding the most harmful climate impacts. And, CCS enables the future viability of many core industries and transportation modes, many of which do not have clear alternatives before 2050, let alone 2040 or 2030.

    The perceived risks of potential leakage and earthquakes associated with geologic storage can be minimized by choosing suitable geologic formations for storage. Despite CCS providing a well-understood pathway for removing enough of the carbon dioxide already emitted into the atmosphere, some environmentalists vigorously oppose it, fearing that CCS rewards oil companies and disincentivizes the transition away from fossil fuels. We believe that it is more important to keep in mind the necessity of meeting key climate targets for the sake of the planet, and welcome those who can help.

  • How to clean solar panels without water

    Solar power is expected to reach 10 percent of global power generation by the year 2030, and much of that is likely to be located in desert areas, where sunlight is abundant. But the accumulation of dust on solar panels or mirrors is already a significant issue — it can reduce the output of photovoltaic panels by as much as 30 percent in just one month — so regular cleaning is essential for such installations.

    But cleaning solar panels currently is estimated to use about 10 billion gallons of water per year — enough to supply drinking water for up to 2 million people. Attempts at waterless cleaning are labor intensive and tend to cause irreversible scratching of the surfaces, which also reduces efficiency. Now, a team of researchers at MIT has devised a way of automatically cleaning solar panels, or the mirrors of solar thermal plants, in a waterless, no-contact system that could significantly reduce the dust problem, they say.

    The new system uses electrostatic repulsion to cause dust particles to detach and virtually leap off the panel’s surface, without the need for water or brushes. To activate the system, a simple electrode passes just above the solar panel’s surface, imparting an electrical charge to the dust particles, which are then repelled by a charge applied to the panel itself. The system can be operated automatically using a simple electric motor and guide rails along the side of the panel. The research is described today in the journal Science Advances, in a paper by MIT graduate student Sreedath Panat and professor of mechanical engineering Kripa Varanasi.

    Despite concerted efforts worldwide to develop ever more efficient solar panels, Varanasi says, “a mundane problem like dust can actually put a serious dent in the whole thing.” Lab tests conducted by Panat and Varanasi showed that the drop-off in energy output from the panels happens steeply at the very beginning of the dust accumulation process and can easily reach a 30 percent reduction after just one month without cleaning. Even a 1 percent reduction in power for a 150-megawatt solar installation, they calculated, could result in a $200,000 loss in annual revenue. The researchers say that globally, a 3 to 4 percent reduction in power output from solar plants would amount to a loss of between $3.3 billion and $5.5 billion.
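
    The $200,000 figure can be reproduced with back-of-the-envelope arithmetic, as in the sketch below; the capacity factor and electricity price are assumptions chosen for illustration, not values given by the researchers.

        # Rough check of the "$200,000 per year for a 1 percent loss" figure.
        # Capacity factor and electricity price are assumed values, not from the paper.
        capacity_mw = 150            # installation size cited in the article
        capacity_factor = 0.25       # assumed average output fraction for a utility PV plant
        price_per_mwh = 60.0         # assumed wholesale electricity price, $/MWh
        hours_per_year = 8760

        annual_energy_mwh = capacity_mw * capacity_factor * hours_per_year   # ~328,500 MWh
        loss_fraction = 0.01                                                 # 1 percent soiling loss
        annual_loss_usd = annual_energy_mwh * loss_fraction * price_per_mwh

        print(f"Annual generation: {annual_energy_mwh:,.0f} MWh")
        print(f"Revenue lost to a 1 percent output reduction: ~${annual_loss_usd:,.0f}")  # ~$197,000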

    “There is so much work going on in solar materials,” Varanasi says. “They’re pushing the boundaries, trying to gain a few percent here and there in improving the efficiency, and here you have something that can obliterate all of that right away.”

    Many of the largest solar power installations in the world, including ones in China, India, the U.A.E., and the U.S., are located in desert regions. The water used for cleaning these solar panels using pressurized water jets has to be trucked in from a distance, and it has to be very pure to avoid leaving behind deposits on the surfaces. Dry scrubbing is sometimes used but is less effective at cleaning the surfaces and can cause permanent scratching that also reduces light transmission.

    Water cleaning makes up about 10 percent of the operating costs of solar installations. The new system could potentially reduce these costs while improving the overall power output by allowing for more frequent automated cleanings, the researchers say.

    “The water footprint of the solar industry is mind boggling,” Varanasi says, and it will be increasing as these installations continue to expand worldwide. “So, the industry has to be very careful and thoughtful about how to make this a sustainable solution.”

    Other groups have tried to develop electrostatics-based solutions, but these have relied on a layer called an electrodynamic screen, which uses interdigitated electrodes. These screens can have defects that allow moisture in and cause them to fail, Varanasi says. While they might be useful in a place like Mars, he says, where moisture is not an issue, even in desert environments on Earth this can be a serious problem.

    The new system they developed requires only an electrode, which can be a simple metal bar, to pass over the panel, producing an electric field that imparts a charge to the dust particles as it goes. An opposite charge applied to a transparent conductive layer, just a few nanometers thick, deposited on the glass covering of the solar panel then repels the particles. By calculating the right voltage to apply, the researchers found a range sufficient to overcome the pull of gravity and adhesion forces and cause the dust to lift away.
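
    For a rough sense of the force balance at work, the sketch below compares the electrostatic force on a charged dust particle with its weight. Every parameter value is an illustrative assumption (particle size, density, charge, and field strength), and the adhesion forces the researchers also had to overcome are not modeled here.

        import math

        # All parameter values are illustrative assumptions, not figures from the paper.
        radius_m = 10e-6          # assumed dust particle radius: 10 micrometers
        density = 2650.0          # assumed particle density (quartz-like), kg/m^3
        charge_c = 1e-14          # assumed net charge imparted by the electrode, coulombs
        field_v_per_m = 5e5       # assumed electric field between electrode and panel, V/m
        g = 9.81                  # gravitational acceleration, m/s^2

        mass_kg = density * (4.0 / 3.0) * math.pi * radius_m ** 3
        weight_n = mass_kg * g
        electrostatic_n = charge_c * field_v_per_m

        print(f"particle weight:     {weight_n:.2e} N")
        print(f"electrostatic force: {electrostatic_n:.2e} N")
        print("lifts off (ignoring adhesion):", electrostatic_n > weight_n)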

    Using specially prepared laboratory samples of dust with a range of particle sizes, experiments proved that the process works effectively on a laboratory-scale test installation, Panat says. The tests showed that humidity in the air provided a thin coating of water on the particles, which turned out to be crucial to making the effect work. “We performed experiments at varying humidities from 5 percent to 95 percent,” Panat says. “As long as the ambient humidity is greater than 30 percent, you can remove almost all of the particles from the surface, but as humidity decreases, it becomes harder.”

    Varanasi says that “the good news is that when you get to 30 percent humidity, most deserts actually fall in this regime.” And even those that are typically drier than that tend to have higher humidity in the early morning hours, leading to dew formation, so the cleaning could be timed accordingly.

    “Moreover, unlike some of the prior work on electrodynamic screens, which actually do not work at high or even moderate humidity, our system can work at humidity even as high as 95 percent, indefinitely,” Panat says.

    In practice, at scale, each solar panel could be fitted with railings on each side, with an electrode spanning across the panel. A small electric motor, perhaps using a tiny portion of the output from the panel itself, would drive a belt system to move the electrode from one end of the panel to the other, causing all the dust to fall away. The whole process could be automated or controlled remotely. Alternatively, thin strips of conductive transparent material could be permanently arranged above the panel, eliminating the need for moving parts.

    By eliminating the dependency on trucked-in water, by eliminating the buildup of dust that can contain corrosive compounds, and by lowering the overall operational costs, such systems have the potential to significantly improve the overall efficiency and reliability of solar installations, Varanasi says.

    The research was supported by Italian energy firm Eni S.p.A. through the MIT Energy Initiative.

  • Using soap to remove micropollutants from water

    Imagine millions of soapy sponges the size of human cells that can clean water by soaking up contaminants. This simplistic model is used to describe technology that MIT chemical engineers have recently developed to remove micropollutants from water — a concerning, worldwide problem.

    Patrick S. Doyle, the Robert T. Haslam Professor of Chemical Engineering, PhD student Devashish Pratap Gokhale, and undergraduate Ian Chen recently published their research on micropollutant removal in the journal ACS Applied Polymer Materials. The work is funded by MIT’s Abdul Latif Jameel Water and Food Systems Lab (J-WAFS).

    In spite of their low concentrations (about 0.01–100 micrograms per liter), micropollutants can be hazardous to the ecosystem and to human health. They come from a variety of sources and have been detected in almost all bodies of water, says Gokhale. Pharmaceuticals passing through people and animals, for example, can end up as micropollutants in the water supply. Others, like the endocrine disruptor bisphenol A (BPA), can leach from plastics during industrial manufacturing. Pesticides, dyes, petrochemicals, and per- and polyfluoroalkyl substances, more commonly known as PFAS, are also examples of micropollutants, as are some heavy metals like lead and arsenic. These are just some of the many kinds of micropollutants, all of which can be toxic to humans and animals over time, potentially causing cancer, organ damage, developmental defects, or other adverse effects.

    Micropollutants are numerous, but because their collective mass is small, they are difficult to remove from water. Currently, the most common practice for removing micropollutants from water is activated carbon adsorption, in which water passes through a carbon filter that removes only 30 percent of micropollutants. Activated carbon requires high temperatures to produce and regenerate, which demands specialized equipment and consumes large amounts of energy. Reverse osmosis can also be used to remove micropollutants from water; however, “it doesn’t lead to good elimination of this class of molecules, because of both their concentration and their molecular structure,” explains Doyle.

    Inspired by soap

    When devising their solution for how to remove micropollutants from water, the MIT researchers were inspired by a common household cleaning supply — soap. Soap cleans everything from our hands and bodies to dirty dishes to clothes, so perhaps the chemistry of soap could also be applied to sanitizing water. Soap has molecules called surfactants which have both hydrophobic (water-hating) and hydrophilic (water-loving) components. When water comes in contact with soap, the hydrophobic parts of the surfactant stick together, assembling into spherical structures called micelles with the hydrophobic portions of the molecules in the interior. The hydrophobic micelle cores trap and help carry away oily substances like dirt. 

    Doyle’s lab synthesized micelle-laden hydrogel particles to essentially cleanse water. Gokhale explains that they used microfluidics which “involve processing fluids on very small, micron-like scales” to generate uniform polymeric hydrogel particles continuously and reproducibly. These hydrogels, which are porous and absorbent, incorporate a surfactant, a photoinitiator (a molecule that creates reactive species), and a cross-linking agent known as PEGDA. The surfactant assembles into micelles that are chemically bonded to the hydrogel using ultraviolet light. When water flows through this micro-particle system, micropollutants latch onto the micelles and separate from the water. The physical interaction used in the system is strong enough to pull micropollutants from water, but weak enough that the hydrogel particles can be separated from the micropollutants, restabilized, and reused. Lab testing shows that both the speed and extent of pollutant removal increase when the amount of surfactant incorporated into the hydrogels is increased.

    “We’ve shown that in terms of rate of pullout, which is what really matters when you scale this up for industrial use, that with our initial format, we can already outperform the activated carbon,” says Doyle. “We can actually regenerate these particles very easily at room temperature. Nearly 10 regeneration cycles with minimal change in performance,” he adds.

    Regeneration of the particles occurs by soaking the micelles in 90 percent ethanol, whereby “all the pollutants just come out of the particles and back into the ethanol,” says Gokhale. Ethanol is biosafe at low concentrations, inexpensive, and combustible, allowing for safe and economically feasible disposal. The recycling of the hydrogel particles makes this technology sustainable, which is a large advantage over activated carbon. The hydrogels can also be tuned to capture any hydrophobic micropollutant, making this system a novel, flexible approach to water purification.

    Scaling up

    The team experimented in the lab using 2-naphthol, a micropollutant that is an organic pollutant of concern and known to be difficult to remove by using conventional water filtration methods. They hope to continue testing with real water samples. 

    “Right now, we spike one micropollutant into pure lab water. We’d like to get water samples from the natural environment, that we can study and look at experimentally,” says Doyle. 

    By using microfluidics to increase particle production, Doyle and his lab hope to make household-scale filters to be tested with real wastewater. They then anticipate scaling up to municipal water treatment or even industrial wastewater treatment. 

    The lab recently filed an international patent application for their hydrogel technology that uses immobilized micelles. They plan to continue this work by experimenting with different kinds of hydrogels for the removal of heavy metal contaminants like lead from water. 

    Societal impacts

    Funded by a 2019 J-WAFS seed grant that is currently ongoing, this research has the potential to improve the speed, precision, efficiency, and environmental sustainability of water purification systems across the world. 

    “I always wanted to do work which had a social impact, and I was also always interested in water, because I think it’s really cool,” says Gokhale. He notes, “it’s really interesting how water sort of fits into different kinds of fields … we have to consider the cultures of peoples, how we’re going to use this, and then just the equity of these water processes.” Originally from India, Gokhale says he’s seen places that have barely any water at all and others that have floods year after year. “There’s a lot of interesting work to be done, and I think it’s work in this area that’s really going to impact a lot of people’s lives in years to come,” Gokhale says.

    Doyle adds, “water is the most important thing, perhaps for the next decades to come, so it’s very fulfilling to work on something that is so important to the whole world.”

  • Using nature’s structures in wooden buildings

    Concern about climate change has focused significant attention on the buildings sector, in particular on the extraction and processing of construction materials. The concrete and steel industries together are responsible for as much as 15 percent of global carbon dioxide emissions. In contrast, wood provides a natural form of carbon sequestration, so there’s a move to use timber instead. Indeed, some countries are calling for public buildings to be made at least partly from timber, and large-scale timber buildings have been appearing around the world.

    Observing those trends, Caitlin Mueller ’07, SM ’14, PhD ’14, an associate professor of architecture and of civil and environmental engineering in the Building Technology Program at MIT, sees an opportunity for further sustainability gains. As the timber industry seeks to produce wooden replacements for traditional concrete and steel elements, the focus is on harvesting the straight sections of trees. Irregular sections such as knots and forks are turned into pellets and burned, or ground up to make garden mulch, which will decompose within a few years; both approaches release the carbon trapped in the wood to the atmosphere.

    For the past four years, Mueller and her Digital Structures research group have been developing a strategy for “upcycling” those waste materials by using them in construction — not as cladding or finishes aimed at improving appearance, but as structural components. “The greatest value you can give to a material is to give it a load-bearing role in a structure,” she says. But when builders use virgin materials, those structural components are the most emissions-intensive parts of buildings due to their large volume of high-strength materials. Using upcycled materials in place of those high-carbon systems is therefore especially impactful in reducing emissions.

    Mueller and her team focus on tree forks — that is, spots where the trunk or branch of a tree divides in two, forming a Y-shaped piece. In architectural drawings, there are many similar Y-shaped nodes where straight elements come together. In such cases, those units must be strong enough to support critical loads.

    “Tree forks are naturally engineered structural connections that work as cantilevers in trees, which means that they have the potential to transfer force very efficiently thanks to their internal fiber structure,” says Mueller. “If you take a tree fork and slice it down the middle, you see an unbelievable network of fibers that are intertwining to create these often three-dimensional load transfer points in a tree. We’re starting to do the same thing using 3D printing, but we’re nowhere near what nature does in terms of complex fiber orientation and geometry.”

    She and her team have developed a five-step “design-to-fabrication workflow” that combines natural structures such as tree forks with the digital and computational tools now used in architectural design. While there’s long been a “craft” movement to use natural wood in railings and decorative features, the use of computational tools makes it possible to use wood in structural roles — without excessive cutting, which is costly and may compromise the natural geometry and internal grain structure of the wood.

    Given the wide use of digital tools by today’s architects, Mueller believes that her approach is “at least potentially scalable and potentially achievable within our industrialized materials processing systems.” In addition, by combining tree forks with digital design tools, the novel approach can also support the trend among architects to explore new forms. “Many iconic buildings built in the past two decades have unexpected shapes,” says Mueller. “Tree branches have a very specific geometry that sometimes lends itself to an irregular or nonstandard architectural form — driven not by some arbitrary algorithm but by the material itself.”

    Step 0: Find a source, set goals

    Before starting their design-to-fabrication process, the researchers needed to locate a source of tree forks. Mueller found help in the Urban Forestry Division of the City of Somerville, Massachusetts, which maintains a digital inventory of more than 2,000 street trees — including more than 20 species — and records information about the location, approximate trunk diameter, and condition of each tree.

    With permission from the forestry division, the team was on hand in 2018 when a large group of trees was cut down near the site of the new Somerville High School. Among the heavy equipment on site was a chipper, poised to turn all the waste wood into mulch. Instead, the workers obligingly put the waste wood into the researchers’ truck to be brought to MIT.

    In their project, the MIT team sought not only to upcycle that waste material but also to use it to create a structure that would be valued by the public. “Where I live, the city has had to take down a lot of trees due to damage from an invasive species of beetle,” Mueller explains. “People get really upset — understandably. Trees are an important part of the urban fabric, providing shade and beauty.” She and her team hoped to reduce that animosity by “reinstalling the removed trees in the form of a new functional structure that would recreate the atmosphere and spatial experience previously provided by the felled trees.”

    With their source and goals identified, the researchers were ready to demonstrate the five steps in their design-to-fabrication workflow for making spatial structures using an inventory of tree forks.

    Step 1: Create a digital material library

    The first task was to turn their collection of tree forks into a digital library. They began by cutting off excess material to produce isolated tree forks. They then created a 3D scan of each fork. Mueller notes that as a result of recent progress in photogrammetry (measuring objects using photographs) and 3D scanning, they could create high-resolution digital representations of the individual tree forks with relatively inexpensive equipment, even using apps that run on a typical smartphone.

    In the digital library, each fork is represented by a “skeletonized” version showing three straight bars coming together at a point. The relative geometry and orientation of the branches are of particular interest because they determine the internal fiber orientation that gives the component its strength.
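
    The article does not spell out how the digital library is stored. As a rough sketch, assuming each skeletonized fork reduces to three unit direction vectors radiating from the fork point, an entry for one fork might look like the following (the class name, fields, and example coordinates are all hypothetical):

    ```python
    from dataclasses import dataclass

    import numpy as np


    @dataclass
    class SkeletonFork:
        """One entry in the digital material library: a skeletonized tree fork,
        reduced to three unit vectors radiating from the fork point."""
        fork_id: str
        branches: np.ndarray  # shape (3, 3): one row per branch direction

        @classmethod
        def from_endpoints(cls, fork_id, center, endpoints):
            """Build a fork from the scanned fork point and three branch endpoints."""
            dirs = np.asarray(endpoints, dtype=float) - np.asarray(center, dtype=float)
            dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # normalize each row
            return cls(fork_id, dirs)


    # Hypothetical example: a fork whose branches point up-left, up-right, and down.
    fork = SkeletonFork.from_endpoints(
        "somerville-023",
        center=(0.0, 0.0, 0.0),
        endpoints=[(-0.4, 0.1, 1.0), (0.5, -0.1, 0.9), (0.0, 0.0, -1.2)],
    )
    ```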

    Step 2: Find the best match between the initial design and the material library

    Like a tree, a typical architectural design is filled with Y-shaped nodes where three straight elements meet up to support a critical load. The goal was therefore to match the tree forks in the material library with the nodes in a sample architectural design.

    First, the researchers developed a “mismatch metric” for quantifying how well the geometries of a particular tree fork aligned with a given design node. “We’re trying to line up the straight elements in the structure with where the branches originally were in the tree,” explains Mueller. “That gives us the optimal orientation for load transfer and maximizes use of the inherent strength of the wood fiber.” The poorer the alignment, the higher the mismatch metric.

    The goal was to get the best overall distribution of all the tree forks among the nodes in the target design. Therefore, the researchers needed to try different fork-to-node distributions and, for each distribution, add up the individual fork-to-node mismatch errors to generate an overall, or global, matching score. The distribution with the best matching score would produce the most structurally efficient use of the total tree fork inventory.

    Since performing that process manually would take far too long to be practical, they turned to the “Hungarian algorithm,” a technique developed in 1955 for solving such problems. “The brilliance of the algorithm is solving that [matching] problem very quickly,” Mueller says. She notes that it’s a very general-use algorithm. “It’s used for things like marriage match-making. It can be used any time you have two collections of things that you’re trying to find unique matches between. So, we definitely didn’t invent the algorithm, but we were the first to identify that it could be used for this problem.”

    The researchers performed repeated tests to show possible distributions of the tree forks in their inventory and found that the matching score improved as the number of forks available in the material library increased — up to a point. In general, the researchers concluded that the mismatch score was lowest, and thus best, when there were about three times as many forks in the material library as there were nodes in the target design.
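
    Neither the exact mismatch metric nor its implementation is given here, but the matching step can be sketched in a few lines of Python. The sketch below assumes the three-direction-vector representation from the earlier sketch, uses an angular mismatch minimized over the six possible branch-to-bar pairings, and relies on SciPy’s linear_sum_assignment as an off-the-shelf solver for the Hungarian-style assignment. When the library holds more forks than the design has nodes (such as the roughly threefold oversupply noted above), the extra forks are simply left unmatched.

    ```python
    from itertools import permutations

    import numpy as np
    from scipy.optimize import linear_sum_assignment


    def mismatch(node_dirs, fork_dirs):
        """Illustrative mismatch metric: total angle (radians) between a design
        node's three bar directions and a fork's three branch directions,
        minimized over the six possible branch-to-bar pairings."""
        node_dirs, fork_dirs = np.asarray(node_dirs), np.asarray(fork_dirs)
        best = np.inf
        for perm in permutations(range(3)):
            cos = np.einsum("ij,ij->i", node_dirs, fork_dirs[list(perm)])
            best = min(best, float(np.sum(np.arccos(np.clip(cos, -1.0, 1.0)))))
        return best


    def match_forks_to_nodes(node_list, fork_list):
        """Globally optimal fork-to-node assignment (Hungarian-style), returning
        (node index, fork index) pairs and the global matching score."""
        cost = np.array([[mismatch(n, f) for f in fork_list] for n in node_list])
        rows, cols = linear_sum_assignment(cost)  # minimizes total mismatch
        return list(zip(rows, cols)), float(cost[rows, cols].sum())
    ```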

    Step 3: Balance designer intention with structural performance

    The next step in the process was to incorporate the intention or preference of the designer. To permit that flexibility, each design includes a limited number of critical parameters, such as bar length and bending strain. Using those parameters, the designer can manually change the overall shape, or geometry, of the design or can use an algorithm that automatically changes, or “morphs,” the geometry. And every time the design geometry changes, the Hungarian algorithm recalculates the optimal fork-to-node matching.

    “Because the Hungarian algorithm is extremely fast, all the morphing and the design updating can be really fluid,” notes Mueller. In addition, any change to a new geometry is followed by a structural analysis that checks the deflections, strain energy, and other performance measures of the structure. On occasion, the automatically generated design that yields the best matching score may deviate far from the designer’s initial intention. In such cases, an alternative solution can be found that satisfactorily balances the design intention with a low matching score.
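
    As a toy illustration of how re-matching and structural checks can sit inside that loop, the sketch below reuses match_forks_to_nodes from the earlier sketch. Here node_dirs_from_params and structural_check are hypothetical stand-ins for the parametric geometry model and the structural analysis, and the random perturbation is only a placeholder for whatever morphing the designer or algorithm actually applies.

    ```python
    import numpy as np


    def morphing_loop(initial_params, node_dirs_from_params, fork_list,
                      structural_check, steps=50, step_size=0.05, seed=0):
        """Toy morphing loop (not the team's actual workflow): perturb the design
        parameters, re-run the fork-to-node matching, and keep a candidate only
        if it lowers the global matching score and passes the structural check."""
        rng = np.random.default_rng(seed)
        params = np.asarray(initial_params, dtype=float)
        _, best_score = match_forks_to_nodes(node_dirs_from_params(params), fork_list)
        for _ in range(steps):
            candidate = params + step_size * rng.standard_normal(params.shape)
            _, score = match_forks_to_nodes(node_dirs_from_params(candidate), fork_list)
            if score < best_score and structural_check(candidate):
                params, best_score = candidate, score
        return params, best_score
    ```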

    Step 4: Automatically generate the machine code for fast cutting

    When the structural geometry and distribution of tree forks have been finalized, it’s time to think about actually building the structure. To simplify assembly and maintenance, the researchers prepare the tree forks by recutting their end faces to better match adjoining straight timbers and cutting off any remaining bark to reduce susceptibility to rot and fire.

    To guide that process, they developed a custom algorithm that automatically computes the cuts needed to make a given tree fork fit into its assigned node and to strip off the bark. The goal is to remove as little material as possible but also to avoid a complex, time-consuming machining process. “If we make too few cuts, we’ll cut off too much of the critical structural material. But we don’t want to make a million tiny cuts because it will take forever,” Mueller explains.
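
    The cut-planning algorithm itself is not described in detail, but the tradeoff Mueller describes can be pictured as a simple scoring problem. The sketch below is purely illustrative, with made-up cost curves: fewer cuts remove more structural material, while more cuts take longer to machine.

    ```python
    def choose_cut_count(removed_volume, machining_time, weight=1.0,
                         candidates=range(3, 21)):
        """Hypothetical scoring of the cut-count tradeoff: pick the number of
        planar cuts that best balances lost structural material against
        machining time (both supplied as functions of the cut count)."""
        return min(candidates,
                   key=lambda n: removed_volume(n) + weight * machining_time(n))


    # Made-up curves: removed material falls off with more cuts,
    # machining time grows with more cuts.
    best = choose_cut_count(lambda n: 100.0 / n, lambda n: 2.0 * n)
    ```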

    The team uses facilities at the Autodesk Boston Technology Center Build Space, where the robots are far larger than any at MIT and the processing is all automated. To prepare each tree fork, they mount it on a robotic arm that pushes the joint through a traditional band saw in different orientations, guided by computer-generated instructions. The robot also mills all the holes for the structural connections. “That’s helpful because it ensures that everything is aligned the way you expect it to be,” says Mueller.

    Step 5: Assemble the available forks and linear elements to build the structure

    The final step is to assemble the structure. The tree-fork-based joints are all irregular, and combining them with the precut, straight wooden elements could be difficult. However, they’re all labeled. “All the information for the geometry is embedded in the joint, so the assembly process is really low-tech,” says Mueller. “It’s like a child’s toy set. You just follow the instructions on the joints to put all the pieces together.”

    They installed their final structure temporarily on the MIT campus, but Mueller notes that it was only a portion of the structure they plan to eventually build. “It had 12 nodes that we designed and fabricated using our process,” she says, adding that the team’s work was “a little interrupted by the pandemic.” As activity on campus resumes, the researchers plan to finish designing and building the complete structure, which will include about 40 nodes and will be installed as an outdoor pavilion on the site of the felled trees in Somerville.

    In addition, they will continue their research. Plans include working with larger material libraries, some with multibranch forks, and replacing their 3D-scanning technique with computerized tomography scanning technologies that can automatically generate a detailed geometric representation of a tree fork, including its precise fiber orientation and density. And in a parallel project, they’ve been exploring using their process with other sources of materials, with one case study focusing on using material from a demolished wood-framed house to construct more than a dozen geodesic domes.

    To Mueller, the work to date already provides new guidance for the architectural design process. With digital tools, it has become easy for architects to analyze the embodied carbon or future energy use of a design option. “Now we have a new metric of performance: How well am I using available resources?” she says. “With the Hungarian algorithm, we can compute that metric basically in real time, so we can work rapidly and creatively with that as another input to the design process.”

    This research was supported by MIT’s School of Architecture and Planning via the HASS Award.

    This article appears in the Autumn 2021 issue of Energy Futures, the magazine of the MIT Energy Initiative.