More stories

  • Designing tiny filters to solve big problems

    For many industrial processes, the typical way to separate gases, liquids, or ions is with heat, using slight differences in boiling points to purify mixtures. These thermal processes account for roughly 10 percent of the energy use in the United States.

    MIT chemical engineer Zachary Smith wants to reduce costs and carbon footprints by replacing these energy-intensive processes with highly efficient filters that can separate gases, liquids, and ions at room temperature.

    In his lab at MIT, Smith is designing membranes with tiny pores that can filter molecules based on their size. These membranes could be useful for purifying biogas, capturing carbon dioxide from power plant emissions, or generating hydrogen fuel.

    “We’re taking materials that have unique capabilities for separating molecules and ions with precision, and applying them to applications where the current processes are not efficient, and where there’s an enormous carbon footprint,” says Smith, an associate professor of chemical engineering.

    Smith and several former students have founded a company called Osmoses that is working toward developing these materials for large-scale use in gas purification. Removing the need for high temperatures in these widespread industrial processes could have a significant impact on energy consumption, potentially reducing it by as much as 90 percent.

    “I would love to see a world where we could eliminate thermal separations, and where heat is no longer a problem in creating the things that we need and producing the energy that we need,” Smith says.

    Hooked on research

    As a high school student, Smith was drawn to engineering but didn’t have many engineering role models. Both of his parents were physicians, and they always encouraged him to work hard in school.

    “I grew up without knowing many engineers, and certainly no chemical engineers. But I knew that I really liked seeing how the world worked. I was always fascinated by chemistry and seeing how mathematics helped to explain this area of science,” recalls Smith, who grew up near Harrisburg, Pennsylvania. “Chemical engineering seemed to have all those things built into it, but I really had no idea what it was.”

    At Penn State University, Smith worked with a professor named Henry “Hank” Foley on a research project designing carbon-based materials to create a “molecular sieve” for gas separation. Through a time-consuming and iterative layering process, he created a sieve that could purify oxygen and nitrogen from air.

    “I kept adding more and more coatings of a special material that I could subsequently carbonize, and eventually I started to get selectivity. In the end, I had made a membrane that could sieve molecules that only differed by 0.18 angstrom in size,” he says. “I got hooked on research at that point, and that’s what led me to do more things in the area of membranes.”

    After graduating from college in 2008, Smith pursued graduate studies in chemical engineering at the University of Texas at Austin. There, he continued developing membranes for gas separation, this time using a different class of materials — polymers. By controlling polymer structure, he was able to create films with pores that filter out specific molecules, such as carbon dioxide or other gases.

    “Polymers are a type of material that you can actually form into big devices that can integrate into world-class chemical plants. So, it was exciting to see that there was a scalable class of materials that could have a real impact on addressing questions related to CO2 and other energy-efficient separations,” Smith says.

    After finishing his PhD, he decided he wanted to learn more chemistry, which led him to a postdoctoral fellowship at the University of California at Berkeley.

    “I wanted to learn how to make my own molecules and materials. I wanted to run my own reactions and do it in a more systematic way,” he says.

    At Berkeley, he learned how to make compounds called metal-organic frameworks (MOFs) — cage-like molecules that have potential applications in gas separation and many other fields. He also realized that while he enjoyed chemistry, he was definitely a chemical engineer at heart.

    “I learned a ton when I was there, but I also learned a lot about myself,” he says. “As much as I love chemistry, work with chemists, and advise chemists in my own group, I’m definitely a chemical engineer, really focused on the process and application.”

    Solving global problems

    While interviewing for faculty jobs, Smith found himself drawn to MIT because of the mindset of the people he met.

    “I began to realize not only how talented the faculty and the students were, but the way they thought was very different than other places I had been,” he says. “It wasn’t just about doing something that would move their field a little bit forward. They were actually creating new fields. There was something inspirational about the type of people that ended up at MIT who wanted to solve global problems.”

    In his lab at MIT, Smith is now tackling some of those global problems, including water purification, critical element recovery, renewable energy, battery development, and carbon sequestration.

    In a close collaboration with Yan Xia, a professor at Stanford University, Smith recently developed gas separation membranes that incorporate a novel type of polymer known as “ladder polymers,” which are currently being scaled for deployment at his startup. Historically, using polymers for gas separation has been limited by a tradeoff between permeability and selectivity: membranes that permit a faster flow of gases through the membrane tend to be less selective, allowing impurities to get through.

    Using ladder polymers, which consist of double strands connected by rung-like bonds, the researchers were able to create gas separation membranes that are both highly permeable and very selective. The boost in permeability — a 100- to 1,000-fold improvement over earlier materials — could enable membranes to replace some of the high-energy techniques now used to separate gases, Smith says.

    “This allows you to envision large-scale industrial problems solved with miniaturized devices,” he says. “If you can really shrink down the system, then the solutions we’re developing in the lab could easily be applied to big industries like the chemicals industry.”

    These and other advances have come from the collaborators, students, postdocs, and researchers on Smith’s team.

    “I have a great research team of talented and hard-working students and postdocs, and I get to teach on topics that have been instrumental in my own professional career,” Smith says. “MIT has been a playground to explore and learn new things. I am excited for what my team will discover next, and grateful for an opportunity to help solve many important global problems.”
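
    The miniaturization claim follows from simple membrane arithmetic: for a fixed gas flow and pressure difference, the required membrane area scales inversely with permeance. A minimal sketch of that scaling (the permeance values and operating conditions below are illustrative assumptions, not figures from Smith’s work):

```python
# Required membrane area for a target gas flow:
#   flow = permeance * area * delta_p   =>   area = flow / (permeance * delta_p)
# Units: flow in mol/s, permeance in mol/(m^2 * s * Pa), delta_p in Pa.

def membrane_area(flow_mol_s: float, permeance: float, delta_p_pa: float) -> float:
    """Membrane area (m^2) needed to pass a given molar flow."""
    return flow_mol_s / (permeance * delta_p_pa)

# Illustrative numbers only (not from the article):
flow = 1.0          # mol/s of gas to be separated
delta_p = 1e6       # ~10 bar partial-pressure driving force
baseline = 1e-9     # permeance of a conventional polymer film (assumed)
ladder = 100e-9     # a hypothetical 100x more permeable ladder polymer

a_base = membrane_area(flow, baseline, delta_p)
a_ladder = membrane_area(flow, ladder, delta_p)
print(a_base, a_ladder)  # the 100x permeance boost shrinks the module 100-fold
```

    Under these assumptions a 100-fold permeance gain cuts the membrane area from 1,000 m² to 10 m² for the same job, which is the sense in which a more permeable material lets a miniaturized device tackle an industrial-scale separation.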

  • Q&A: Examining American attitudes on global climate policies

    Does the United States have a “moral responsibility” for providing aid to poor nations — which have a significantly smaller carbon footprint and face catastrophic climate events at a much higher rate than wealthy countries?

    A study published Dec. 11 in Climatic Change explores U.S. public opinion on global climate policies in light of the nation’s historic role as a leading contributor of carbon emissions. The randomized, experimental survey specifically investigates American attitudes toward such a moral responsibility. The work was led by MIT Professor Evan Lieberman, the Total Chair on Contemporary African Politics and director of the MIT Center for International Studies, and Volha Charnysh, the Ford Career Development Associate Professor of Political Science, and was co-authored with MIT political science PhD student Jared Kalow and University of Pennsylvania postdoc Erin Walk PhD ’24.

    Here, Lieberman describes the team’s research and insights, and offers recommendations that could result in more effective climate advocacy.

    Q: What are the key findings — and any surprises — of your recent work on climate attitudes among the U.S. population?

    A: A big question at the COP29 climate talks in Baku, Azerbaijan, was: Who will pay the trillions of dollars needed to help lower-income countries adapt to climate change? During past meetings, global leaders have come to an increasing consensus that the wealthiest countries should pay, but there has been little follow-through on commitments. In countries like the United States, popular opinion about such policies can weigh heavily on politicians’ minds, as citizens focus on their own challenges at home.

    Prime Minister Gaston Browne of Antigua and Barbuda is one of many who view such transfers as a matter of moral responsibility, explaining that many rich countries see climate finance as “a random act of charity … not recognizing that they have a moral obligation to provide funding, especially the historical emitters and even those who currently have large emissions.”

    In our study, we set out to measure American attitudes toward climate-related foreign aid, and explicitly to test the impact of this particular moral responsibility narrative. We did this on an experimental basis, so subjects were randomly assigned to receive different messages.

    One message emphasized what we call a “climate justice” frame, arguing that Americans should contribute to helping poor countries because of the United States’ disproportionate role in the emissions of greenhouse gases that have led to global warming. That message had a positive impact on the extent to which citizens supported the use of foreign aid for climate adaptation in poor countries. However, when we looked at who was actually moved by the message, we found that the effect was larger and statistically significant only among Democrats, not among Republicans.

    We were surprised that a message emphasizing solidarity — the idea that “we are all in this together” — had no overall effect on citizen attitudes, whether Democrat or Republican.

    Q: What are your recommendations toward addressing the attitudes on global climate policies within the U.S.?

    A: First, given limited budgets and attention for communications campaigns, our research certainly suggests that emphasizing a bit of blaming and shaming is more powerful than more diffuse messages of shared responsibility.

    But our research also emphasized how critically important it is to find new ways to communicate with Republicans about climate change and about foreign aid. Republicans were overwhelmingly less supportive of climate aid, and even from that low baseline, a message that moved Democrats had a much more mixed reception among Republicans. Researchers and those working on the front lines of climate communications need to do more to better understand Republican perspectives. Younger Republicans, for example, might be more movable on key climate policies.

    Q: With an incoming Trump administration, what are some of the specific hurdles and/or opportunities we face in garnering U.S. public support for international climate negotiations?

    A: Not only did Trump demonstrate his disdain for international action on climate change by withdrawing from the Paris agreement during his first term in office, but he has indicated his intention to double down on such strategies in his second term. And the idea that he would support assistance for the world’s poorest countries harmed by climate change? This seems unlikely. Because we find Republican public opinion so firmly in line with these perspectives, frankly, it is hard to be optimistic.

    Those Americans concerned with the effects of climate change may need to look to state-level, non-government, corporate, and more global organizations to support climate justice efforts.

    Q: Are there any other takeaways you’d like to share?

    A: Those working in the climate change area may need to rethink how we talk and message about the challenges the world faces. Right now, almost anything that sounds like “climate change” is likely to be rejected by Republican leaders and large segments of American society. Our approach of experimenting with different types of messages is a relatively low-cost strategy for identifying more promising strategies, targeted at Americans and at citizens in other wealthy countries.

    But our study, in line with other work, also demonstrates that partisanship — identifying as a Republican or Democrat — is by far the strongest predictor of attitudes toward climate aid. While climate justice messaging can move attitudes slightly, the effects are still modest relative to the contribution of party identification itself. Just as Republican party elites were once persuaded to take leadership in the global fight against HIV and AIDS, a similar challenge lies ahead for climate aid.

  • Minimizing the carbon footprint of bridges and other structures

    Awed as a young child by the majesty of the Golden Gate Bridge in San Francisco, civil engineer and MIT Morningside Academy for Design (MAD) Fellow Zane Schemmer has retained his fascination with bridges: what they look like, why they work, and how they’re designed and built.

    He weighed the choice between architecture and engineering when heading off to college but, motivated by the why and how of structural engineering, selected the latter. Now he incorporates design as an iterative process in writing algorithms that balance the forces involved in discrete portions of a structure to create an overall design that optimizes function, minimizes carbon footprint, and still produces a manufacturable result.

    While this may sound like an obvious goal in structural design, it’s not. It’s new. It’s a more holistic way of looking at the design process that can optimize even down to the materials, angles, and number of elements in the nodes or joints that connect the larger components of a building, bridge, or tower.

    According to Schemmer, there hasn’t been much progress on optimizing structural design to minimize embodied carbon, and the work that exists often results in designs that are “too complex to be built in real life,” he says. The embodied carbon of a structure is the total carbon dioxide emissions of its life cycle: from the extraction or manufacture of its materials to their transport and use, through the demolition of the structure and disposal of the materials. Schemmer, who works with Josephine V. Carstensen, the Gilbert W. Winslow Career Development Associate Professor of Civil and Environmental Engineering at MIT, is focusing on the portion of that cycle that runs through construction.

    In September, at the IASS 2024 symposium “Redefining the Art of Structural Design” in Zurich, Schemmer and Carstensen presented their work on discrete topology optimization algorithms that can minimize the embodied carbon in a bridge or other structure by up to 20 percent. This comes through materials selection that considers not only a material’s appearance and its ability to get the job done, but also the ease of procurement, its proximity to the building site, and the carbon embodied in its manufacture and transport.

    “The real novelty of our algorithm is its ability to consider multiple materials in a highly constrained solution space to produce manufacturable designs with a user-specified force flow,” Schemmer says. “Real-life problems are complex and often have many constraints associated with them. In traditional formulations, it can be difficult to have a long list of complicated constraints. Our goal is to incorporate these constraints to make it easier to take our designs out of the computer and create them in real life.”

    Take, for instance, a steel tower, which could be a “super lightweight, efficient design solution,” Schemmer explains. Because steel is so strong, you don’t need as much of it compared to concrete or timber to build a big building. But steel is also very carbon-intensive to produce and transport. Shipping it across the country, or especially from a different continent, can sharply increase its embodied carbon price tag. Schemmer’s topology optimization will replace some of the steel with timber elements, or decrease the amount of steel in other elements, to create a hybrid structure that functions effectively and minimizes the carbon footprint. “This is why using the same steel in two different parts of the world can lead to two different optimized designs,” he explains.

    Schemmer, who grew up in the mountains of Utah, earned a BS and MS in civil and environmental engineering from the University of California at Berkeley, where his graduate work focused on seismic design. He describes that education as providing a “very traditional, super-strong engineering background that tackled some of the toughest engineering problems,” along with knowledge of structural engineering’s traditions and current methods.

    But at MIT, he says, a lot of the work he sees “looks at removing the constraints of current societal conventions of doing things, and asks how could we do things if it was in a more ideal form; what are we looking at then? Which I think is really cool,” he says. “But I think sometimes too, there’s a jump between the most-perfect version of something and where we are now, that there needs to be a bridge between those two. And I feel like my education helps me see that bridge.”

    The bridge he’s referring to is the topology optimization algorithms that make good designs better in terms of decreased global warming potential.

    “That’s where the optimization algorithm comes in,” Schemmer says. “In contrast to a standard structure designed in the past, the algorithm can take the same design space and come up with a much more efficient material usage that still meets all the structural requirements, is up to code, and has everything we want from a safety standpoint.”

    That’s also where the MAD Design Fellowship comes in. The program provides yearlong fellowships with full financial support to graduate students from across the Institute, who network with each other, with the MAD faculty, and with outside speakers who use design in new ways in a surprising variety of fields. This helps the fellows gain a better understanding of how to use iterative design in their own work.

    “Usually people think of their own work like, ‘Oh, I had this background. I’ve been looking at this one way for a very long time.’ And when you look at it from an outside perspective, I think it opens your mind to be like, ‘Oh my God. I never would have thought about doing this that way. Maybe I should try that.’ And then we can move to new ideas, new inspiration for better work,” Schemmer says.

    He chose civil and structural engineering over architecture some seven years ago, but says that “100 years ago, I don’t think architecture and structural engineering were two separate professions. I think there was an understanding of how things looked and how things worked, and it was merged together. Maybe from an efficiency standpoint, it’s better to have things done separately. But I think there’s something to be said for having knowledge about how the whole system works, potentially more intermingling between the free-form architectural design and the mathematical design of a civil engineer. Merging it back together, I think, has a lot of benefits.”

    Which brings us back to the Golden Gate Bridge, Schemmer’s longtime favorite. You can still hear that excited 3-year-old in his voice when he talks about it.

    “It’s so iconic,” he says. “It’s connecting these two spits of land that just rise straight up out of the ocean. There’s this fog that comes in and out a lot of days. It’s a really magical place, from the size of the cable strands and everything. It’s just, ‘Wow.’ People built this over 100 years ago, before the existence of a lot of the computational tools that we have now. So, all the math, everything in the design, was all done by hand and from the mind. Nothing was computerized, which I think is crazy to think about.”

    As Schemmer continues work on his doctoral degree at MIT, the MAD fellowship will expose him to many more awe-inspiring ideas in other fields, leading him to combine some of them with his engineering knowledge to design better ways of building bridges and other structures.
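
    The steel-versus-timber tradeoff described above can be made concrete with a toy calculation. This is only an illustration of the idea behind multi-material selection, not the authors’ discrete topology optimization algorithm, and the material strengths, densities, and carbon factors are rough assumed values:

```python
# Toy multi-material selection for one axially loaded element: size the
# minimum cross-section in each candidate material, then pick the material
# with the lowest embodied carbon. All property values are illustrative
# assumptions, not data from Schemmer and Carstensen's work.

MATERIALS = {
    #          strength (Pa), density (kg/m^3), embodied carbon (kgCO2e/kg)
    "steel":  (355e6, 7850.0, 1.5),
    "timber": (24e6,   500.0, 0.4),
}

def element_carbon(force_n: float, length_m: float, material: str) -> float:
    """Embodied carbon (kgCO2e) of a bar sized to carry force_n in tension."""
    strength, density, carbon = MATERIALS[material]
    area = force_n / strength            # minimum cross-section, m^2
    mass = area * length_m * density     # kg of material needed
    return mass * carbon

def pick_material(force_n: float, length_m: float) -> str:
    """Choose the candidate material with the lowest embodied carbon."""
    return min(MATERIALS, key=lambda m: element_carbon(force_n, length_m, m))

# A 3 m element carrying 100 kN: with these assumed carbon factors the
# lighter-carbon timber section wins despite needing more material volume.
print(pick_material(force_n=100e3, length_m=3.0))
```

    Raising the steel carbon factor (for example, to model long-distance shipping) changes which elements the selection favors, which is the intuition behind “the same steel in two different parts of the world can lead to two different optimized designs.”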

  • Coffee fix: MIT students decode the science behind the perfect cup

    Elaine Jutamulia ’24 took a sip of coffee with a few drops of anise extract. It was her second try.

    “What do you think?” asked Omar Orozco, standing at a lab table in MIT’s Breakerspace, surrounded by filters, brewing pots, and other coffee paraphernalia.

    “I think when I first tried it, it was still pretty bitter,” Jutamulia said thoughtfully. “But I think now that it’s steeped for a little bit — it took out some of the bitterness.”

    Jutamulia and current MIT senior Orozco were part of class 3.000 (Coffee Matters: Using the Breakerspace to Make the Perfect Cup), a new MIT course that debuted in spring 2024. The class combines lectures on chemistry and the science of coffee with hands-on experimentation and group projects. Their project explored how additives such as anise, salt, and chili oil influence coffee extraction — the process of dissolving flavor compounds from ground coffee into water — to improve taste and correct common brewing errors.

    Alongside tasting, they used an infrared spectrometer to identify the chemical compounds in their coffee samples that contribute to flavor. Does anise make bitter coffee smoother? Could chili oil balance the taste?

    “Generally speaking, if we could make a recommendation, that’s what we’re trying to find,” Orozco said.

    A three-unit “discovery class” designed to help first-year students explore majors, 3.000 was widely popular, enrolling more than 50 students. Its success was driven by the beverage at its core and the class’s hands-on approach, which pushes students to ask and answer questions they might not have otherwise.

    For aeronautics and astronautics majors Gabi McDonald and McKenzie Dinesen, coffee was the draw, but the class encouraged them to experiment and think in new ways. “It’s easy to drop people like us in, who love coffee, and, ‘Oh my gosh, there’s this class where we can go make coffee half the time and try all different kinds of things?’” McDonald says.

    Percolating knowledge

    The class pairs weekly lectures on topics such as coffee chemistry, the anatomy and composition of a coffee bean, the effects of roasting, and the brewing process with tasting sessions — students sample coffee brewed from different beans, roasts, and grinds. In the MIT Breakerspace, a new space on campus conceived and managed by the Department of Materials Science and Engineering (DMSE), students use equipment such as a digital optical microscope to examine ground coffee particles and a scanning electron microscope, which shoots beams of electrons at samples to reveal cross-sections of beans in stunning detail.

    Once students learn to operate the instruments for guided tasks, they form groups and design their own projects.

    “The driver for those projects is some question they have about coffee raised by one of the lectures or the tasting sessions, or just something they’ve always wanted to know,” says DMSE Professor Jeffrey Grossman, who designed and teaches the class. “Then they’ll use one or more of these pieces of equipment to shed some light on it.”

    Grossman traces the origins of the class to his initial vision for the Breakerspace, a laboratory for materials analysis and a lounge for MIT undergraduates. Opened in November 2023, the space gives students hands-on experience with materials science and engineering, an interdisciplinary field combining chemistry, physics, and engineering to probe the composition and structure of materials.

    “The world is made of stuff, and these are the tools to understand that stuff and bring it to life,” says Grossman. So he envisioned a class that would give students an “exploratory, inspiring nudge.”

    “Then the question wasn’t the pedagogy, it was, ‘What’s the hook?’ In materials science, there are a lot of directions you could go, but if you have one that inspires people because they know it and maybe like it already, then that’s exciting.”

    Cup of ambition

    That hook, of course, was coffee, the second-most-consumed beverage after water. It captured students’ imagination and motivated them to push boundaries.

    Orozco brought a fair amount of coffee knowledge to the class. In 2023, he taught in Mexico through the MISTI Global Teaching Labs program, where he toured several coffee farms and acquired a deeper knowledge of the beverage. He learned, for example, that black coffee, contrary to general American opinion, isn’t naturally bitter; bitterness arises from certain compounds that develop during the roasting process.

    “If you properly brew it with the right beans, it actually tastes good,” says Orozco, a humanities and engineering major. A year later, in 3.000, he expanded his understanding of making a good brew, particularly through the group project with Jutamulia and other students to fix bad coffee.

    The group prepared a control sample of “perfectly brewed” coffee — based on taste, coffee-to-water ratio, and other standards covered in class — alongside coffee that was under-extracted and over-extracted. Under-extracted coffee, made with water that isn’t hot enough or brewed for too short a time, tastes sharp or sour. Over-extracted coffee, brewed with too much coffee or for too long, tastes bitter.

    The coffee samples were dosed with additives and analyzed using Fourier transform infrared (FTIR) spectroscopy, which measures how coffee absorbs infrared light to identify flavor-related compounds. Jutamulia examined FTIR readings taken from a sample with lime juice to see how the citric acid influenced its chemical profile.

    “Can we find any correlation between what we saw and the existing known measurements of citric acid?” asks Jutamulia, who studied computation and cognition at MIT, graduating last May.

    Another group dove into coffee storage, questioning why conventional wisdom advises against freezing.

    “We just wondered why that’s the case,” says electrical engineering and computer science major Noah Wiley, a coffee enthusiast with his own espresso machine.

    The team compared methods like freezing brewed coffee, freezing coffee grounds, and grinding whole beans after freezing, evaluating their impact on flavor and chemical composition.

    “Then we’re going to see which ones taste good,” says Wiley. The team used a class coffee review sheet to record attributes like acidity, bitterness, sweetness, and overall flavor, pairing the results with FTIR analysis to determine how storage affected taste.

    Wiley acknowledged that “good” is subjective. “Sometimes there’s a group consensus. I think people like fuller coffee, not watery,” he says.

    Other student projects compared caffeine levels in different coffee types, analyzed the effect of microwaving coffee on its chemical composition and flavor, and investigated the differences between authentic and counterfeit coffee beans.

    “We gave the students some papers to look at in case they were interested,” says Justin Lavallee, Breakerspace manager and co-teacher of the class. “But mostly we told them to focus on something they wanted to learn more about.”

    Drip, drip, drip

    Beyond answering specific questions about coffee, both students and teachers gained deeper insights into the beverage.

    “Coffee is a complicated material. There are thousands of molecules in the beans, which change as you roast and extract them,” says Grossman. “The number of ways you can engineer this collection of molecules — it’s profound, ranging from where and how the coffee’s grown to how the cherries are then treated to get the beans to how the beans are roasted and ground to the brewing method you use.”

    Dinesen learned firsthand, discovering, for example, that darker roasts have less caffeine than lighter roasts, puncturing a common misconception. “You can vary coffee so much — just with the roast of the bean, the size of the ground,” she says. “It’s so easily manipulatable, if that’s a word.”

    In addition to learning about the science and chemistry behind coffee, Dinesen and McDonald gained new brewing techniques, like using a pour-over cone. The pair even incorporated coffee making and testing into their study routine, brewing coffee while tackling problem sets for another class.

    “I would put my pour-over cone in my backpack with a Ziploc bag full of grounds, and we would go to the Student Center and pull out the cone, a filter, and the coffee grounds,” McDonald says. “And then we would make pour-overs while doing a P-set. We tested different amounts of water, too. It was fun.”

    Tony Chen, a materials science and engineering major, reflected on 3.000’s title — “Using the Breakerspace to Make the Perfect Cup” — and whether making a perfect cup is possible. “I don’t think there’s one perfect cup because each person has their own preferences. I don’t think I’ve gotten to mine yet,” he says.

    Enthusiasm for coffee’s complexity and the discovery process was exactly what Grossman hoped to inspire in his students. “The best part for me was also just seeing them developing their own sense of curiosity,” he says.

    He recalled a moment early in the class when students, after being given a demo of the optical microscope, saw the surface texture of a magnified coffee bean, the mottled shades of color, and the honeycomb-like pattern of tiny irregular cells.

    “They’re like, ‘Wait a second. What if we add hot water to the grounds while it’s under the microscope? Would we see the extraction?’ So, they got hot water and some ground coffee beans, and lo and behold, it looked different. They could see the extraction right there,” Grossman says. “It’s like they have an idea that’s inspired by the learning, and they go and try it. I saw that happen many, many times throughout the semester.”
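
    The under- versus over-extraction distinction the students worked with can be quantified. A minimal sketch, using the oft-quoted Specialty Coffee Association rule of thumb (extraction yield of roughly 18–22 percent of the dry dose is “ideal”) as an assumption — this is general coffee-science convention, not a standard taken from the 3.000 class materials:

```python
# Extraction yield: what fraction of the dry coffee dose ended up dissolved
# in the cup. The 18-22% window is the commonly cited Specialty Coffee
# Association rule of thumb, used here purely as an illustrative assumption.

def extraction_yield(tds_percent: float, beverage_g: float, dose_g: float) -> float:
    """Percent of the dry dose dissolved in the brewed beverage."""
    dissolved_g = beverage_g * tds_percent / 100.0
    return 100.0 * dissolved_g / dose_g

def classify(yield_percent: float) -> str:
    if yield_percent < 18.0:
        return "under-extracted (sharp, sour)"
    if yield_percent > 22.0:
        return "over-extracted (bitter)"
    return "well extracted"

# 20 g of grounds brewed into a 320 g cup measuring 1.25% total dissolved
# solids: 4 g of the dose dissolved, i.e., a 20% extraction yield.
y = extraction_yield(tds_percent=1.25, beverage_g=320.0, dose_g=20.0)
print(round(y, 1), classify(y))
```

    The same arithmetic shows why brewing too short or too cool reads as sour (less of the dose dissolves, pushing the yield below the window) while brewing too long pushes it above.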

  • An abundant phytoplankton feeds a global network of marine microbes

    One of the hardest-working organisms in the ocean is the tiny, emerald-tinged Prochlorococcus marinus. These single-celled “picoplankton,” which are smaller than a human red blood cell, can be found in staggering numbers throughout the ocean’s surface waters, making Prochlorococcus the most abundant photosynthesizing organism on the planet. (Collectively, Prochlorococcus fix as much carbon as all the crops on land.) Scientists continue to find new ways that the little green microbe is involved in the ocean’s cycling and storage of carbon.Now, MIT scientists have discovered a new ocean-regulating ability in the small but mighty microbes: cross-feeding of DNA building blocks. In a study appearing today in Science Advances, the team reports that Prochlorococcus shed these extra compounds into their surroundings, where they are then “cross-fed,” or taken up by other ocean organisms, either as nutrients, energy, or for regulating metabolism. Prochlorococcus’ rejects, then, are other microbes’ resources.What’s more, this cross-feeding occurs on a regular cycle: Prochlorococcus tend to shed their molecular baggage at night, when enterprising microbes quickly consume the cast-offs. For a microbe called SAR11, the most abundant bacteria in the ocean, the researchers found that the nighttime snack acts as a relaxant of sorts, forcing the bacteria to slow down their metabolism and effectively recharge for the next day.Through this cross-feeding interaction, Prochlorococcus could be helping many microbial communities to grow sustainably, simply by giving away what it doesn’t need. And they’re doing so in a way that could set the daily rhythms of microbes around the world.“The relationship between the two most abundant groups of microbes in ocean ecosystems has intrigued oceanographers for years,” says co-author and MIT Institute Professor Sallie “Penny” Chisholm, who played a role in the discovery of Prochlorococcus in 1986. 
    “Now we have a glimpse of the finely tuned choreography that contributes to their growth and stability across vast regions of the oceans.”

    Given that Prochlorococcus and SAR11 suffuse the surface oceans, the team suspects that the exchange of molecules from one to the other could amount to one of the major cross-feeding relationships in the ocean, making it an important regulator of the ocean carbon cycle.

    “By looking at the details and diversity of cross-feeding processes, we can start to unearth important forces that are shaping the carbon cycle,” says the study’s lead author, Rogier Braakman, a research scientist in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS).

    Other MIT co-authors include Brandon Satinsky, Tyler O’Keefe, Shane Hogle, Jamie Becker, Robert Li, Keven Dooley, and Aldo Arellano, along with Krista Longnecker, Melissa Soule, and Elizabeth Kujawinski of Woods Hole Oceanographic Institution (WHOI).

    Spotting castaways

    Cross-feeding occurs throughout the microbial world, though the process has mainly been studied in close-knit communities. In the human gut, for instance, microbes are in close proximity and can easily exchange and benefit from shared resources.

    By comparison, Prochlorococcus are free-floating microbes that are regularly tossed and mixed through the ocean’s surface layers. While scientists assume that the plankton are involved in some amount of cross-feeding, exactly how this occurs, and who would benefit, have historically been challenging to probe; any material that Prochlorococcus cast away would be present at vanishingly low concentrations and be exceedingly difficult to measure.

    But in work published in 2023, Braakman teamed up with scientists at WHOI, who pioneered ways to measure small organic compounds in seawater. In the lab, they grew various strains of Prochlorococcus under different conditions and characterized what the microbes released. 
    They found that among the major “exudants,” or released molecules, were purines and pyridines, which are molecular building blocks of DNA. The molecules also happen to be nitrogen-rich — a fact that puzzled the team. Prochlorococcus are mainly found in ocean regions that are low in nitrogen, so it was assumed they’d want to retain any and all nitrogen-containing compounds they could. Why, then, were they instead throwing such compounds away?

    Global symphony

    In their new study, the researchers took a deep dive into the details of Prochlorococcus’ cross-feeding and how it influences various types of ocean microbes.

    They set out to study how Prochlorococcus use purine and pyridine in the first place, before expelling the compounds into their surroundings. They compared published genomes of the microbes, looking for genes that encode purine and pyridine metabolism. Tracing the genes forward through the genomes, the team found that once the compounds are produced, they are used to make DNA and replicate the microbes’ genome. Any leftover purine and pyridine is recycled and used again, though a fraction is ultimately released into the environment. Prochlorococcus appear to make the most of the compounds, then cast off what they can’t.

    The team also looked to gene expression data and found that genes involved in recycling purine and pyridine peak several hours after the recognized peak in genome replication that occurs at dusk. The question then was: What could be benefiting from this nightly shedding?

    For this, the team looked at the genomes of more than 300 heterotrophic microbes — organisms that consume organic carbon rather than making it themselves through photosynthesis. They suspected that such carbon-feeders could be likely consumers of Prochlorococcus’ organic rejects. 
    They found most of the heterotrophs contained genes that take up either purine or pyridine, or in some cases both, suggesting microbes have evolved along different paths in terms of how they cross-feed.

    The group zeroed in on one purine-preferring microbe, SAR11, as it is the most abundant heterotrophic microbe in the ocean. When they compared the genes across different strains of SAR11, they found that various types use purines for different purposes, from simply taking them up and using them intact to breaking them down for their energy, carbon, or nitrogen. What could explain the diversity in how the microbes were using Prochlorococcus’ cast-offs?

    It turns out the local environment plays a big role. Braakman and his collaborators performed a metagenome analysis in which they compared the collectively sequenced genomes of all microbes in over 600 seawater samples from around the world, focusing on SAR11 bacteria. Metagenome sequences were collected alongside measurements of various environmental conditions and the geographic locations in which they were found. This analysis showed that the bacteria gobble up purine for its nitrogen when the nitrogen in seawater is low, and for its carbon or energy when nitrogen is in surplus — revealing the selective pressures shaping these communities in different ocean regimes.

    “The work here suggests that microbes in the ocean have developed relationships that advance their growth potential in ways we don’t expect,” says co-author Kujawinski.

    Finally, the team carried out a simple experiment in the lab to see if they could directly observe a mechanism by which purine acts on SAR11. They grew the bacteria in cultures, exposed them to various concentrations of purine, and unexpectedly found that it causes them to slow down their normal metabolic activities and even growth. 
    However, when the researchers put these same cells under environmentally stressful conditions, the cells continued growing strong and healthy, as if the metabolic pausing triggered by purines had primed them for growth, helping them avoid the effects of the stress.

    “When you think about the ocean, where you see this daily pulse of purines being released by Prochlorococcus, this provides a daily inhibition signal that could be causing a pause in SAR11 metabolism, so that the next day when the sun comes out, they are primed and ready,” Braakman says. “So we think Prochlorococcus is acting as a conductor in the daily symphony of ocean metabolism, and cross-feeding is creating a global synchronization among all these microbial cells.”

    This work was supported, in part, by the Simons Foundation and the National Science Foundation.

  • in

    Surface-based sonar system could rapidly map the ocean floor at high resolution

    On June 18, 2023, the Titan submersible was about an hour-and-a-half into its two-hour descent to the Titanic wreckage at the bottom of the Atlantic Ocean when it lost contact with its support ship. This loss of communication set off a frantic search for the tourist submersible and its five passengers, located about two miles below the ocean’s surface.

    Deep-ocean search and recovery is one of the many missions of military services like the U.S. Coast Guard Office of Search and Rescue and the U.S. Navy Supervisor of Salvage and Diving. For this mission, the longest delays come from transporting search-and-rescue equipment via ship to the area of interest and comprehensively surveying that area. A search operation on the scale of that for Titan — which was conducted 420 nautical miles from the nearest port and covered 13,000 square kilometers, an area roughly twice the size of Connecticut — could take weeks to complete. The search area for Titan is considered relatively small, focused on the immediate vicinity of the Titanic. When the area is less well known, operations could take months. (A remotely operated underwater vehicle deployed by a Canadian vessel ended up finding the debris field of Titan on the seafloor, four days after the submersible had gone missing.)

    A research team from MIT Lincoln Laboratory and the MIT Department of Mechanical Engineering’s Ocean Science and Engineering lab is developing a surface-based sonar system that could accelerate the timeline for small- and large-scale search operations to days. Called the Autonomous Sparse-Aperture Multibeam Echo Sounder, the system scans at surface-ship rates while providing sufficient resolution to find objects and features in the deep ocean, without the time and expense of deploying underwater vehicles. 
The echo sounder — which features a large sonar array using a small set of autonomous surface vehicles (ASVs) that can be deployed via aircraft into the ocean — holds the potential to map the seabed at 50 times the coverage rate of an underwater vehicle and 100 times the resolution of a surface vessel.


    Autonomous Sparse-Aperture Multibeam Echo Sounder (Video: MIT Lincoln Laboratory)

    “Our array provides the best of both worlds: the high resolution of underwater vehicles and the high coverage rate of surface ships,” says co–principal investigator Andrew March, assistant leader of the laboratory’s Advanced Undersea Systems and Technology Group. “Though large surface-based sonar systems at low frequency have the potential to determine the materials and profiles of the seabed, they typically do so at the expense of resolution, particularly with increasing ocean depth. Our array can likely determine this information, too, but at significantly enhanced resolution in the deep ocean.”

    Underwater unknown

    Oceans cover 71 percent of Earth’s surface, yet more than 80 percent of this underwater realm remains undiscovered and unexplored. Humans know more about the surfaces of other planets and the moon than about the bottom of our oceans. High-resolution seabed maps would be useful not only for finding missing objects like ships or aircraft, but also for a host of other scientific applications: understanding Earth’s geology, improving forecasting of ocean currents and corresponding weather and climate impacts, uncovering archaeological sites, monitoring marine ecosystems and habitats, and identifying locations containing natural resources such as mineral and oil deposits.

    Scientists and governments worldwide recognize the importance of creating a high-resolution global map of the seafloor; the problem is that no existing technology can achieve meter-scale resolution from the ocean surface. The average depth of our oceans is approximately 3,700 meters. 
    However, today’s technologies capable of finding human-made objects on the seabed or identifying person-sized natural features — these include sonar, lidar, cameras, and gravitational field mapping — have a maximum range of less than 1,000 meters through water.

    Ships with large sonar arrays mounted on their hulls map the deep ocean by emitting low-frequency sound waves that bounce off the seafloor and return as echoes to the surface. Operation at low frequencies is necessary because water readily absorbs high-frequency sound waves, especially with increasing depth; however, such operation yields low-resolution images, with each image pixel representing an area the size of a football field. Resolution is also restricted because sonar arrays installed on large mapping ships already use all of the available hull space, capping the sonar beam’s aperture size. By contrast, sonars on autonomous underwater vehicles (AUVs) that operate at higher frequencies within a few hundred meters of the seafloor generate maps with each pixel representing one square meter or less, resulting in 10,000 times more pixels in that same football field–sized area. However, this higher resolution comes with trade-offs: AUVs are time-consuming and expensive to deploy in the deep ocean, limiting the amount of seafloor that can be mapped; they have a maximum range of about 1,000 meters before their high-frequency sound gets absorbed; and they move at slow speeds to conserve power. The area-coverage rate of AUVs performing high-resolution mapping is about 8 square kilometers per hour; surface vessels map the deep ocean at more than 50 times that rate.

    A solution surfaces

    The Autonomous Sparse-Aperture Multibeam Echo Sounder could offer a cost-effective approach to high-resolution, rapid mapping of the deep seafloor from the ocean’s surface. 
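    The coverage figures above can be put side by side with a little arithmetic. A minimal sketch, treating the article's approximate numbers (8 square kilometers per hour for AUVs, a surface vessel at 50 times that rate, and the 13,000-square-kilometer Titan search area) as given:

```python
# Back-of-the-envelope survey-time comparison using the article's
# approximate figures; these are illustrative, not system specs.

AUV_RATE_KM2_H = 8.0                    # AUV high-resolution mapping rate
SHIP_RATE_KM2_H = 50 * AUV_RATE_KM2_H   # surface vessels: ~50x faster

TITAN_SEARCH_KM2 = 13_000               # search area cited in the article

def survey_days(area_km2: float, rate_km2_h: float) -> float:
    """Continuous survey time in days at a constant coverage rate."""
    return area_km2 / rate_km2_h / 24

print(f"AUV:          {survey_days(TITAN_SEARCH_KM2, AUV_RATE_KM2_H):.0f} days")
print(f"Surface ship: {survey_days(TITAN_SEARCH_KM2, SHIP_RATE_KM2_H):.1f} days")
```

    At the quoted rates, an AUV would need roughly two months of continuous mapping to cover the Titan search area, while a platform with surface-ship coverage rates would finish in under two days, consistent with the weeks-to-days acceleration the article describes.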
    A collaborative fleet of about 20 ASVs, each hosting a small sonar array, effectively forms a single sonar array 100 times the size of a large sonar array installed on a ship. The large aperture achieved by the array (hundreds of meters) produces a narrow beam, which enables sound to be precisely steered to generate high-resolution maps at low frequency. Because very few sonars are installed relative to the array’s overall size (i.e., a sparse aperture), the cost is tractable.

    However, this collaborative and sparse setup introduces some operational challenges. First, for coherent 3D imaging, the relative position of each ASV’s sonar subarray must be accurately tracked through dynamic ocean-induced motions. Second, because sonar elements are not placed directly next to each other without any gaps, the array suffers from a lower signal-to-noise ratio and is less able to reject noise coming from unintended or undesired directions. To mitigate these challenges, the team has been developing a low-cost precision-relative navigation system and leveraging acoustic signal processing tools and new ocean-field estimation algorithms. The MIT campus collaborators are developing algorithms for data processing and image formation, especially to estimate depth-integrated water-column parameters. These enabling technologies will help account for complex ocean physics, spanning physical properties like temperature, dynamic processes like currents and waves, and acoustic propagation factors like sound speed.

    Processing for all required control and calculations could be completed either remotely or onboard the ASVs. For example, ASVs deployed from a ship or flying boat could be controlled and guided remotely from land via a satellite link or from a nearby support ship (with direct communications or a satellite link), and left to map the seabed for weeks or months at a time until maintenance is needed. 
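    The aperture-to-resolution relationship driving the design can be sketched with the standard diffraction approximation (beamwidth ≈ wavelength / aperture). The operating frequency and aperture sizes below are illustrative assumptions, not published specifications of the system:

```python
# Rough angular-resolution sketch for the sparse-aperture idea.
# The frequency (12 kHz) and aperture sizes are illustrative
# assumptions, not specs of the Lincoln Laboratory system.

SOUND_SPEED_M_S = 1500.0   # approximate speed of sound in seawater
FREQ_HZ = 12_000.0         # assumed low operating frequency
DEPTH_M = 3700.0           # average ocean depth, per the article

wavelength = SOUND_SPEED_M_S / FREQ_HZ  # 0.125 m

def footprint_m(aperture_m: float, depth_m: float = DEPTH_M) -> float:
    """Seafloor footprint of the main beam: depth * (wavelength / aperture).

    Uses the diffraction-limited beamwidth approximation
    theta ~ lambda / D for an aperture of size D.
    """
    return depth_m * (wavelength / aperture_m)

# Hull-mounted ship array vs. a ~100x larger distributed ASV aperture.
print(f"8 m hull array:  ~{footprint_m(8):.0f} m pixels")    # ~58 m
print(f"800 m ASV array: ~{footprint_m(800):.1f} m pixels")  # ~0.6 m
```

    Stretching the aperture from a hull-limited few meters to the hundreds of meters spanned by a distributed ASV fleet shrinks the seafloor footprint from tens of meters to roughly a meter at full ocean depth, which is the jump from ship-scale to AUV-scale resolution the article describes.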
    Sonar-return health checks and coarse seabed mapping would be conducted on board, while full, high-resolution reconstruction of the seabed would require a supercomputing infrastructure on land or on a support ship.

    “Deploying vehicles in an area and letting them map for extended periods of time without the need for a ship to return home to replenish supplies and rotate crews would significantly simplify logistics and operating costs,” says co–principal investigator Paul Ryu, a researcher in the Advanced Undersea Systems and Technology Group.

    Since beginning their research in 2018, the team has turned their concept into a prototype. Initially, the scientists built a scale model of a sparse-aperture sonar array and tested it in a water tank at the laboratory’s Autonomous Systems Development Facility. Then, they prototyped an ASV-sized sonar subarray and demonstrated its functionality in Gloucester, Massachusetts. In follow-on sea tests in Boston Harbor, they deployed an 8-meter array containing multiple subarrays equivalent to 25 ASVs locked together; with this array, they generated 3D reconstructions of the seafloor and a shipwreck. Most recently, the team fabricated, in collaboration with Woods Hole Oceanographic Institution, a first-generation, 12-foot-long, all-electric ASV prototype carrying a sonar array underneath. With this prototype, they conducted preliminary relative-navigation testing in Woods Hole, Massachusetts, and Newport, Rhode Island. Their full deep-ocean concept calls for approximately 20 such ASVs of a similar size, likely powered by wave or solar energy.

    This work was funded through Lincoln Laboratory’s internally administered R&D portfolio on autonomous systems. The team is now seeking external sponsorship to continue development of their ocean floor–mapping technology, which was recognized with a 2024 R&D 100 Award.

  • in

    In a unique research collaboration, students make the case for less e-waste

    Brought together as part of the Social and Ethical Responsibilities of Computing (SERC) initiative within the MIT Schwarzman College of Computing, a community of students known as SERC Scholars is collaborating to examine the most urgent problems humans face in the digital landscape.

    Each semester, students from all levels across MIT are invited to join a different topical working group led by a SERC postdoctoral associate. Each group delves into a specific issue — such as surveillance or data ownership — culminating in a final project presented at the end of the term.

    Typically, students complete the program with hands-on experience conducting research in a new cross-disciplinary field. However, one group of undergraduate and graduate students recently had the unique opportunity to enhance their resumes by becoming published authors of a case study about the environmental and climate justice implications of the electronics hardware life cycle.

    Although it’s not uncommon for graduate students to co-author case studies, it’s unusual for undergraduates to earn this opportunity — and for their audience to be other undergraduates around the world.

    “Our team was insanely interdisciplinary,” says Anastasia Dunca, a junior studying computer science and one of the co-authors. “I joined the SERC Scholars Program because I liked the idea of being part of a cohort from across MIT working on a project that utilized all of our skillsets. It also helps [undergraduates] learn the ins and outs of computing ethics research.”

    Case study co-author Jasmin Liu, an MBA student in the MIT Sloan School of Management, sees the program as a platform to learn about the intersection of technology, society, and ethics: “I met team members spanning computer science, urban planning, to art/culture/technology. I was excited to work with a diverse team because I know complex problems must be approached with many different perspectives. 
    Combining my background in humanities and business with the expertise of others allowed us to be more innovative and comprehensive.”

    Christopher Rabe, a former SERC postdoc who facilitated the group, says, “I let the students take the lead on identifying the topic and conducting the research.” His goal for the group was to challenge students across disciplines to develop a working definition of climate justice.

    From mining to e-waste

    The SERC Scholars’ case study, “From Mining to E-waste: The Environmental and Climate Justice Implications of the Electronics Hardware Life Cycle,” was published by the MIT Case Studies in Social and Ethical Responsibilities of Computing.

    The ongoing case studies series, which releases new issues twice a year on an open-source platform, is enabling undergraduate instructors worldwide to incorporate research-based education materials on computing ethics into their existing class syllabi.

    This particular case study broke down the electronics life cycle from mining to manufacturing, usage, and disposal. It offered an in-depth look at how this cycle promotes inequity in the Global South. Mining for the average of 60 minerals that power everyday devices leads to illegal deforestation, compromises air quality in the Amazon, and triggers armed conflict in Congo. Manufacturing poses proven health risks for both formal and informal workers, some of whom are child laborers.

    Life cycle assessment and circular economy are proposed as mechanisms for analyzing environmental and climate justice issues in the electronics life cycle. 
    Rather than posing solutions, the case study offers readers entry points for further discussion and for assessing their own individual responsibility as producers of e-waste.

    Crufting and crafting a case study

    Dunca joined Rabe’s working group, intrigued by the invitation to conduct a rigorous literature review examining issues like data center resource and energy use, manufacturing waste, ethical issues with AI, and climate change. Rabe quickly realized that a common thread among all participants was an interest in understanding and reducing e-waste and its impact on the environment.

    “I came in with the idea of us co-authoring a case study,” Rabe says. However, the writing-intensive process was initially daunting to those students who were used to conducting applied research. Once Rabe created sub-groups with discrete tasks, the steps for researching, writing, and iterating a case study became more approachable.

    For Ellie Bultena, an undergraduate student studying linguistics and philosophy and a contributor to the study, that meant conducting field research on the loading dock of MIT’s Stata Center, where students and faculty go “crufting” through piles of clunky printers, broken computers, and used lab equipment discarded by the Institute’s labs, departments, and individual users.

    Although not a formally sanctioned activity on campus, “crufting” is the act of gleaning usable parts from these junk piles to be repurposed into new equipment or art. 
    Bultena’s respondents, who opted to remain anonymous, said that MIT could do better when it comes to the amount of e-waste generated, and suggested that formal strategies could be implemented to encourage community members to repair equipment more easily or recycle more formally.

    Rabe, now an education program director at the MIT Environmental Solutions Initiative, is hopeful that through the Zero-Carbon Campus Initiative, which commits MIT to eliminating all direct emissions by 2050, MIT will ultimately become a model for other higher education institutions.

    Although the group lacked the time and resources to travel to the communities in the Global South that they profiled in their case study, members leaned into exhaustive secondary research, collecting data on how some countries are irresponsibly dumping e-waste. Others, in contrast, have developed alternative solutions that can be duplicated elsewhere and scaled.

    “We source materials, manufacture them, and then throw them away,” Lelia Hampton says. A PhD candidate in electrical engineering and computer science and another co-author, Hampton jumped at the opportunity to serve in a writing role, bringing together the sub-groups’ research findings. “I’d never written a case study, and it was exciting. Now I want to write 10 more.”

    The content directly informed Hampton’s dissertation research, which “looks at applying machine learning to climate justice issues such as urban heat islands.” She says that writing a case study accessible to general audiences upskilled her for the nonprofit organization she’s determined to start. 
    “It’s going to provide communities with free resources and data needed to understand how they are impacted by climate change and begin to advocate against injustice,” Hampton explains.

    Dunca, Liu, Rabe, Bultena, and Hampton are joined on the case study by fellow authors Mrinalini Singha, a graduate student in the Art, Culture, and Technology program; Sungmoon Lim, a graduate student in urban studies and planning and EECS; Lauren Higgins, an undergraduate majoring in political science; and Madeline Schlegal, a Northeastern University co-op student.

    Taking the case study to classrooms around the world

    Although PhD candidates have contributed to previous case studies in the series, this publication is the first to be co-authored with MIT undergraduates. As with any other peer-reviewed journal, before publication the SERC Scholars’ case study was anonymously reviewed by senior scholars drawn from various fields.

    The series editor, David Kaiser, also served as one of SERC’s inaugural associate deans and helped shape the program. “The case studies, by design, are short, easy to read, and don’t take up lots of time,” Kaiser explains. “They are gateways for students to explore, and instructors can cover a topic that has likely already been on their mind.” This semester, Kaiser, the Germeshausen Professor of the History of Science and a professor of physics, is teaching STS.004 (Intersections: Science, Technology, and the World), an undergraduate introduction to the field of science, technology, and society. The last month of the semester has been dedicated wholly to SERC case studies, one of which is “From Mining to E-Waste.”

    Hampton was visibly moved to hear that the case study is being used at MIT, but also by some of the 250,000 visitors to the SERC platform, many of whom are based in the Global South and directly impacted by the issues she and her cohort researched. 
    “Many students are focused on climate, whether through computer science, data science, or mechanical engineering. I hope that this case study educates them on environmental and climate aspects of e-waste and computing.”

  • in

    Enabling a circular economy in the built environment

    The amount of waste generated by the construction sector underscores an urgent need for embracing circularity — a sustainable model that aims to minimize waste and maximize material efficiency through recovery and reuse — in the built environment: 600 million tons of construction and demolition waste was produced in the United States alone in 2018, with 820 million tons reported in the European Union and an excess of 2 billion tons annually in China.

    This significant resource loss embedded in our current industrial ecosystem marks a linear economy that operates on a “take-make-dispose” model of construction; in contrast, the “make-use-reuse” approach of a circular economy offers an important opportunity to reduce environmental impacts.

    A team of MIT researchers has begun to assess what may be needed to spur widespread circular transition within the built environment in a new open-access study that aims to understand stakeholders’ current perceptions of circularity and quantify their willingness to pay.

    “This paper acts as an initial endeavor into understanding what the industry may be motivated by, and how integration of stakeholder motivations could lead to greater adoption,” says lead author Juliana Berglund-Brown, a PhD student in the Department of Architecture at MIT.

    Considering stakeholders’ perceptions

    Three different stakeholder groups from North America, Europe, and Asia — material suppliers, design and construction teams, and real estate developers — were surveyed by the research team, which also comprises Akrisht Pandey ’23; Fabio Duarte, associate director of the MIT Senseable City Lab; Raquel Ganitsky, fellow in the Sustainable Real Estate Development Action Program; Randolph Kirchain, co-director of the MIT Concrete Sustainability Hub; and Siqi Zheng, the STL Champion Professor of Urban and Real Estate Sustainability in the Department of Urban Studies and Planning.

    Despite growing awareness of reuse practice among construction industry stakeholders, circular 
    practices have yet to be implemented at scale — attributable to many factors that influence the intersection of construction needs with government regulations and the economic interests of real estate developers.

    The study notes that perceived barriers to circular adoption differ based on industry role, with lack of both client interest and standardized structural assessment methods identified as the primary concerns of design and construction teams, while the largest deterrents for material suppliers are logistics complexity and supply uncertainty. Real estate developers, on the other hand, are chiefly concerned with higher costs and structural assessment. Yet encouragingly, respondents expressed willingness to absorb higher costs, with developers indicating readiness to pay an average of 9.6 percent higher construction costs for a minimum 52.9 percent reduction in embodied carbon — and all stakeholders highly favor the potential of incentives like tax exemptions to help with cost premiums.

    Next steps to encourage circularity

    The findings highlight the need for further conversation between design teams and developers, as well as for additional exploration into potential solutions to practical challenges. “The thing about circularity is that there is opportunity for a lot of value creation, and subsequently profit,” says Berglund-Brown. “If people are motivated by cost, let’s provide a cost incentive, or establish strategies that have one.”

    When it comes to motivating reasons to adopt circularity practices, the study also found trends emerging by industry role. 
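    For a rough sense of scale, the study's willingness-to-pay figures can be restated as an implied carbon price. A minimal sketch: the project budget and baseline embodied carbon below are hypothetical values chosen for illustration, not numbers from the study:

```python
# Illustrative willingness-to-pay arithmetic using the survey's
# headline figures. The budget and baseline embodied carbon are
# hypothetical assumptions for scale, not values from the study.

premium = 0.096        # developers' average accepted cost premium
carbon_cut = 0.529     # minimum embodied-carbon reduction expected

budget_usd = 10_000_000    # assumed project construction cost
baseline_tco2e = 5_000     # assumed baseline embodied carbon (tCO2e)

extra_cost = budget_usd * premium        # $960,000 premium
avoided = baseline_tco2e * carbon_cut    # 2,645 tCO2e avoided
print(f"Premium paid:   ${extra_cost:,.0f}")
print(f"Carbon avoided: {avoided:,.0f} tCO2e")
print(f"Implied price:  ${extra_cost / avoided:,.0f} per tCO2e")
```

    Under these assumed project numbers, the accepted premium works out to a few hundred dollars per ton of embodied CO2 avoided; the real figure would depend entirely on the project's actual budget and carbon baseline.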
    Future net-zero goals influence developers as well as design and construction teams, with government regulation the third-most frequently named reason across all respondent types.

    “The construction industry needs a market driver to embrace circularity,” says Berglund-Brown. “Be it carrots or sticks, stakeholders require incentives for adoption.”

    The effect of policy in motivating change cannot be overstated, with major strides made in low-operational-carbon building design after policy restricting emissions was introduced, such as Local Law 97 in New York City and the Building Emissions Reduction and Disclosure Ordinance in Boston. These pieces of policy, and their results, can serve as models for embodied-carbon reduction policy elsewhere.

    Berglund-Brown suggests that municipalities might initiate ordinances requiring buildings to be deconstructed, which would allow components to be reused, curbing demolition methods that result in waste rather than salvage. Top-down ordinances could be one way to trigger a supply-chain shift toward reprocessing building materials that are typically deemed “end-of-life.”

    The study also identifies other challenges to implementing circularity at scale, including the risk associated with reusing materials in new buildings, and the disruption of status quo design practices.

    “Understanding the best way to motivate transition despite uncertainty is where our work comes in,” says Berglund-Brown. “Beyond that, researchers can continue to do a lot to alleviate risk — like developing standards for reuse.”

    Innovations that challenge the status quo

    Disrupting the status quo is not unusual for MIT researchers; other visionary work in construction circularity pioneered at MIT includes “a smart kit of parts” called Pixelframe. 
    This system for modular concrete reuse allows building elements to be disassembled and rebuilt several times, aiding deconstruction and reuse while maintaining material efficiency and versatility.

    Developed by MIT Climate and Sustainability Consortium (MCSC) Associate Director Caitlin Mueller’s research team, Pixelframe is designed to accommodate a wide range of applications, from housing to warehouses, with each of its interlocking precast concrete modules, called Pixels, assigned a material passport to enable tracking through its many life cycles.

    Mueller’s work demonstrates that circularity can work technically and logistically at the scale of the built environment — by designing specifically for disassembly, configuration, versatility, and upfront carbon and cost efficiency.

    “This can be built today. This is building-code-compliant today,” said Mueller of Pixelframe in a keynote speech at the recent MCSC Annual Symposium, which saw industry representatives and members of the MIT community come together to discuss scalable solutions to climate and sustainability problems. “We currently have the potential for high-impact carbon reduction as a compelling alternative to the business-as-usual construction methods we are used to.”

    Pixelframe was recently awarded a grant by the Massachusetts Clean Energy Center (MassCEC) to pursue commercialization, an important next step toward integrating innovations like this into a circular economy in practice. “It’s MassCEC’s job to make sure that these climate leaders have the resources they need to turn their technologies into successful businesses that make a difference around the world,” said MassCEC CEO Emily Reichart in a press release.

    Additional support for circular innovation has emerged thanks to a historic piece of climate legislation from the Biden administration. 
    The Environmental Protection Agency recently awarded a federal grant on the topic of advancing steel reuse to Berglund-Brown — whose PhD thesis focuses on scaling the reuse of structural heavy-section steel — and John Ochsendorf, the Class of 1942 Professor of Civil and Environmental Engineering and Architecture at MIT.

    “There is a lot of exciting upcoming work on this topic,” says Berglund-Brown. “To any practitioners reading this who are interested in getting involved — please reach out.”

    The study is supported in part by the MIT Climate and Sustainability Consortium.