More stories

  • Designing tiny filters to solve big problems

    For many industrial processes, the typical way to separate gases, liquids, or ions is with heat, using slight differences in boiling points to purify mixtures. These thermal processes account for roughly 10 percent of the energy use in the United States.MIT chemical engineer Zachary Smith wants to reduce costs and carbon footprints by replacing these energy-intensive processes with highly efficient filters that can separate gases, liquids, and ions at room temperature.In his lab at MIT, Smith is designing membranes with tiny pores that can filter tiny molecules based on their size. These membranes could be useful for purifying biogas, capturing carbon dioxide from power plant emissions, or generating hydrogen fuel.“We’re taking materials that have unique capabilities for separating molecules and ions with precision, and applying them to applications where the current processes are not efficient, and where there’s an enormous carbon footprint,” says Smith, an associate professor of chemical engineering.Smith and several former students have founded a company called Osmoses that is working toward developing these materials for large-scale use in gas purification. Removing the need for high temperatures in these widespread industrial processes could have a significant impact on energy consumption, potentially reducing it by as much as 90 percent.“I would love to see a world where we could eliminate thermal separations, and where heat is no longer a problem in creating the things that we need and producing the energy that we need,” Smith says.Hooked on researchAs a high school student, Smith was drawn to engineering but didn’t have many engineering role models. Both of his parents were physicians, and they always encouraged him to work hard in school.“I grew up without knowing many engineers, and certainly no chemical engineers. But I knew that I really liked seeing how the world worked. I was always fascinated by chemistry and seeing how mathematics helped to explain this area of science,” recalls Smith, who grew up near Harrisburg, Pennsylvania. “Chemical engineering seemed to have all those things built into it, but I really had no idea what it was.”At Penn State University, Smith worked with a professor named Henry “Hank” Foley on a research project designing carbon-based materials to create a “molecular sieve” for gas separation. Through a time-consuming and iterative layering process, he created a sieve that could purify oxygen and nitrogen from air.“I kept adding more and more coatings of a special material that I could subsequently carbonize, and eventually I started to get selectivity. In the end, I had made a membrane that could sieve molecules that only differed by 0.18 angstrom in size,” he says. “I got hooked on research at that point, and that’s what led me to do more things in the area of membranes.”After graduating from college in 2008, Smith pursued graduate studies in chemical engineering at the University of Texas at Austin. There, he continued developing membranes for gas separation, this time using a different class of materials — polymers. By controlling polymer structure, he was able to create films with pores that filter out specific molecules, such as carbon dioxide or other gases.“Polymers are a type of material that you can actually form into big devices that can integrate into world-class chemical plants. 
So, it was exciting to see that there was a scalable class of materials that could have a real impact on addressing questions related to CO2 and other energy-efficient separations,” Smith says.

After finishing his PhD, he decided he wanted to learn more chemistry, which led him to a postdoctoral fellowship at the University of California at Berkeley. “I wanted to learn how to make my own molecules and materials. I wanted to run my own reactions and do it in a more systematic way,” he says.

At Berkeley, he learned how to make compounds called metal-organic frameworks (MOFs) — cage-like molecules that have potential applications in gas separation and many other fields. He also realized that while he enjoyed chemistry, he was definitely a chemical engineer at heart. “I learned a ton when I was there, but I also learned a lot about myself,” he says. “As much as I love chemistry, work with chemists, and advise chemists in my own group, I’m definitely a chemical engineer, really focused on the process and application.”

Solving global problems

While interviewing for faculty jobs, Smith found himself drawn to MIT because of the mindset of the people he met. “I began to realize not only how talented the faculty and the students were, but the way they thought was very different than other places I had been,” he says. “It wasn’t just about doing something that would move their field a little bit forward. They were actually creating new fields. There was something inspirational about the type of people that ended up at MIT who wanted to solve global problems.”

In his lab at MIT, Smith is now tackling some of those global problems, including water purification, critical element recovery, renewable energy, battery development, and carbon sequestration.

In a close collaboration with Yan Xia, a professor at Stanford University, Smith recently developed gas separation membranes that incorporate a novel type of polymer known as “ladder polymers,” which are currently being scaled for deployment at his startup. Historically, using polymers for gas separation has been limited by a tradeoff between permeability and selectivity — that is, membranes that permit a faster flow of gases through the membrane tend to be less selective, allowing impurities to get through.

Using ladder polymers, which consist of double strands connected by rung-like bonds, the researchers were able to create gas separation membranes that are both highly permeable and very selective. The boost in permeability — a 100- to 1,000-fold improvement over earlier materials — could enable membranes to replace some of the high-energy techniques now used to separate gases, Smith says.

“This allows you to envision large-scale industrial problems solved with miniaturized devices,” he says. “If you can really shrink down the system, then the solutions we’re developing in the lab could easily be applied to big industries like the chemicals industry.”

These developments are among many advances made by the collaborators, students, postdocs, and researchers on Smith’s team. “I have a great research team of talented and hard-working students and postdocs, and I get to teach on topics that have been instrumental in my own professional career,” Smith says. “MIT has been a playground to explore and learn new things. I am excited for what my team will discover next, and grateful for an opportunity to help solve many important global problems.”
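The sieving described above comes down to comparing molecular sizes against a pore size. The sketch below is a minimal illustration of that idea, not Smith’s materials or methods: it uses commonly tabulated kinetic diameters for a few gases and a purely hypothetical pore cutoff chosen to fall between oxygen and nitrogen, whose diameters differ by the 0.18 angstrom mentioned above.

```python
# A minimal sketch of molecular size-sieving: a membrane with a given pore cutoff
# passes molecules smaller than the cutoff and rejects larger ones. The kinetic
# diameters (in angstroms) are commonly tabulated textbook values; the cutoff is
# illustrative and not a property of any particular membrane.
KINETIC_DIAMETER_A = {
    "He": 2.60,
    "H2": 2.89,
    "CO2": 3.30,
    "O2": 3.46,
    "N2": 3.64,
    "CH4": 3.80,
}

def sieve(pore_cutoff_a):
    """Partition gases into those that fit through the pores and those that do not."""
    passes = [gas for gas, d in KINETIC_DIAMETER_A.items() if d < pore_cutoff_a]
    rejected = [gas for gas, d in KINETIC_DIAMETER_A.items() if d >= pore_cutoff_a]
    return passes, rejected

# A cutoff between O2 (3.46 A) and N2 (3.64 A) separates oxygen from nitrogen in air,
# the same 0.18 A size difference resolved by the carbon molecular sieve above.
passes, rejected = sieve(3.55)
print("passes through:", passes)
print("rejected:      ", rejected)
```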

  • Minimizing the carbon footprint of bridges and other structures

    Awed as a young child by the majesty of the Golden Gate Bridge in San Francisco, civil engineer and MIT Morningside Academy for Design (MAD) Fellow Zane Schemmer has retained his fascination with bridges: what they look like, why they work, and how they’re designed and built.He weighed the choice between architecture and engineering when heading off to college, but, motivated by the why and how of structural engineering, selected the latter. Now he incorporates design as an iterative process in the writing of algorithms that perfectly balance the forces involved in discrete portions of a structure to create an overall design that optimizes function, minimizes carbon footprint, and still produces a manufacturable result.While this may sound like an obvious goal in structural design, it’s not. It’s new. It’s a more holistic way of looking at the design process that can optimize even down to the materials, angles, and number of elements in the nodes or joints that connect the larger components of a building, bridge, tower, etc.According to Schemmer, there hasn’t been much progress on optimizing structural design to minimize embodied carbon, and the work that exists often results in designs that are “too complex to be built in real life,” he says. The embodied carbon of a structure is the total carbon dioxide emissions of its life cycle: from the extraction or manufacture of its materials to their transport and use and through the demolition of the structure and disposal of the materials. Schemmer, who works with Josephine V. Carstensen, the Gilbert W. Winslow Career Development Associate Professor of Civil and Environmental Engineering at MIT, is focusing on the portion of that cycle that runs through construction.In September, at the IASS 2024 symposium “Redefining the Art of Structural Design in Zurich,” Schemmer and Carstensen presented their work on Discrete Topology Optimization algorithms that are able to minimize the embodied carbon in a bridge or other structure by up to 20 percent. This comes through materials selection that considers not only a material’s appearance and its ability to get the job done, but also the ease of procurement, its proximity to the building site, and the carbon embodied in its manufacture and transport.“The real novelty of our algorithm is its ability to consider multiple materials in a highly constrained solution space to produce manufacturable designs with a user-specified force flow,” Schemmer says. “Real-life problems are complex and often have many constraints associated with them. In traditional formulations, it can be difficult to have a long list of complicated constraints. Our goal is to incorporate these constraints to make it easier to take our designs out of the computer and create them in real life.”Take, for instance, a steel tower, which could be a “super lightweight, efficient design solution,” Schemmer explains. Because steel is so strong, you don’t need as much of it compared to concrete or timber to build a big building. But steel is also very carbon-intensive to produce and transport. Shipping it across the country or especially from a different continent can sharply increase its embodied carbon price tag. Schemmer’s topology optimization will replace some of the steel with timber elements or decrease the amount of steel in other elements to create a hybrid structure that will function effectively and minimize the carbon footprint. 
“This is why using the same steel in two different parts of the world can lead to two different optimized designs,” he explains.Schemmer, who grew up in the mountains of Utah, earned a BS and MS in civil and environmental engineering from University of California at Berkeley, where his graduate work focused on seismic design. He describes that education as providing a “very traditional, super-strong engineering background that tackled some of the toughest engineering problems,” along with knowledge of structural engineering’s traditions and current methods.But at MIT, he says, a lot of the work he sees “looks at removing the constraints of current societal conventions of doing things, and asks how could we do things if it was in a more ideal form; what are we looking at then? Which I think is really cool,” he says. “But I think sometimes too, there’s a jump between the most-perfect version of something and where we are now, that there needs to be a bridge between those two. And I feel like my education helps me see that bridge.”The bridge he’s referring to is the topology optimization algorithms that make good designs better in terms of decreased global warming potential.“That’s where the optimization algorithm comes in,” Schemmer says. “In contrast to a standard structure designed in the past, the algorithm can take the same design space and come up with a much more efficient material usage that still meets all the structural requirements, be up to code, and have everything we want from a safety standpoint.”That’s also where the MAD Design Fellowship comes in. The program provides yearlong fellowships with full financial support to graduate students from all across the Institute who network with each other, with the MAD faculty, and with outside speakers who use design in new ways in a surprising variety of fields. This helps the fellows gain a better understanding of how to use iterative design in their own work.“Usually people think of their own work like, ‘Oh, I had this background. I’ve been looking at this one way for a very long time.’ And when you look at it from an outside perspective, I think it opens your mind to be like, ‘Oh my God. I never would have thought about doing this that way. Maybe I should try that.’ And then we can move to new ideas, new inspiration for better work,” Schemmer says.He chose civil and structural engineering over architecture some seven years ago, but says that “100 years ago, I don’t think architecture and structural engineering were two separate professions. I think there was an understanding of how things looked and how things worked, and it was merged together. Maybe from an efficiency standpoint, it’s better to have things done separately. But I think there’s something to be said for having knowledge about how the whole system works, potentially more intermingling between the free-form architectural design and the mathematical design of a civil engineer. Merging it back together, I think, has a lot of benefits.”Which brings us back to the Golden Gate Bridge, Schemmer’s longtime favorite. You can still hear that excited 3-year-old in his voice when he talks about it.“It’s so iconic,” he says. “It’s connecting these two spits of land that just rise straight up out of the ocean. There’s this fog that comes in and out a lot of days. It’s a really magical place, from the size of the cable strands and everything. It’s just, ‘Wow.’ People built this over 100 years ago, before the existence of a lot of the computational tools that we have now. 
So, all the math, everything in the design, was all done by hand and from the mind. Nothing was computerized, which I think is crazy to think about.”

As Schemmer continues work on his doctoral degree at MIT, the MAD fellowship will expose him to many more awe-inspiring ideas in other fields, ideas he can weave into his engineering knowledge to design better ways of building bridges and other structures.
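To make the optimization idea concrete, here is a deliberately tiny sketch of the kind of choice such an algorithm automates: assigning a material and a cross-section to each member of a structure so that total embodied carbon is minimized while every member stays within its allowable stress. The material properties, carbon factors, and member forces are illustrative assumptions, and the brute-force search stands in for the gradient-based Discrete Topology Optimization the researchers actually use.

```python
# A toy multi-material sizing problem: pick a material and cross-section for each
# member so that total embodied carbon is minimized while stresses stay allowable.
# All numbers are illustrative placeholders; real Discrete Topology Optimization
# works on far larger design spaces with gradient-based methods, not enumeration.
from itertools import product

# (name, allowable stress [MPa], density [kg/m^3], embodied carbon [kgCO2e per kg])
MATERIALS = [
    ("steel",  200.0, 7850.0, 1.9),
    ("timber",  12.0,  500.0, 0.4),
]
AREAS_M2 = [0.002, 0.005, 0.01, 0.02]      # candidate cross-section areas

# Toy members: (length [m], axial force [N]) assumed to come from a prior analysis
MEMBERS = [(4.0, 350_000.0), (6.0, 60_000.0)]

def member_carbon(length, area, density, carbon_factor):
    """Embodied carbon of one member: volume x density x carbon intensity."""
    return length * area * density * carbon_factor

best = None
for choice in product(product(MATERIALS, AREAS_M2), repeat=len(MEMBERS)):
    total = 0.0
    feasible = True
    for (material, area), (length, force) in zip(choice, MEMBERS):
        _, allowable_mpa, density, carbon_factor = material
        if force / area / 1e6 > allowable_mpa:      # stress check in MPa
            feasible = False
            break
        total += member_carbon(length, area, density, carbon_factor)
    if feasible and (best is None or total < best[0]):
        best = (total, choice)

total, choice = best
print(f"minimum embodied carbon: {total:.1f} kgCO2e")
for (material, area), (length, force) in zip(choice, MEMBERS):
    print(f"  {length:.0f} m member, {force/1e3:.0f} kN -> {material[0]}, A = {area} m^2")
```

Even this toy version reproduces the hybrid outcome described in the story: the heavily loaded member comes out in steel, while the lightly loaded one switches to timber because its lower embodied carbon outweighs the larger section it needs.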

  • Coffee fix: MIT students decode the science behind the perfect cup

    Elaine Jutamulia ’24 took a sip of coffee with a few drops of anise extract. It was her second try.“What do you think?” asked Omar Orozco, standing at a lab table in MIT’s Breakerspace, surrounded by filters, brewing pots, and other coffee paraphernalia.“I think when I first tried it, it was still pretty bitter,” Jutamulia said thoughtfully. “But I think now that it’s steeped for a little bit — it took out some of the bitterness.”Jutamulia and current MIT senior Orozco were part of class 3.000 (Coffee Matters: Using the Breakerspace to Make the Perfect Cup), a new MIT course that debuted in spring 2024. The class combines lectures on chemistry and the science of coffee with hands-on experimentation and group projects. Their project explored how additives such as anise, salt, and chili oil influence coffee extraction — the process of dissolving flavor compounds from ground coffee into water — to improve taste and correct common brewing errors.Alongside tasting, they used an infrared spectrometer to identify the chemical compounds in their coffee samples that contribute to flavor. Does anise make bitter coffee smoother? Could chili oil balance the taste?“Generally speaking, if we could make a recommendation, that’s what we’re trying to find,” Orozco said.A three-unit “discovery class” designed to help first-year students explore majors, 3.000 was widely popular, enrolling more than 50 students. Its success was driven by the beverage at its core and the class’s hands-on approach, which pushes students to ask and answer questions they might not have otherwise.For aeronautics and astronautics majors Gabi McDonald and McKenzie Dinesen, coffee was the draw, but the class encouraged them to experiment and think in new ways. “It’s easy to drop people like us in, who love coffee, and, ‘Oh my gosh, there’s this class where we can go make coffee half the time and try all different kinds of things?’” McDonald says.Percolating knowledgeThe class pairs weekly lectures on topics such as coffee chemistry, the anatomy and composition of a coffee bean, the effects of roasting, and the brewing process with tasting sessions — students sample coffee brewed from different beans, roasts, and grinds. In the MIT Breakerspace, a new space on campus conceived and managed by the Department of Materials Science and Engineering (DMSE), students use equipment such as a digital optical microscope to examine ground coffee particles and a scanning electron microscope, which shoots beams of electrons at samples to reveal cross-sections of beans in stunning detail.Once students learn to operate instruments for guided tasks, they form groups and design their own projects.“The driver for those projects is some question they have about coffee raised by one of the lectures or the tasting sessions, or just something they’ve always wanted to know,” says DMSE Professor Jeffrey Grossman, who designed and teaches the class. “Then they’ll use one or more of these pieces of equipment to shed some light on it.”Grossman traces the origins of the class to his initial vision for the Breakerspace, a laboratory for materials analysis and lounge for MIT undergraduates. Opened in November 2023, the space gives students hands-on experience with materials science and engineering, an interdisciplinary field combining chemistry, physics, and engineering to probe the composition and structure of materials.“The world is made of stuff, and these are the tools to understand that stuff and bring it to life,” says Grossman. 
So he envisioned a class that would give students an “exploratory, inspiring nudge.”“Then the question wasn’t the pedagogy, it was, ‘What’s the hook?’ In materials science, there are a lot of directions you could go, but if you have one that inspires people because they know it and maybe like it already, then that’s exciting.”Cup of ambitionThat hook, of course, was coffee, the second-most-consumed beverage after water. It captured students’ imagination and motivated them to push boundaries.Orozco brought a fair amount of coffee knowledge to the class. In 2023, he taught in Mexico through the MISTI Global Teaching Labs program, where he toured several coffee farms and acquired a deeper knowledge of the beverage. He learned, for example, that black coffee, contrary to general American opinion, isn’t naturally bitter; bitterness arises from certain compounds that develop during the roasting process.“If you properly brew it with the right beans, it actually tastes good,” says Orozco, a humanities and engineering major. A year later, in 3.000, he expanded his understanding of making a good brew, particularly through the group project with Jutamulia and other students to fix bad coffee.The group prepared a control sample of “perfectly brewed” coffee — based on taste, coffee-to-water ratio, and other standards covered in class — alongside coffee that was under-extracted and over-extracted. Under-extracted coffee, made with water that isn’t hot enough or brewed for too short a time, tastes sharp or sour. Over-extracted coffee, brewed with too much coffee or for too long, tastes bitter.Those coffee samples got additives and were analyzed using Fourier Transform Infrared (FTIR) spectroscopy, measuring how coffee absorbed infrared light to identify flavor-related compounds. Jutamulia examined FTIR readings taken from a sample with lime juice to see how the citric acid influenced its chemical profile.“Can we find any correlation between what we saw and the existing known measurements of citric acid?” asks Jutamulia, who studied computation and cognition at MIT, graduating last May.Another group dove into coffee storage, questioning why conventional wisdom advises against freezing.“We just wondered why that’s the case,” says electrical engineering and computer science major Noah Wiley, a coffee enthusiast with his own espresso machine.The team compared methods like freezing brewed coffee, frozen coffee grounds, and whole beans ground after freezing, evaluating their impact on flavor and chemical composition.“Then we’re going to see which ones taste good,” says Wiley. The team used a class coffee review sheet to record attributes like acidity, bitterness, sweetness, and overall flavor, pairing the results with FTIR analysis to determine how storage affected taste.Wiley acknowledged that “good” is subjective. “Sometimes there’s a group consensus. I think people like fuller coffee, not watery,” he says.Other student projects compared caffeine levels in different coffee types, analyzed the effect of microwaving coffee on its chemical composition and flavor, and investigated the differences between authentic and counterfeit coffee beans.“We gave the students some papers to look at in case they were interested,” says Justin Lavallee, Breakerspace manager and co-teacher of the class. 
“But mostly we told them to focus on something they wanted to learn more about.”

Drip, drip, drip

Beyond answering specific questions about coffee, both students and teachers gained deeper insights into the beverage. “Coffee is a complicated material. There are thousands of molecules in the beans, which change as you roast and extract them,” says Grossman. “The number of ways you can engineer this collection of molecules — it’s profound, ranging from where and how the coffee’s grown to how the cherries are then treated to get the beans to how the beans are roasted and ground to the brewing method you use.”

Dinesen learned firsthand, discovering, for example, that darker roasts have less caffeine than lighter roasts, puncturing a common misconception. “You can vary coffee so much — just with the roast of the bean, the size of the ground,” she says. “It’s so easily manipulatable, if that’s a word.”

In addition to learning about the science and chemistry behind coffee, Dinesen and McDonald gained new brewing techniques, like using a pour-over cone. The pair even incorporated coffee making and testing into their study routine, brewing coffee while tackling problem sets for another class. “I would put my pour-over cone in my backpack with a Ziploc bag full of grounds, and we would go to the Student Center and pull out the cone, a filter, and the coffee grounds,” McDonald says. “And then we would make pour-overs while doing a P-set. We tested different amounts of water, too. It was fun.”

Tony Chen, a materials science and engineering major, reflected on 3.000’s title — “Using the Breakerspace to Make the Perfect Cup” — and whether making a perfect cup is possible. “I don’t think there’s one perfect cup because each person has their own preferences. I don’t think I’ve gotten to mine yet,” he says.

Enthusiasm for coffee’s complexity and the discovery process was exactly what Grossman hoped to inspire in his students. “The best part for me was also just seeing them developing their own sense of curiosity,” he says.

He recalled a moment early in the class when students, after being given a demo of the optical microscope, saw the surface texture of a magnified coffee bean, the mottled shades of color, and the honeycomb-like pattern of tiny irregular cells. “They’re like, ‘Wait a second. What if we add hot water to the grounds while it’s under the microscope? Would we see the extraction?’ So, they got hot water and some ground coffee beans, and lo and behold, it looked different. They could see the extraction right there,” Grossman says. “It’s like they have an idea that’s inspired by the learning, and they go and try it. I saw that happen many, many times throughout the semester.”
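The FTIR comparison Jutamulia describes earlier in the story amounts to asking how similar a sample’s absorbance spectrum is to a reference. The sketch below shows one simple way to frame that as a correlation, using synthetic Gaussian peaks rather than real measurements; the peak positions and the fingerprint window are illustrative assumptions, not the class’s actual data or protocol.

```python
# A minimal sketch (synthetic spectra, not the class's measurements) of correlating
# a coffee sample's FTIR absorbance with a citric acid reference spectrum over a
# fingerprint region. Peak positions and widths below are invented for illustration.
import numpy as np

wavenumbers = np.linspace(800, 1800, 1000)      # cm^-1

def spectrum(centers, widths, heights):
    """Build a synthetic absorbance spectrum as a sum of Gaussian bands."""
    out = np.zeros_like(wavenumbers)
    for c, w, h in zip(centers, widths, heights):
        out += h * np.exp(-((wavenumbers - c) / w) ** 2)
    return out

citric_reference = spectrum([1150, 1400, 1720], [30, 40, 25], [0.6, 0.4, 1.0])
coffee_with_lime = spectrum([1150, 1400, 1650, 1720], [30, 40, 50, 25], [0.3, 0.2, 0.8, 0.5])
coffee_control   = spectrum([1650], [50], [0.8])

for label, sample in [("coffee + lime", coffee_with_lime), ("control", coffee_control)]:
    r = np.corrcoef(sample, citric_reference)[0, 1]     # Pearson correlation
    print(f"{label:>13}: correlation with citric acid reference r = {r:+.2f}")
```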

  • How hard is it to prevent recurring blackouts in Puerto Rico?

    Researchers at MIT’s Laboratory for Information and Decision Systems (LIDS) have shown that using decision-making software and dynamic monitoring of weather and energy use can significantly improve resiliency in the face of weather-related outages, and can also help to efficiently integrate renewable energy sources into the grid.The researchers point out that the system they suggest might have prevented or at least lessened the kind of widespread power outage that Puerto Rico experienced last week by providing analysis to guide rerouting of power through different lines and thus limit the spread of the outage.The computer platform, which the researchers describe as DyMonDS, for Dynamic Monitoring and Decision Systems, can be used to enhance the existing operating and planning practices used in the electric industry. The platform supports interactive information exchange and decision-making between the grid operators and grid-edge users — all the distributed power sources, storage systems and software that contribute to the grid. It also supports optimization of available resources and controllable grid equipment as system conditions vary. It further lends itself to implementing cooperative decision-making by different utility- and non-utility-owned electric power grid users, including portfolios of mixed resources, users, and storage. Operating and planning the interactions of the end-to-end high-voltage transmission grid with local distribution grids and microgrids represents another major potential use of this platform.This general approach was illustrated using a set of publicly-available data on both meteorology and details of electricity production and distribution in Puerto Rico. An extended AC Optimal Power Flow software developed by SmartGridz Inc. is used for system-level optimization of controllable equipment. This provides real-time guidance for deciding how much power, and through which transmission lines, should be channeled by adjusting plant dispatch and voltage-related set points, and in extreme cases, where to reduce or cut power in order to maintain physically-implementable service for as many customers as possible. 
The team found that the use of such a system can help to ensure that the greatest number of critical services maintain power even during a hurricane, and at the same time can lead to a substantial decrease in the need for construction of new power plants thanks to more efficient use of existing resources.The findings are described in a paper in the journal Foundations and Trends in Electric Energy Systems, by MIT LIDS researchers Marija Ilic and Laurentiu Anton, along with recent alumna Ramapathi Jaddivada.“Using this software,” Ilic says, they show that “even during bad weather, if you predict equipment failures, and by using that information exchange, you can localize the effect of equipment failures and still serve a lot of customers, 50 percent of customers, when otherwise things would black out.”Anton says that “the way many grids today are operated is sub-optimal.” As a result, “we showed how much better they could do even under normal conditions, without any failures, by utilizing this software.” The savings resulting from this optimization, under everyday conditions, could be in the tens of percents, they say.The way utility systems plan currently, Ilic says, “usually the standard is that they have to build enough capacity and operate in real time so that if one large piece of equipment fails, like a large generator or transmission line, you still serve customers in an uninterrupted way. That’s what’s called N-minus-1.” Under this policy, if one major component of the system fails, they should be able to maintain service for at least 30 minutes. That system allows utilities to plan for how much reserve generating capacity they need to have on hand. That’s expensive, Ilic points out, because it means maintaining this reserve capacity all the time, even under normal operating conditions when it’s not needed.In addition, “right now there are no criteria for what I call N-minus-K,” she says. If bad weather causes five pieces of equipment to fail at once, “there is no software to help utilities decide what to schedule” in terms of keeping the most customers, and the most important services such as hospitals and emergency services, provided with power. They showed that even with 50 percent of the infrastructure out of commission, it would still be possible to keep power flowing to a large proportion of customers.Their work on analyzing the power situation in Puerto Rico started after the island had been devastated by hurricanes Irma and Maria. Most of the electric generation capacity is in the south, yet the largest loads are in San Juan, in the north, and Mayaguez in the west. When transmission lines get knocked down, a lot of rerouting of power needs to happen quickly.With the new systems, “the software finds the optimal adjustments for set points,” for example, changing voltages can allow for power to be redirected through less-congested lines, or can be increased to lessen power losses, Anton says.The software also helps in the long-term planning for the grid. As many fossil-fuel power plants are scheduled to be decommissioned soon in Puerto Rico, as they are in many other places, planning for how to replace that power without having to resort to greenhouse gas-emitting sources is a key to achieving carbon-reduction goals. 
And by analyzing usage patterns, the software can guide the placement of new renewable power sources so they can most efficiently provide power where and when it’s needed.

As plants are retired or as components are affected by weather, “We wanted to ensure the dispatchability of power when the load changes,” Anton says, “but also when crucial components are lost, to ensure the robustness at each step of the retirement schedule.”

One thing they found was that “if you look at how much generating capacity exists, it’s more than the peak load, even after you retire a few fossil plants,” Ilic says. “But it’s hard to deliver.” Strategic planning of new distribution lines could make a big difference.

Jaddivada, director of innovation at SmartGridz, says that “we evaluated different possible architectures in Puerto Rico, and we showed the ability of this software to ensure uninterrupted electricity service. This is the most important challenge utilities have today. They have to go through a computationally tedious process to make sure the grid functions for any possible outage in the system. And that can be done in a much more efficient way through the software that the company developed.”

The project was a collaborative effort between the MIT LIDS researchers and others at MIT Lincoln Laboratory and the Pacific Northwest National Laboratory, with overall support from SmartGridz software.
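The dispatch and rerouting decisions described in this story are, at bottom, constrained optimization problems. The sketch below is a deliberately tiny two-bus “DC” dispatch example solved with SciPy, not the DyMonDS platform or the SmartGridz AC Optimal Power Flow formulation: cheap generation in the south is capped by a transmission line limit, so the solver splits the remaining load onto a costlier plant near the demand. All numbers are illustrative.

```python
# A two-bus toy dispatch problem: minimize generation cost subject to meeting load
# and respecting a transmission line limit. Illustrative numbers only; real optimal
# power flow handles AC physics, voltages, many buses, and contingencies (N-1, N-k).
from scipy.optimize import linprog

cost = [20.0, 60.0]           # $/MWh: cheap southern generator, costly northern one
load_north = 250.0            # MW demanded at the northern bus
line_limit = 150.0            # MW capacity of the south-to-north transmission line

A_eq = [[1.0, 1.0]]           # total generation must equal the load
b_eq = [load_north]
A_ub = [[1.0, 0.0]]           # southern output must fit on the line to reach the load
b_ub = [line_limit]
bounds = [(0.0, 400.0), (0.0, 200.0)]   # generator capacity limits

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
g_south, g_north = res.x
print(f"south: {g_south:.0f} MW, north: {g_north:.0f} MW, cost: ${res.fun:.0f}/h")
# Lowering line_limit (say, to mimic storm damage) and re-solving shows how the
# dispatch and cost shift -- the kind of what-if analysis such platforms automate.
```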

  • An abundant phytoplankton feeds a global network of marine microbes

    One of the hardest-working organisms in the ocean is the tiny, emerald-tinged Prochlorococcus marinus. These single-celled “picoplankton,” which are smaller than a human red blood cell, can be found in staggering numbers throughout the ocean’s surface waters, making Prochlorococcus the most abundant photosynthesizing organism on the planet. (Collectively, Prochlorococcus fix as much carbon as all the crops on land.) Scientists continue to find new ways that the little green microbe is involved in the ocean’s cycling and storage of carbon.Now, MIT scientists have discovered a new ocean-regulating ability in the small but mighty microbes: cross-feeding of DNA building blocks. In a study appearing today in Science Advances, the team reports that Prochlorococcus shed these extra compounds into their surroundings, where they are then “cross-fed,” or taken up by other ocean organisms, either as nutrients, energy, or for regulating metabolism. Prochlorococcus’ rejects, then, are other microbes’ resources.What’s more, this cross-feeding occurs on a regular cycle: Prochlorococcus tend to shed their molecular baggage at night, when enterprising microbes quickly consume the cast-offs. For a microbe called SAR11, the most abundant bacteria in the ocean, the researchers found that the nighttime snack acts as a relaxant of sorts, forcing the bacteria to slow down their metabolism and effectively recharge for the next day.Through this cross-feeding interaction, Prochlorococcus could be helping many microbial communities to grow sustainably, simply by giving away what it doesn’t need. And they’re doing so in a way that could set the daily rhythms of microbes around the world.“The relationship between the two most abundant groups of microbes in ocean ecosystems has intrigued oceanographers for years,” says co-author and MIT Institute Professor Sallie “Penny” Chisholm, who played a role in the discovery of Prochlorococcus in 1986. “Now we have a glimpse of the finely tuned choreography that contributes to their growth and stability across vast regions of the oceans.”Given that Prochlorococcus and SAR11 suffuse the surface oceans, the team suspects that the exchange of molecules from one to the other could amount to one of the major cross-feeding relationships in the ocean, making it an important regulator of the ocean carbon cycle.“By looking at the details and diversity of cross-feeding processes, we can start to unearth important forces that are shaping the carbon cycle,” says the study’s lead author, Rogier Braakman, a research scientist in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS).Other MIT co-authors include Brandon Satinsky, Tyler O’Keefe, Shane Hogle, Jamie Becker, Robert Li, Keven Dooley, and Aldo Arellano, along with Krista Longnecker, Melissa Soule, and Elizabeth Kujawinski of Woods Hole Oceanographic Institution (WHOI).Spotting castawaysCross-feeding occurs throughout the microbial world, though the process has mainly been studied in close-knit communities. In the human gut, for instance, microbes are in close proximity and can easily exchange and benefit from shared resources.By comparison, Prochlorococcus are free-floating microbes that are regularly tossed and mixed through the ocean’s surface layers. 
While scientists assume that the plankton are involved in some amount of cross-feeding, exactly how this occurs, and who would benefit, have historically been challenging to probe; any stuff that Prochlorococcus cast away would have vanishingly low concentrations,and be exceedingly difficult to measure.But in work published in 2023, Braakman teamed up with scientists at WHOI, who pioneered ways to measure small organic compounds in seawater. In the lab, they grew various strains of Prochlorococcus under different conditions and characterized what the microbes released. They found that among the major “exudants,” or released molecules, were purines and pyridines, which are molecular building blocks of DNA. The molecules also happen to be nitrogen-rich — a fact that puzzled the team. Prochlorococcus are mainly found in ocean regions that are low in nitrogen, so it was assumed they’d want to retain any and all nitrogen-containing compounds they can. Why, then, were they instead throwing such compounds away?Global symphonyIn their new study, the researchers took a deep dive into the details of Prochlorococcus’ cross-feeding and how it influences various types of ocean microbes.They set out to study how Prochlorococcus use purine and pyridine in the first place, before expelling the compounds into their surroundings. They compared published genomes of the microbes, looking for genes that encode purine and pyridine metabolism. Tracing the genes forward through the genomes, the team found that once the compounds are produced, they are used to make DNA and replicate the microbes’ genome. Any leftover purine and pyridine is recycled and used again, though a fraction of the stuff is ultimately released into the environment. Prochlorococcus appear to make the most of the compounds, then cast off what they can’t.The team also looked to gene expression data and found that genes involved in recycling purine and pyrimidine peak several hours after the recognized peak in genome replication that occurs at dusk. The question then was: What could be benefiting from this nightly shedding?For this, the team looked at the genomes of more than 300 heterotrophic microbes — organisms that consume organic carbon rather than making it themselves through photosynthesis. They suspected that such carbon-feeders could be likely consumers of Prochlorococcus’ organic rejects. They found most of the heterotrophs contained genes that take up either purine or pyridine, or in some cases, both, suggesting microbes have evolved along different paths in terms of how they cross-feed.The group zeroed in on one purine-preferring microbe, SAR11, as it is the most abundant heterotrophic microbe in the ocean. When they then compared the genes across different strains of SAR11, they found that various types use purines for different purposes, from simply taking them up and using them intact to breaking them down for their energy, carbon, or nitrogen. What could explain the diversity in how the microbes were using Prochlorococcus’ cast-offs?It turns out the local environment plays a big role. Braakman and his collaborators performed a metagenome analysis in which they compared the collectively sequenced genomes of all microbes in over 600 seawater samples from around the world, focusing on SAR11 bacteria. Metagenome sequences were collected alongside measurements of various environmental conditions and geographic locations in which they are found. 
This analysis showed that the bacteria gobble up purine for its nitrogen when the nitrogen in seawater is low, and for its carbon or energy when nitrogen is in surplus — revealing the selective pressures shaping these communities in different ocean regimes.

“The work here suggests that microbes in the ocean have developed relationships that advance their growth potential in ways we don’t expect,” says co-author Kujawinski.

Finally, the team carried out a simple experiment in the lab, to see if they could directly observe a mechanism by which purine acts on SAR11. They grew the bacteria in cultures, exposed them to various concentrations of purine, and unexpectedly found it causes them to slow down their normal metabolic activities and even growth. However, when the researchers put these same cells under environmentally stressful conditions, the cells continued growing strong and healthy, as if the metabolic pausing by purines helped prime them for growth, thereby avoiding the effects of the stress.

“When you think about the ocean, where you see this daily pulse of purines being released by Prochlorococcus, this provides a daily inhibition signal that could be causing a pause in SAR11 metabolism, so that the next day when the sun comes out, they are primed and ready,” Braakman says. “So we think Prochlorococcus is acting as a conductor in the daily symphony of ocean metabolism, and cross-feeding is creating a global synchronization among all these microbial cells.”

This work was supported, in part, by the Simons Foundation and the National Science Foundation.

  • Unlocking the hidden power of boiling — for energy, space, and beyond

    Most people take boiling water for granted. For Associate Professor Matteo Bucci, uncovering the physics behind boiling has been a decade-long journey filled with unexpected challenges and new insights.The seemingly simple phenomenon is extremely hard to study in complex systems like nuclear reactors, and yet it sits at the core of a wide range of important industrial processes. Unlocking its secrets could thus enable advances in efficient energy production, electronics cooling, water desalination, medical diagnostics, and more.“Boiling is important for applications way beyond nuclear,” says Bucci, who earned tenure at MIT in July. “Boiling is used in 80 percent of the power plants that produce electricity. My research has implications for space propulsion, energy storage, electronics, and the increasingly important task of cooling computers.”Bucci’s lab has developed new experimental techniques to shed light on a wide range of boiling and heat transfer phenomena that have limited energy projects for decades. Chief among those is a problem caused by bubbles forming so quickly they create a band of vapor across a surface that prevents further heat transfer. In 2023, Bucci and collaborators developed a unifying principle governing the problem, known as the boiling crisis, which could enable more efficient nuclear reactors and prevent catastrophic failures.For Bucci, each bout of progress brings new possibilities — and new questions to answer.“What’s the best paper?” Bucci asks. “The best paper is the next one. I think Alfred Hitchcock used to say it doesn’t matter how good your last movie was. If your next one is poor, people won’t remember it. I always tell my students that our next paper should always be better than the last. It’s a continuous journey of improvement.”From engineering to bubblesThe Italian village where Bucci grew up had a population of about 1,000 during his childhood. He gained mechanical skills by working in his father’s machine shop and by taking apart and reassembling appliances like washing machines and air conditioners to see what was inside. He also gained a passion for cycling, competing in the sport until he attended the University of Pisa for undergraduate and graduate studies.In college, Bucci was fascinated with matter and the origins of life, but he also liked building things, so when it came time to pick between physics and engineering, he decided nuclear engineering was a good middle ground.“I have a passion for construction and for understanding how things are made,” Bucci says. “Nuclear engineering was a very unlikely but obvious choice. It was unlikely because in Italy, nuclear was already out of the energy landscape, so there were very few of us. At the same time, there were a combination of intellectual and practical challenges, which is what I like.”For his PhD, Bucci went to France, where he met his wife, and went on to work at a French national lab. One day his department head asked him to work on a problem in nuclear reactor safety known as transient boiling. To solve it, he wanted to use a method for making measurements pioneered by MIT Professor Jacopo Buongiorno, so he received grant money to become a visiting scientist at MIT in 2013. He’s been studying boiling at MIT ever since.Today Bucci’s lab is developing new diagnostic techniques to study boiling and heat transfer along with new materials and coatings that could make heat transfer more efficient. 
The work has given researchers an unprecedented view into the conditions inside a nuclear reactor.“The diagnostics we’ve developed can collect the equivalent of 20 years of experimental work in a one-day experiment,” Bucci says.That data, in turn, led Bucci to a remarkably simple model describing the boiling crisis.“The effectiveness of the boiling process on the surface of nuclear reactor cladding determines the efficiency and the safety of the reactor,” Bucci explains. “It’s like a car that you want to accelerate, but there is an upper limit. For a nuclear reactor, that upper limit is dictated by boiling heat transfer, so we are interested in understanding what that upper limit is and how we can overcome it to enhance the reactor performance.”Another particularly impactful area of research for Bucci is two-phase immersion cooling, a process wherein hot server parts bring liquid to boil, then the resulting vapor condenses on a heat exchanger above to create a constant, passive cycle of cooling.“It keeps chips cold with minimal waste of energy, significantly reducing the electricity consumption and carbon dioxide emissions of data centers,” Bucci explains. “Data centers emit as much CO2 as the entire aviation industry. By 2040, they will account for over 10 percent of emissions.”Supporting studentsBucci says working with students is the most rewarding part of his job. “They have such great passion and competence. It’s motivating to work with people who have the same passion as you.”“My students have no fear to explore new ideas,” Bucci adds. “They almost never stop in front of an obstacle — sometimes to the point where you have to slow them down and put them back on track.”In running the Red Lab in the Department of Nuclear Science and Engineering, Bucci tries to give students independence as well as support.“We’re not educating students, we’re educating future researchers,” Bucci says. “I think the most important part of our work is to not only provide the tools, but also to give the confidence and the self-starting attitude to fix problems. That can be business problems, problems with experiments, problems with your lab mates.”Some of the more unique experiments Bucci’s students do require them to gather measurements while free falling in an airplane to achieve zero gravity.“Space research is the big fantasy of all the kids,” says Bucci, who joins students in the experiments about twice a year. “It’s very fun and inspiring research for students. Zero g gives you a new perspective on life.”Applying AIBucci is also excited about incorporating artificial intelligence into his field. In 2023, he was a co-recipient of a multi-university research initiative (MURI) project in thermal science dedicated solely to machine learning. In a nod to the promise AI holds in his field, Bucci also recently founded a journal called AI Thermal Fluids to feature AI-driven research advances.“Our community doesn’t have a home for people that want to develop machine-learning techniques,” Bucci says. “We wanted to create an avenue for people in computer science and thermal science to work together to make progress. 
I think we really need to bring computer scientists into our community to speed this process up.”

Bucci also believes AI can be used to process huge reams of data gathered using the new experimental techniques he’s developed, as well as to model phenomena researchers can’t yet study.

“It’s possible that AI will give us the opportunity to understand things that cannot be observed, or at least guide us in the dark as we try to find the root causes of many problems,” Bucci says.
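For a sense of the numbers behind the boiling crisis described earlier in this story, the classical Zuber correlation, a textbook result rather than the unifying principle developed by Bucci and collaborators, estimates the critical heat flux at which a pool-boiling surface blankets itself in vapor:

```latex
\[
  q''_{\mathrm{CHF}} \;\approx\; 0.131\, h_{fg}\, \rho_v^{1/2}
  \left[ \sigma\, g\, (\rho_l - \rho_v) \right]^{1/4}
\]
```

Here h_fg is the latent heat of vaporization, ρ_v and ρ_l are the vapor and liquid densities, σ is the surface tension, and g is gravitational acceleration. For water at atmospheric pressure this works out to roughly 1 megawatt per square meter; push a surface past that limit and heat transfer collapses, which is the failure mode a predictive model of the boiling crisis aims to avoid. The appearance of g in the expression also hints at why the zero-gravity flights mentioned above matter scientifically, since buoyancy-driven vapor removal changes dramatically without gravity.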

  • Surface-based sonar system could rapidly map the ocean floor at high resolution

    On June 18, 2023, the Titan submersible was about an hour-and-a-half into its two-hour descent to the Titanic wreckage at the bottom of the Atlantic Ocean when it lost contact with its support ship. This cease in communication set off a frantic search for the tourist submersible and five passengers onboard, located about two miles below the ocean’s surface.Deep-ocean search and recovery is one of the many missions of military services like the U.S. Coast Guard Office of Search and Rescue and the U.S. Navy Supervisor of Salvage and Diving. For this mission, the longest delays come from transporting search-and-rescue equipment via ship to the area of interest and comprehensively surveying that area. A search operation on the scale of that for Titan — which was conducted 420 nautical miles from the nearest port and covered 13,000 square kilometers, an area roughly twice the size of Connecticut — could take weeks to complete. The search area for Titan is considered relatively small, focused on the immediate vicinity of the Titanic. When the area is less known, operations could take months. (A remotely operated underwater vehicle deployed by a Canadian vessel ended up finding the debris field of Titan on the seafloor, four days after the submersible had gone missing.)A research team from MIT Lincoln Laboratory and the MIT Department of Mechanical Engineering’s Ocean Science and Engineering lab is developing a surface-based sonar system that could accelerate the timeline for small- and large-scale search operations to days. Called the Autonomous Sparse-Aperture Multibeam Echo Sounder, the system scans at surface-ship rates while providing sufficient resolution to find objects and features in the deep ocean, without the time and expense of deploying underwater vehicles. The echo sounder — which features a large sonar array using a small set of autonomous surface vehicles (ASVs) that can be deployed via aircraft into the ocean — holds the potential to map the seabed at 50 times the coverage rate of an underwater vehicle and 100 times the resolution of a surface vessel.

    Video: Autonomous Sparse-Aperture Multibeam Echo Sounder (MIT Lincoln Laboratory)

    “Our array provides the best of both worlds: the high resolution of underwater vehicles and the high coverage rate of surface ships,” says co–principal investigator Andrew March, assistant leader of the laboratory’s Advanced Undersea Systems and Technology Group. “Though large surface-based sonar systems at low frequency have the potential to determine the materials and profiles of the seabed, they typically do so at the expense of resolution, particularly with increasing ocean depth. Our array can likely determine this information, too, but at significantly enhanced resolution in the deep ocean.”Underwater unknownOceans cover 71 percent of Earth’s surface, yet more than 80 percent of this underwater realm remains undiscovered and unexplored. Humans know more about the surface of other planets and the moon than the bottom of our oceans. High-resolution seabed maps would not only be useful to find missing objects like ships or aircraft, but also to support a host of other scientific applications: understanding Earth’s geology, improving forecasting of ocean currents and corresponding weather and climate impacts, uncovering archaeological sites, monitoring marine ecosystems and habitats, and identifying locations containing natural resources such as mineral and oil deposits.Scientists and governments worldwide recognize the importance of creating a high-resolution global map of the seafloor; the problem is that no existing technology can achieve meter-scale resolution from the ocean surface. The average depth of our oceans is approximately 3,700 meters. However, today’s technologies capable of finding human-made objects on the seabed or identifying person-sized natural features — these technologies include sonar, lidar, cameras, and gravitational field mapping — have a maximum range of less than 1,000 meters through water.Ships with large sonar arrays mounted on their hull map the deep ocean by emitting low-frequency sound waves that bounce off the seafloor and return as echoes to the surface. Operation at low frequencies is necessary because water readily absorbs high-frequency sound waves, especially with increasing depth; however, such operation yields low-resolution images, with each image pixel representing a football field in size. Resolution is also restricted because sonar arrays installed on large mapping ships are already using all of the available hull space, thereby capping the sonar beam’s aperture size. By contrast, sonars on autonomous underwater vehicles (AUVs) that operate at higher frequencies within a few hundred meters of the seafloor generate maps with each pixel representing one square meter or less, resulting in 10,000 times more pixels in that same football field–sized area. However, this higher resolution comes with trade-offs: AUVs are time-consuming and expensive to deploy in the deep ocean, limiting the amount of seafloor that can be mapped; they have a maximum range of about 1,000 meters before their high-frequency sound gets absorbed; and they move at slow speeds to conserve power. The area-coverage rate of AUVs performing high-resolution mapping is about 8 square kilometers per hour; surface vessels map the deep ocean at more than 50 times that rate.A solution surfacesThe Autonomous Sparse-Aperture Multibeam Echo Sounder could offer a cost-effective approach to high-resolution, rapid mapping of the deep seafloor from the ocean’s surface. 
A collaborative fleet of about 20 ASVs, each hosting a small sonar array, effectively forms a single sonar array 100 times the size of a large sonar array installed on a ship. The large aperture achieved by the array (hundreds of meters) produces a narrow beam, which enables sound to be precisely steered to generate high-resolution maps at low frequency. Because very few sonars are installed relative to the array’s overall size (i.e., a sparse aperture), the cost is tractable.However, this collaborative and sparse setup introduces some operational challenges. First, for coherent 3D imaging, the relative position of each ASV’s sonar subarray must be accurately tracked through dynamic ocean-induced motions. Second, because sonar elements are not placed directly next to each other without any gaps, the array suffers from a lower signal-to-noise ratio and is less able to reject noise coming from unintended or undesired directions. To mitigate these challenges, the team has been developing a low-cost precision-relative navigation system and leveraging acoustic signal processing tools and new ocean-field estimation algorithms. The MIT campus collaborators are developing algorithms for data processing and image formation, especially to estimate depth-integrated water-column parameters. These enabling technologies will help account for complex ocean physics, spanning physical properties like temperature, dynamic processes like currents and waves, and acoustic propagation factors like sound speed.Processing for all required control and calculations could be completed either remotely or onboard the ASVs. For example, ASVs deployed from a ship or flying boat could be controlled and guided remotely from land via a satellite link or from a nearby support ship (with direct communications or a satellite link), and left to map the seabed for weeks or months at a time until maintenance is needed. Sonar-return health checks and coarse seabed mapping would be conducted on board, while full, high-resolution reconstruction of the seabed would require a supercomputing infrastructure on land or on a support ship.”Deploying vehicles in an area and letting them map for extended periods of time without the need for a ship to return home to replenish supplies and rotate crews would significantly simplify logistics and operating costs,” says co–principal investigator Paul Ryu, a researcher in the Advanced Undersea Systems and Technology Group.Since beginning their research in 2018, the team has turned their concept into a prototype. Initially, the scientists built a scale model of a sparse-aperture sonar array and tested it in a water tank at the laboratory’s Autonomous Systems Development Facility. Then, they prototyped an ASV-sized sonar subarray and demonstrated its functionality in Gloucester, Massachusetts. In follow-on sea tests in Boston Harbor, they deployed an 8-meter array containing multiple subarrays equivalent to 25 ASVs locked together; with this array, they generated 3D reconstructions of the seafloor and a shipwreck. Most recently, the team fabricated, in collaboration with Woods Hole Oceanographic Institution, a first-generation, 12-foot-long, all-electric ASV prototype carrying a sonar array underneath. With this prototype, they conducted preliminary relative navigation testing in Woods Hole, Massachusetts and Newport, Rhode Island. 
Their full deep-ocean concept calls for approximately 20 such ASVs of a similar size, likely powered by wave or solar energy.

This work was funded through Lincoln Laboratory’s internally administered R&D portfolio on autonomous systems. The team is now seeking external sponsorship to continue development of their ocean floor–mapping technology, which was recognized with a 2024 R&D 100 Award.
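The resolution argument in this story follows from basic diffraction: a sonar’s angular beamwidth scales roughly as wavelength divided by aperture, so an aperture 100 times larger shrinks each beam’s seafloor footprint about 100-fold. The sketch below runs that arithmetic for an illustrative frequency, depth, and pair of apertures; these are stand-in numbers, not the echo sounder’s actual design parameters.

```python
# Back-of-the-envelope beam footprints using the diffraction-limited beamwidth
# theta ~ wavelength / aperture. All values are illustrative assumptions.
SOUND_SPEED_M_S = 1500.0    # nominal speed of sound in seawater
FREQ_HZ = 12_000.0          # a typical low frequency for deep-ocean mapping
DEPTH_M = 3_700.0           # roughly the average ocean depth cited above

wavelength_m = SOUND_SPEED_M_S / FREQ_HZ    # 0.125 m

for label, aperture_m in [("hull-mounted array", 8.0), ("sparse ASV array", 800.0)]:
    beamwidth_rad = wavelength_m / aperture_m     # diffraction-limited beamwidth
    footprint_m = beamwidth_rad * DEPTH_M         # seafloor patch spanned by one beam
    print(f"{label:>18}: {aperture_m:5.0f} m aperture -> "
          f"~{footprint_m:5.1f} m footprint at {DEPTH_M:.0f} m depth")
```

With these stand-in numbers the hull-mounted case lands near the “football field” pixel scale the story describes for surface ships, while the hundreds-of-meters sparse aperture approaches the meter scale usually reserved for underwater vehicles.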

  • MIT spinout Commonwealth Fusion Systems unveils plans for the world’s first fusion power plant

    America is one step closer to tapping into a new and potentially limitless clean energy source today, with the announcement from MIT spinout Commonwealth Fusion Systems (CFS) that it plans to build the world’s first grid-scale fusion power plant in Chesterfield County, Virginia.The announcement is the latest milestone for the company, which has made groundbreaking progress toward harnessing fusion — the reaction that powers the sun — since its founders first conceived of their approach in an MIT classroom in 2012. CFS is now commercializing a suite of advanced technologies developed in MIT research labs.“This moment exemplifies the power of MIT’s mission, which is to create knowledge that serves the nation and the world, whether via the classroom, the lab, or out in communities,” MIT Vice President for Research Ian Waitz says. “From student coursework 12 years ago to today’s announcement of the siting in Virginia of the world’s first fusion power plant, progress has been amazingly rapid. At the same time, we owe this progress to over 65 years of sustained investment by the U.S. federal government in basic science and energy research.”The new fusion power plant, named ARC, is expected to come online in the early 2030s and generate about 400 megawatts of clean, carbon-free electricity — enough energy to power large industrial sites or about 150,000 homes.The plant will be built at the James River Industrial Park outside of Richmond through a nonfinancial collaboration with Dominion Energy Virginia, which will provide development and technical expertise along with leasing rights for the site. CFS will independently finance, build, own, and operate the power plant.The plant will support Virginia’s economic and clean energy goals by generating what is expected to be billions of dollars in economic development and hundreds of jobs during its construction and long-term operation.More broadly, ARC will position the U.S. to lead the world in harnessing a new form of safe and reliable energy that could prove critical for economic prosperity and national security, including for meeting increasing electricity demands driven by needs like artificial intelligence.“This will be a watershed moment for fusion,” says CFS co-founder Dennis Whyte, the Hitachi America Professor of Engineering at MIT. “It sets the pace in the race toward commercial fusion power plants. The ambition is to build thousands of these power plants and to change the world.”Fusion can generate energy from abundant fuels like hydrogen and lithium isotopes, which can be sourced from seawater, and leave behind no emissions or toxic waste. However, harnessing fusion in a way that produces more power than it takes in has proven difficult because of the high temperatures needed to create and maintain the fusion reaction. Over the course of decades, scientists and engineers have worked to make the dream of fusion power plants a reality.In 2012, teaching the MIT class 22.63 (Principles of Fusion Engineering), Whyte challenged a group of graduate students to design a fusion device that would use a new kind of superconducting magnet to confine the plasma used in the reaction. It turned out the magnets enabled a more compact and economic reactor design. When Whyte reviewed his students’ work, he realized that could mean a new development path for fusion.Since then, a huge amount of capital and expertise has rushed into the once fledgling fusion industry. 
Today there are dozens of private fusion companies around the world racing to develop the first net-energy fusion power plants, many utilizing the new superconducting magnets. CFS, which Whyte founded with several students from his class, has attracted more than $2 billion in funding.

“It all started with that class, where our ideas kept evolving as we challenged the standard assumptions that came with fusion,” Whyte says. “We had this new superconducting technology, so much of the common wisdom was no longer valid. It was a perfect forum for students, who can challenge the status quo.”

Since the company’s founding in 2017, it has collaborated with researchers in MIT’s Plasma Science and Fusion Center (PSFC) on a range of initiatives, from validating the underlying plasma physics for the first demonstration machine to breaking records with a new kind of magnet to be used in commercial fusion power plants. Each piece of progress moves the U.S. closer to harnessing a revolutionary new energy source.

CFS is currently completing development of its fusion demonstration machine, SPARC, at its headquarters in Devens, Massachusetts. SPARC is expected to produce its first plasma in 2026 and net fusion energy shortly after, demonstrating for the first time a commercially relevant design that will produce more power than it consumes. SPARC will pave the way for ARC, which is expected to deliver power to the grid in the early 2030s.

“There’s more challenging engineering and science to be done in this field, and we’re very enthusiastic about the progress that CFS and the researchers on our campus are making on those problems,” Waitz says. “We’re in a ‘hockey stick’ moment in fusion energy, where things are moving incredibly quickly now. On the other hand, we can’t forget about the much longer part of that hockey stick, the sustained support for very complex, fundamental research that underlies great innovations. If we’re going to continue to lead the world in these cutting-edge technologies, continued investment in those areas will be crucial.”