More stories

  • MIT senior turns waste from the fishing industry into biodegradable plastic

    Sometimes the answers to seemingly intractable environmental problems are found in nature itself. Take the growing challenge of plastic waste. Jacqueline Prawira, an MIT senior in the Department of Materials Science and Engineering (DMSE), has developed biodegradable, plastic-like materials from fish offal, as featured in a recent segment on the CBS show “The Visioneers with Zay Harding.”

    “We basically made plastics to be too good at their job. That also means the environment doesn’t know what to do with this, because they simply won’t degrade,” Prawira told Harding. “And now we’re literally drowning in plastic. By 2050, plastics are expected to outweigh fish in the ocean.”

    “The Visioneers” regularly highlights environmental innovators. The episode featuring Prawira premiered during a special screening at Climate Week NYC on Sept. 24.

    Her inspiration came from the Asian fish market her family visits. Once the fish they buy are butchered, the scales are typically discarded. “But I also started noticing they’re actually fairly strong. They’re thin, somewhat flexible, and pretty lightweight, too, for their strength,” Prawira says. “And that got me thinking: Well, what other material has these properties? Plastics.”

    She transformed this waste product into a transparent, thin-film material that can be used for disposable products such as grocery bags, packaging, and utensils. Both her fish-scale material and a composite she developed don’t just mimic plastic — they address one of its biggest flaws. “If you put them in composting environments, [they] will degrade on their own naturally without needing much, if any, external help,” Prawira says.

    This isn’t Prawira’s first environmental innovation. Working in DMSE Professor Yet-Ming Chiang’s lab, she helped develop a low-carbon process for making cement — the world’s most widely used construction material, and a major emitter of carbon dioxide. The process, called silicate subtraction, enables compounds to form at lower temperatures, cutting fossil fuel use. Prawira and her co-inventors in the Chiang lab are also using the method to extract valuable lithium with zero waste. The process is patented and is being commercialized through the startup Rock Zero.

    For her achievements, Prawira recently received the Barry Goldwater Scholarship, awarded to undergraduates pursuing careers in science, mathematics, or engineering.

    In her “Visioneers” interview, she shared her hope for more sustainable ways of living. “I’m hoping that we can have daily lives that can be more in sync with the environment,” Prawira said. “So you don’t always have to choose between the convenience of daily life and having to help protect the environment.”

  • What should countries do with their nuclear waste?

    One of the highest-risk components of nuclear waste is iodine-129 (I-129), which stays radioactive for millions of years and accumulates in human thyroids when ingested. In the U.S., nuclear waste containing I-129 is scheduled to be disposed of in deep underground repositories, which scientists say will sufficiently isolate it.

    Meanwhile, across the globe, France routinely releases low-level radioactive effluents containing iodine-129 and other radionuclides into the ocean. France recycles its spent nuclear fuel, and the reprocessing plant discharges about 153 kilograms of iodine-129 each year, under the French regulatory limit.

    Is dilution a good solution? What’s the best way to handle spent nuclear fuel? A new study by MIT researchers and their collaborators at national laboratories quantifies I-129 release under three different scenarios: the U.S. approach of disposing spent fuel directly in deep underground repositories, the French approach of dilution and release, and an approach that uses filters to capture I-129 and disposes of them in shallow underground waste repositories.

    The researchers found France’s current practice of reprocessing releases about 90 percent of the waste’s I-129 into the biosphere. They found low levels of I-129 in ocean water around France and the U.K.’s former reprocessing sites, including the English Channel and North Sea. Although the low level of I-129 in the water in Europe is not considered to pose health risks, the U.S. approach of deep underground disposal leads to far less I-129 being released, the researchers found.

    The researchers also investigated the effect of environmental regulations and technologies related to I-129 management, to illuminate the tradeoffs associated with different approaches around the world.

    “Putting these pieces together to provide a comprehensive view of iodine-129 is important,” says MIT Assistant Professor Haruko Wainwright, a first author on the paper, who holds a joint appointment in the departments of Nuclear Science and Engineering and of Civil and Environmental Engineering. “There are scientists that spend their lives trying to clean up iodine-129 at contaminated sites. These scientists are sometimes shocked to learn some countries are releasing so much iodine-129. This work also provides a life-cycle perspective. We’re not just looking at final disposal and solid waste, but also when and where release is happening. It puts all the pieces together.”

    MIT graduate student Kate Whiteaker SM ’24 led many of the analyses with Wainwright. Their co-authors are Hansell Gonzalez-Raymat, Miles Denham, Ian Pegg, Daniel Kaplan, Nikolla Qafoku, David Wilson, Shelly Wilson, and Carol Eddy-Dilek. The study appears today in Nature Sustainability.

    Managing waste

    Iodine-129 is often a key focus for scientists and engineers as they conduct safety assessments of nuclear waste disposal sites around the world. It has a half-life of 15.7 million years, high environmental mobility, and could potentially cause cancers if ingested. The U.S. sets a strict limit on how much I-129 can be released and how much can be in drinking water — 5.66 nanograms per liter, the lowest such level of any radionuclide. “Iodine-129 is very mobile, so it is usually the highest-dose contributor in safety assessments,” Wainwright says.

    For the study, the researchers calculated the release of I-129 across three different waste management strategies by combining data from current and former reprocessing sites with repository assessment models and simulations. The authors defined the environmental impact as the release of I-129 into the biosphere that humans could be exposed to, as well as its concentrations in surface water. They measured I-129 release per the total electrical energy generated by a 1-gigawatt power plant over one year, denoted as kg/GWe.y.

    Under the U.S. approach of deep underground disposal with barrier systems, assuming the barrier canisters fail at 1,000 years (a conservative estimate), the researchers found 2.14 × 10⁻⁸ kg/GWe.y of I-129 would be released between 1,000 and 1 million years from today.

    They estimate that 4.51 kg/GWe.y of I-129, or 91 percent of the total, would be released into the biosphere in the scenario where fuel is reprocessed and the effluents are diluted and released. About 3.3 percent of I-129 is captured by gas filters, which are then disposed of in shallow subsurfaces as low-level radioactive waste. A further 5.2 percent remains in the waste stream of the reprocessing plant, which is then disposed of as high-level radioactive waste.

    If the waste is recycled with gas filters that directly capture I-129, 0.05 kg/GWe.y of the I-129 is released, while 94 percent is disposed of in low-level disposal sites. For shallow disposal, some kind of human disruption and intrusion is assumed to occur after government or institutional control expires (typically 100-1,000 years), resulting in a potential release of the disposed amount to the environment after the control period.

    Overall, the current practice of recycling spent nuclear fuel releases the majority of I-129 into the environment today, while the direct disposal of spent fuel releases around 1/100,000,000 that amount over 1 million years. When gas filters are used to capture I-129, the majority of it goes to shallow underground repositories, which could be accidentally released through human intrusion down the line.

    The researchers also quantified the concentration of I-129 in different surface waters near current and former fuel reprocessing facilities, including the English Channel and the North Sea near reprocessing plants in France and the U.K. They also analyzed the U.S. Columbia River downstream of a site in Washington state where material for nuclear weapons was produced during the Cold War, and they studied a similar site in South Carolina. The researchers found far higher concentrations of I-129 within the South Carolina site, where the low-level radioactive effluents were released far from major rivers and hence resulted in less dilution in the environment.

    “We wanted to quantify the environmental factors and the impact of dilution, which in this case affected concentrations more than discharge amounts,” Wainwright says. “Someone might take our results to say dilution still works: It’s reducing the contaminant concentration and spreading it over a large area. On the other hand, in the U.S., imperfect disposal has led to locally higher surface water concentrations. This provides a cautionary tale that disposal could concentrate contaminants, and should be carefully designed to protect local communities.”

    Fuel cycles and policy

    Wainwright doesn’t want her findings to dissuade countries from recycling nuclear fuel. She says countries like Japan plan to use increased filtration to capture I-129 when they reprocess spent fuel. Filters with I-129 can be disposed of as low-level waste under U.S. regulations.

    “Since I-129 is an internal carcinogen without strong penetrating radiation, shallow underground disposal would be appropriate, in line with other hazardous waste,” Wainwright says. “The history of environmental protection since the 1960s is shifting from waste dumping and release to isolation. But there are still industries that release waste into the air and water. We have seen that they often end up causing issues in our daily life — such as CO2, mercury, PFAS, and others — especially when there are many sources or when bioaccumulation happens. The nuclear community has been leading in waste isolation strategies and technologies since the 1950s. These efforts should be further enhanced and accelerated. But at the same time, if someone does not choose nuclear energy because of waste issues, it would encourage other industries with much lower environmental standards.”

    The work was supported by MIT’s Climate Fast Forward Faculty Fund and the U.S. Department of Energy.
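
    The arithmetic behind these comparisons is simple mass accounting. As a minimal sketch that restates the figures quoted above (the breakdown and variable names are ours, not the study’s code), the three strategies compare like this:

    ```python
    # I-129 fate per GWe-year of electricity, restating the article's figures.
    # A total inventory of ~4.96 kg/GWe.y is implied by "4.51 kg ... or 91 percent."
    TOTAL = 4.51 / 0.91  # kg/GWe.y

    released = {
        "direct deep disposal (U.S.)": 2.14e-8,          # over 1,000 to 1,000,000 years
        "reprocess, dilute and release (France)": 4.51,  # 91% to the biosphere
        "reprocess with I-129 gas filters": 0.05,        # 94% goes to shallow disposal instead
    }

    for name, kg in released.items():
        print(f"{name}: {kg:.3g} kg/GWe.y ({100 * kg / TOTAL:.2g}% of inventory)")
    ```

    Even on this crude accounting, capturing I-129 with filters cuts the release quoted for dilution by roughly two orders of magnitude, and direct deep disposal cuts it by another six, which is the comparison the study draws.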

  • 3 Questions: How AI is helping us monitor and support vulnerable ecosystems

    A recent study from Oregon State University estimated that more than 3,500 animal species are at risk of extinction because of factors including habitat alterations, natural resources being overexploited, and climate change.

    To better understand these changes and protect vulnerable wildlife, conservationists like MIT PhD student and Computer Science and Artificial Intelligence Laboratory (CSAIL) researcher Justin Kay are developing computer vision algorithms that carefully monitor animal populations. A member of the lab of MIT Department of Electrical Engineering and Computer Science assistant professor and CSAIL principal investigator Sara Beery, Kay is currently working on tracking salmon in the Pacific Northwest, where they provide crucial nutrients to predators like birds and bears, while managing the population of prey, like bugs.

    With all that wildlife data, though, researchers have lots of information to sort through and many AI models to choose from to analyze it all. Kay and his colleagues at CSAIL and the University of Massachusetts Amherst are developing AI methods that make this data-crunching process much more efficient, including a new approach called “consensus-driven active model selection” (or “CODA”) that helps conservationists choose which AI model to use. Their work was named a Highlight Paper at the International Conference on Computer Vision (ICCV) in October.

    The research was supported, in part, by the National Science Foundation, the Natural Sciences and Engineering Research Council of Canada, and the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS). Here, Kay discusses this project, among other conservation efforts.

    Q: In your paper, you pose the question of which AI models will perform the best on a particular dataset. With as many as 1.9 million pre-trained models available in the HuggingFace Models repository alone, how does CODA help us address that challenge?

    A: Until recently, using AI for data analysis has typically meant training your own model. This requires significant effort to collect and annotate a representative training dataset, as well as iteratively train and validate models. You also need a certain technical skill set to run and modify AI training code. The way people interact with AI is changing, though — in particular, there are now millions of publicly available pre-trained models that can perform a variety of predictive tasks very well. This potentially enables people to use AI to analyze their data without developing their own model, simply by downloading an existing model with the capabilities they need. But this poses a new challenge: Which model, of the millions available, should they use to analyze their data?

    Typically, answering this model selection question also requires you to spend a lot of time collecting and annotating a large dataset, albeit for testing models rather than training them. This is especially true for real applications where user needs are specific, data distributions are imbalanced and constantly changing, and model performance may be inconsistent across samples. Our goal with CODA was to substantially reduce this effort. We do this by making the data annotation process “active.” Instead of requiring users to bulk-annotate a large test dataset all at once, in active model selection we make the process interactive, guiding users to annotate the most informative data points in their raw data. This is remarkably effective, often requiring users to annotate as few as 25 examples to identify the best model from their set of candidates.

    We’re very excited about CODA offering a new perspective on how to best utilize human effort in the development and deployment of machine-learning (ML) systems. As AI models become more commonplace, our work emphasizes the value of focusing effort on robust evaluation pipelines, rather than solely on training.

    Q: You applied the CODA method to classifying wildlife in images. Why did it perform so well, and what role can systems like this have in monitoring ecosystems in the future?

    A: One key insight was that when considering a collection of candidate AI models, the consensus of all of their predictions is more informative than any individual model’s predictions. This can be seen as a sort of “wisdom of the crowd”: On average, pooling the votes of all models gives you a decent prior over what the labels of individual data points in your raw dataset should be. Our approach with CODA is based on estimating a “confusion matrix” for each AI model — given that the true label for some data point is class X, what is the probability that an individual model predicts class X, Y, or Z? This creates informative dependencies between all of the candidate models, the categories you want to label, and the unlabeled points in your dataset.

    Consider an example application where you are a wildlife ecologist who has just collected a dataset containing potentially hundreds of thousands of images from cameras deployed in the wild. You want to know what species are in these images, a time-consuming task that computer vision classifiers can help automate. You are trying to decide which species classification model to run on your data. If you have labeled 50 images of tigers so far, and some model has performed well on those 50 images, you can be pretty confident it will perform well on the remainder of the (currently unlabeled) images of tigers in your raw dataset as well. You also know that when that model predicts some image contains a tiger, it is likely to be correct, and therefore that any model that predicts a different label for that image is more likely to be wrong. You can use all these interdependencies to construct probabilistic estimates of each model’s confusion matrix, as well as a probability distribution over which model has the highest accuracy on the overall dataset. These design choices allow us to make more informed choices over which data points to label, and ultimately are the reason why CODA performs model selection much more efficiently than past work.

    There are also a lot of exciting possibilities for building on top of our work. We think there may be even better ways of constructing informative priors for model selection based on domain expertise — for instance, if it is already known that one model performs exceptionally well on some subset of classes or poorly on others. There are also opportunities to extend the framework to support more complex machine-learning tasks and more sophisticated probabilistic models of performance. We hope our work can provide inspiration and a starting point for other researchers to keep pushing the state of the art.

    Q: You work in the Beerylab, led by Sara Beery, where researchers are combining the pattern-recognition capabilities of machine-learning algorithms with computer vision technology to monitor wildlife. What are some other ways your team is tracking and analyzing the natural world, beyond CODA?

    A: The lab is a really exciting place to work, and new projects are emerging all the time. We have ongoing projects monitoring coral reefs with drones, re-identifying individual elephants over time, and fusing multi-modal Earth observation data from satellites and in-situ cameras, just to name a few. Broadly, we look at emerging technologies for biodiversity monitoring, try to understand where the data analysis bottlenecks are, and develop new computer vision and machine-learning approaches that address those problems in a widely applicable way. It’s an exciting way of approaching problems that targets the “meta-questions” underlying particular data challenges we face.

    The computer vision algorithms I’ve worked on that count migrating salmon in underwater sonar video are examples of that work. We often deal with shifting data distributions, even as we try to construct the most diverse training datasets we can. We always encounter something new when we deploy a new camera, and this tends to degrade the performance of computer vision algorithms. This is one instance of a general problem in machine learning called domain adaptation, but when we tried to apply existing domain adaptation algorithms to our fisheries data we realized there were serious limitations in how existing algorithms were trained and evaluated. We were able to develop a new domain adaptation framework, published earlier this year in Transactions on Machine Learning Research, that addressed these limitations and led to advancements in fish counting, and even self-driving and spacecraft analysis.

    One line of work that I’m particularly excited about is understanding how to better develop and analyze the performance of predictive ML algorithms in the context of what they are actually used for. Usually, the outputs from some computer vision algorithm — say, bounding boxes around animals in images — are not actually the thing that people care about, but rather a means to an end to answer a larger problem — say, what species live here, and how is that changing over time? We have been working on methods to analyze predictive performance in this context and to reconsider the ways that we input human expertise into ML systems with this in mind. CODA was one example of this, where we showed that we could actually consider the ML models themselves as fixed and build a statistical framework to understand their performance very efficiently. We have been working recently on similar integrated analyses combining ML predictions with multi-stage prediction pipelines, as well as ecological statistical models.

    The natural world is changing at unprecedented rates and scales, and being able to quickly move from scientific hypotheses or management questions to data-driven answers is more important than ever for protecting ecosystems and the communities that depend on them. Advancements in AI can play an important role, but we need to think critically about the ways that we design, train, and evaluate algorithms in the context of these very real challenges.
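
    The consensus-and-confusion-matrix scheme Kay describes can be illustrated with a toy sketch. This is not the authors’ code: the majority-vote prior, accuracy estimate, and entropy-based query rule below are simplifications of the ideas above, with random data standing in for model predictions.

    ```python
    import numpy as np

    # preds[m, n] = class predicted by model m on unlabeled data point n.
    rng = np.random.default_rng(0)
    M, N, C = 5, 200, 3  # models, data points, classes
    preds = rng.integers(0, C, size=(M, N))

    # "Wisdom of the crowd": the vote distribution across models serves as
    # a prior over each point's unknown true label.
    votes = np.zeros((N, C))
    for m in range(M):
        votes[np.arange(N), preds[m]] += 1
    label_prior = votes / M

    # Score each model by the probability mass the consensus prior assigns
    # to its own predictions (a stand-in for estimated accuracy).
    est_acc = np.array([label_prior[np.arange(N), preds[m]].mean() for m in range(M)])
    best = int(np.argmax(est_acc))

    # Active step: ask a human to label the point the models disagree on most.
    entropy = -(label_prior * np.log(label_prior + 1e-12)).sum(axis=1)
    query = int(np.argmax(entropy))
    print(f"estimated best model: {best}; next point to annotate: {query}")
    ```

    In CODA itself, the accuracy estimates run through per-model confusion matrices that are updated as labels arrive, which is what lets as few as 25 annotations identify the best candidate.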

  • Using classic physical phenomena to solve new problems

    Quenching, a powerful heat transfer mechanism, is remarkably effective at carrying heat away from a surface. But in extreme environments, like nuclear power plants and aboard spaceships, a lot rides on the efficiency and speed of the process. It’s why Marco Graffiedi, a fifth-year doctoral student in MIT’s Department of Nuclear Science and Engineering (NSE), is researching the phenomenon to help develop the next generation of spaceships and nuclear plants.

    Growing up in small-town Italy

    Graffiedi’s parents encouraged a sense of exploration, giving him responsibilities for family projects even at a young age. When they restored a countryside cabin in a small town near Palazzolo, in the hills between Florence and Bologna, the then-14-year-old Marco got a project of his own. He had to ensure the animals on the property had enough accessible water without overfilling the storage tank. Marco designed and built a passive hydraulic system that effectively solved the problem and is still functional today.

    His proclivity for science continued in high school in Lugo, where Graffiedi enjoyed recreating classical physics phenomena through experiments. Incidentally, the high school is named after Gregorio Ricci-Curbastro, a mathematician who laid the foundation for the theory of relativity — history that is not lost on Graffiedi. After high school, Graffiedi attended the International Physics Olympiad in Bangkok, a formative event that cemented his love for physics.

    A gradual shift toward engineering

    A passion for physics and basic sciences notwithstanding, Graffiedi wondered if he’d be a better fit for engineering, where he could use the study of physics, chemistry, and math as tools to build something. Following that path, he completed a bachelor’s and master’s in mechanical engineering — because an undergraduate degree in Italy takes only three years, pretty much everyone does a master’s, Graffiedi laughs — at the Università di Pisa and the Scuola Superiore Sant’Anna (School of Engineering). The Sant’Anna is a highly selective institution that most students attend to complement their university studies.

    Graffiedi’s university studies gradually moved him toward the field of environmental engineering. He researched concentrated solar power, aiming to reduce its cost by studying the associated thermal cycle and trying to improve solar power collection. While the project was not very successful, it reinforced Graffiedi’s impression of the necessity of alternative energies. Still firmly planted in energy studies, Graffiedi worked on fracture mechanics for his master’s thesis, in collaboration with (what was then) GE Oil and Gas, researching how to improve the effectiveness of centrifugal compressors. And a summer internship at Fermilab had Graffiedi working on the thermal characterization of superconductive coatings.

    With his studies behind him, Graffiedi was still unsure about his professional path. Through the Edison Program at GE Oil and Gas, where he worked shortly after graduation, Graffiedi got to test-drive many fields — from mechanical and thermal engineering to exploring gas turbines and combustion. He eventually became a test engineer, coordinating a team of engineers to test a new upgrade to the company’s gas turbines. “I set up the test bench, understanding how to instrument the machine, collect data, and run the test,” Graffiedi remembers. “There was a lot you need to think about, from a little turbine blade with sensors on it to the location of safety exits on the test bench.”

    The move toward nuclear engineering

    As fun as the test engineering job was, Graffiedi started to crave more technical knowledge and wanted to pivot to science. As part of his exploration, he came across nuclear energy and, understanding it to be the future, decided to lean on his engineering background to apply to MIT NSE. He found a fit in Professor Matteo Bucci’s group and decided to explore boiling and quenching. The move from science to engineering, and back to science, was now complete.

    NASA, the primary sponsor of the research, is interested in preventing boiling of cryogenic fuels, because boiling leads to loss of fuel, and the resulting vapor must be vented to avoid overpressurizing a fuel tank. Graffiedi’s primary focus is on quenching, which will play an important role in refueling in space — and in the cooling of nuclear cores. When a cryogen is used to cool down a surface, it undergoes what is known as the Leidenfrost effect: it first forms a thin vapor film that acts as an insulator and prevents further cooling. To facilitate rapid cooling, it’s important to accelerate the collapse of the vapor film. Graffiedi is exploring the mechanics of the quenching process on a microscopic level, studies that are important for land and space applications.

    Boiling can be used for yet another modern application: to improve the efficiency of cooling systems for data centers. The growth of data centers and electric transportation systems needs effective heat transfer mechanisms to avoid overheating. Immersion cooling using dielectric fluids — fluids that do not conduct electricity — is one way to do so. These fluids remove heat from a surface by leaning on the principle of boiling. For effective boiling, the fluid must overcome the Leidenfrost effect and break the vapor film that forms. The fluid must also have a high critical heat flux (CHF), which is the maximum value of the heat flux at which boiling can effectively be used to transfer heat from a heated surface to a liquid. Because dielectric fluids have lower CHF than water, Graffiedi is exploring solutions to enhance these limits. In particular, he is investigating how high electric fields can be used to enhance CHF, and even how boiling can be used to cool electronic components in the absence of gravity. He published this research in Applied Thermal Engineering in June.

    Beyond boiling

    Graffiedi’s love of science and engineering shows in his commitment to teaching as well. He has been a teaching assistant for four classes at NSE, winning awards for his contributions. His many additional achievements include the Manson Benedict Award, presented to an NSE graduate student for excellence in academic performance and professional promise in nuclear science and engineering, and a service award for his role as past president of the MIT Division of the American Nuclear Society.

    Boston has a fervent Italian community, Graffiedi says, and he enjoys being a part of it. Fittingly, the MIT Italian club is called MITaly. When he’s not at work or otherwise engaged, Graffiedi loves Latin dancing, something he makes time for at least a couple of times a week. While he has his favorite Italian restaurants in the city, Graffiedi is grateful for another set of skills his parents gave him when he was just 11: making perfect pizza and pasta.
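
    The CHF gap between water and dielectric fluids that Graffiedi is working against can be made concrete with the classical Zuber correlation for pool-boiling critical heat flux. The sketch below is illustrative only, using approximate textbook property values at atmospheric pressure rather than anything from his published work:

    ```python
    # Zuber (1959) correlation for pool-boiling critical heat flux:
    #   q_chf = K * h_fg * rho_v**0.5 * (sigma * g * (rho_l - rho_v))**0.25,  K ~= 0.131
    g = 9.81  # gravitational acceleration, m/s^2

    def zuber_chf(h_fg, rho_l, rho_v, sigma, K=0.131):
        """Critical heat flux in W/m^2 from saturation properties."""
        return K * h_fg * rho_v**0.5 * (sigma * g * (rho_l - rho_v))**0.25

    # Approximate saturation properties at 1 atm.
    water = zuber_chf(h_fg=2.26e6, rho_l=958.0, rho_v=0.60, sigma=0.059)
    fc72 = zuber_chf(h_fg=8.8e4, rho_l=1680.0, rho_v=13.4, sigma=0.008)  # a common dielectric coolant

    print(f"water: {water / 1e6:.2f} MW/m^2")  # ~1.1 MW/m^2
    print(f"FC-72: {fc72 / 1e6:.2f} MW/m^2")   # ~0.15 MW/m^2, roughly an order of magnitude lower
    ```

    That order-of-magnitude shortfall is why raising the CHF of dielectric fluids, for instance with high electric fields, matters so much for immersion cooling.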

  • Burning things to make things

    Around 80 percent of global energy production today comes from the combustion of fossil fuels. Combustion, or the process of converting stored chemical energy into thermal energy through burning, is vital for a variety of common activities including electricity generation, transportation, and domestic uses like heating and cooking — but it also yields a host of environmental consequences, contributing to air pollution and greenhouse gas emissions.

    Sili Deng, the Doherty Chair in Ocean Utilization and associate professor of mechanical engineering at MIT, is leading research to drive the transition from the heavy dependence on fossil fuels to renewable energy with storage.

    “I was first introduced to flame synthesis in my junior year in college,” Deng says. “I realized you can actually burn things to make things, [and] that was really fascinating.”

    Video: “Burning Things to Make Things” (Department of Mechanical Engineering)

    Deng says she ultimately picked combustion as a focus of her work because she likes the intellectual challenge the concept offers. “In combustion you have chemistry, and you have fluid mechanics. Each subject is very rich in science. This also has very strong engineering implications and applications.”

    Deng’s research group targets three areas: building up fundamental knowledge on combustion processes and emissions; developing alternative fuels and metal combustion to replace fossil fuels; and synthesizing flame-based materials for catalysis and energy storage, which can bring down the cost of manufacturing battery materials.

    One focus of the team has been on low-cost, low-emission manufacturing of cathode materials for lithium-ion batteries. Lithium-ion batteries play an increasingly critical role in transportation electrification (e.g., batteries for electric vehicles) and grid energy storage for electricity that is generated from renewable energy sources like wind and solar. Deng’s team has developed a technology they call flame-assisted spray pyrolysis, or FASP, which can help reduce the high manufacturing costs associated with cathode materials.

    FASP is based on flame synthesis, a technology that dates back nearly 3,000 years. In ancient China, this was the primary way black ink materials were made. “[People burned] vegetables or woods, such that afterwards they can collect the solidified smoke,” Deng explains. “For our battery applications, we can try to fit in the same formula, but of course with new tweaks.”

    The team is also interested in developing alternative fuels, including looking at the use of metals like aluminum to power rockets. “We’re interested in utilizing aluminum as a fuel for civil applications,” Deng says, because aluminum is abundant in the earth, cheap, and available globally. “What we are trying to do is to understand [aluminum combustion] and be able to tailor its ignition and propagation properties.”

    Among other accolades, Deng is a 2025 recipient of the Hiroshi Tsuji Early Career Researcher Award from the Combustion Institute, an award that recognizes excellence in fundamental or applied combustion science research.

  • The brain power behind sustainable AI

    How can you use science to build a better gingerbread house? That was something Miranda Schwacke spent a lot of time thinking about. The MIT graduate student in the Department of Materials Science and Engineering (DMSE) is part of Kitchen Matters, a group of grad students who use food and kitchen tools to explain scientific concepts through short videos and outreach events. Past topics included why chocolate “seizes,” or becomes difficult to work with when melting (spoiler: water gets in), and how to make isomalt, the sugar glass that stunt performers jump through in action movies.

    Two years ago, when the group was making a video on how to build a structurally sound gingerbread house, Schwacke scoured cookbooks for a variable that would produce the most dramatic difference in the cookies. “I was reading about what determines the texture of cookies, and then tried several recipes in my kitchen until I got two gingerbread recipes that I was happy with,” Schwacke says.

    She focused on butter, which contains water that turns to steam at high baking temperatures, creating air pockets in cookies. Schwacke predicted that decreasing the amount of butter would yield denser gingerbread, strong enough to hold together as a house. “This hypothesis is an example of how changing the structure can influence the properties and performance of material,” Schwacke said in the eight-minute video.

    That same curiosity about materials properties and performance drives her research on the high energy cost of computing, especially for artificial intelligence. Schwacke develops new materials and devices for neuromorphic computing, which mimics the brain by processing and storing information in the same place. She studies electrochemical ionic synapses — tiny devices that can be “tuned” to adjust conductivity, much like neurons strengthening or weakening connections in the brain.

    “If you look at AI in particular — to train these really large models — that consumes a lot of energy. And if you compare that to the amount of energy that we consume as humans when we’re learning things, the brain consumes a lot less energy,” Schwacke says. “That’s what led to this idea to find more brain-inspired, energy-efficient ways of doing AI.”

    Her advisor, Bilge Yildiz, underscores the point: One reason the brain is so efficient is that data doesn’t need to be moved back and forth. “In the brain, the connections between our neurons, called synapses, are where we process information. Signal transmission is there. It is processed, programmed, and also stored in the same place,” says Yildiz, the Breene M. Kerr (1951) Professor in the Department of Nuclear Science and Engineering and DMSE. Schwacke’s devices aim to replicate that efficiency.

    Scientific roots

    The daughter of a marine biologist mom and an electrical engineer dad, Schwacke was immersed in science from a young age. Science was “always a part of how I understood the world.”

    “I was obsessed with dinosaurs. I wanted to be a paleontologist when I grew up,” she says. But her interests broadened. At her middle school in Charleston, South Carolina, she joined a FIRST Lego League robotics competition, building robots to complete tasks like pushing or pulling objects. “My parents, my dad especially, got very involved in the school team and helping us design and build our little robot for the competition.”

    Her mother, meanwhile, studied how dolphin populations are affected by pollution for the National Oceanic and Atmospheric Administration. That had a lasting impact. “That was an example of how science can be used to understand the world, and also to figure out how we can improve the world,” Schwacke says. “And that’s what I’ve always wanted to do with science.”

    Her interest in materials science came later, in her high school magnet program. There, she was introduced to the interdisciplinary subject, a blend of physics, chemistry, and engineering that studies the structure and properties of materials and uses that knowledge to design new ones. “I always liked that it goes from this very basic science, where we’re studying how atoms are ordering, all the way up to these solid materials that we interact with in our everyday lives — and how that gives them their properties that we can see and play with,” Schwacke says.

    As a senior, she participated in a research program with a thesis project on dye-sensitized solar cells, a low-cost, lightweight solar technology that uses dye molecules to absorb light and generate electricity. “What drove me was really understanding, this is how we go from light to energy that we can use — and also seeing how this could help us with having more renewable energy sources,” Schwacke says.

    After high school, she headed across the country to Caltech. “I wanted to try a totally new place,” she says. There she studied materials science, including nanostructured materials thousands of times thinner than a human hair. She focused on materials properties and microstructure — the tiny internal structure that governs how materials behave — which led her to electrochemical systems like batteries and fuel cells.

    AI energy challenge

    At MIT, she continued exploring energy technologies. She met Yildiz during a Zoom meeting in her first year of graduate school, in fall 2020, when the campus was still operating under strict Covid-19 protocols. Yildiz’s lab studies how charged atoms, or ions, move through materials in technologies like fuel cells, batteries, and electrolyzers.

    The lab’s research into brain-inspired computing fired Schwacke’s imagination, but she was equally drawn to Yildiz’s way of talking about science. “It wasn’t based on jargon and emphasized a very basic understanding of what was going on — that ions are going here, and electrons are going here — to understand fundamentally what’s happening in the system,” Schwacke says.

    That mindset shaped her approach to research. Her early projects focused on the properties these devices need to work well — fast operation, low energy use, and compatibility with semiconductor technology — and on using magnesium ions instead of hydrogen, which can escape into the environment and make devices unstable.

    Her current project, the focus of her PhD thesis, centers on understanding how the insertion of magnesium ions into tungsten oxide, a metal oxide whose electrical properties can be precisely tuned, changes its electrical resistance. In these devices, tungsten oxide serves as a channel layer, where resistance controls signal strength, much like synapses regulate signals in the brain. “I am trying to understand exactly how these devices change the channel conductance,” Schwacke says.

    Schwacke’s research was recognized with a MathWorks Fellowship from the School of Engineering in 2023 and 2024. The fellowship supports graduate students who leverage tools like MATLAB or Simulink in their work; Schwacke applied MATLAB for critical data analysis and visualization.

    Yildiz describes Schwacke’s research as a novel step toward solving one of AI’s biggest challenges. “This is electrochemistry for brain-inspired computing,” Yildiz says. “It’s a new context for electrochemistry, but also with an energy implication, because the energy consumption of computing is unsustainably increasing. We have to find new ways of doing computing with much lower energy, and this is one way that can help us move in that direction.”

    Like any pioneering work, it comes with challenges, especially in bridging the concepts between electrochemistry and semiconductor physics. “Our group comes from a solid-state chemistry background, and when we started this work looking into magnesium, no one had used magnesium in these kinds of devices before,” Schwacke says. “So we were looking at the magnesium battery literature for inspiration and different materials and strategies we could use. When I started this, I wasn’t just learning the language and norms for one field — I was trying to learn it for two fields, and also translate between the two.”

    She also grapples with a challenge familiar to all scientists: how to make sense of messy data. “The main challenge is being able to take my data and know that I’m interpreting it in a way that’s correct, and that I understand what it actually means,” Schwacke says. She overcomes hurdles by collaborating closely with colleagues across fields, including neuroscience and electrical engineering, and sometimes by just making small changes to her experiments and watching what happens next.

    Community matters

    Schwacke is not just active in the lab. In Kitchen Matters, she and her fellow DMSE grad students set up booths at local events like the Cambridge Science Fair and Steam It Up, an after-school program with hands-on activities for kids. “We did ‘pHun with Food’ with ‘fun’ spelled with a pH, so we had cabbage juice as a pH indicator,” Schwacke says. “We let the kids test the pH of lemon juice and vinegar and dish soap, and they had a lot of fun mixing the different liquids and seeing all the different colors.”

    She has also served as the social chair and treasurer for DMSE’s graduate student group, the Graduate Materials Council. As an undergraduate at Caltech, she led workshops in science and technology for Robogals, a student-run group that encourages young women to pursue careers in science, and assisted students in applying for the school’s Summer Undergraduate Research Fellowships.

    For Schwacke, these experiences sharpened her ability to explain science to different audiences, a skill she sees as vital whether she’s presenting at a kids’ fair or at a research conference. “I always think, where is my audience starting from, and what do I need to explain before I can get into what I’m doing so that it’ll all make sense to them?” she says.

    Schwacke sees the ability to communicate as central to building community, which she considers an important part of doing research. “It helps with spreading ideas. It always helps to get a new perspective on what you’re working on,” she says. “I also think it keeps us sane during our PhD.”

    Yildiz sees Schwacke’s community involvement as an important part of her resume. “She’s doing all these activities to motivate the broader community to do research, to be interested in science, to pursue science and technology, but that ability will help her also progress in her own research and academic endeavors.”

    After her PhD, Schwacke wants to take that ability to communicate with her to academia, where she’d like to inspire the next generation of scientists and engineers. Yildiz has no doubt she’ll thrive. “I think she’s a perfect fit,” Yildiz says. “She’s brilliant, but brilliance by itself is not enough. She’s persistent, resilient. You really need those on top of that.”

  • MIT Maritime Consortium releases “Nuclear Ship Safety Handbook”

    Commercial shipping accounts for 3 percent of all greenhouse gas emissions globally. As the sector sets climate goals and chases a carbon-free future, nuclear power — long used as a source for military vessels — presents an enticing solution. To date, however, there has been no clear, unified public document available to guide design safety for certain components of civilian nuclear ships. A new “Nuclear Ship Safety Handbook” by the MIT Maritime Consortium aims to change that and set the standard for safe maritime nuclear propulsion.

    “This handbook is a critical tool in efforts to support the adoption of nuclear in the maritime industry,” explains Themis Sapsis, the William I. Koch Professor of Mechanical Engineering at MIT, director of the MIT Center for Ocean Engineering, and co-director of the MIT Maritime Consortium. “The goal is to provide a strong basis for initial safety on key areas that require nuclear and maritime regulatory research and development in the coming years to prepare for nuclear propulsion in the maritime industry.”

    Using research data and standards, combined with operational experience from civilian maritime nuclear operations, the handbook provides unique insights into potential issues and resolutions in the design efficacy of maritime nuclear operations, a topic of growing importance on the national and international stage.

    “Right now, the nuclear-maritime policies that exist are outdated and often tied only to specific technologies, like pressurized water reactors,” says Jose Izurieta, a graduate student in the Department of Mechanical Engineering (MechE) Naval Construction and Engineering (2N) Program, and one of the handbook authors. “With the recent U.K.-U.S. Technology Prosperity Deal now including civil maritime nuclear applications, I hope the handbook can serve as a foundation for creating a clear, modern regulatory framework for nuclear-powered commercial ships.”

    The recent memorandum of understanding signed by the U.S. and U.K. calls for the exploration of “novel applications of advanced nuclear energy, including civil maritime applications,” and for the parties to play “a leading role informing the establishment of international standards, potential establishment of a maritime shipping corridor between the Participants’ territories, and strengthening energy resilience for the Participants’ defense facilities.”

    “The U.S.-U.K. nuclear shipping corridor offers a great opportunity to collaborate with legislators on establishing the critical framework that will enable the United States to invest in nuclear-powered merchant vessels — an achievement that will reestablish America in the shipbuilding space,” says Fotini Christia, the Ford International Professor of the Social Sciences, director of the Institute for Data, Systems, and Society (IDSS), director of the MIT Sociotechnical Systems Research Center, and co-director of the MIT Maritime Consortium.

    “With over 30 nations now building or planning their first reactors, nuclear energy’s global acceptance is unprecedented — and that momentum is key to aligning safety rules across borders for nuclear-powered ships and the respective ports,” says Koroush Shirvan, the Atlantic Richfield Career Development Professor in Energy Studies at MIT and director of the Reactor Technology Course for Utility Executives.

    The handbook, which is divided into chapters covering the overlapping nuclear and maritime safety design decisions that engineers will encounter, balances technical and practical guidance with policy considerations.

    Commander Christopher MacLean, MIT associate professor of the practice in mechanical engineering, naval construction, and engineering, says the handbook will significantly benefit the entire maritime community, specifically naval architects and marine engineers, by providing standardized guidelines for design and operation specific to nuclear-powered commercial vessels. “This will assist in enhancing safety protocols, improve risk assessments, and ensure consistent compliance with international regulations,” MacLean says. “This will also help foster collaboration amongst engineers and regulators. Overall, this will further strengthen the reliability, sustainability, and public trust in nuclear-powered maritime systems.”

    Anthony Valiaveedu, the handbook’s lead author, and co-author Nat Edmonds are both students in the MIT Master’s Program in Technology and Policy (TPP) within the IDSS. The pair are also co-authors of a paper published in Science Policy Review earlier this year that offered structured advice on the development of nuclear regulatory policies. “It is important for safety and technology to go hand-in-hand,” Valiaveedu explains. “What we have done is provide a risk-informed process to begin these discussions for engineers and policymakers.”

    “Ultimately, I hope this framework can be used to build strong bilateral agreements between nations that will allow nuclear propulsion to thrive,” says fellow co-author Izurieta.

    Impact on industry

    “Maritime designers needed a source of information to improve their ability to understand and design the reactor primary components, and development of the ‘Nuclear Ship Safety Handbook’ was a good step to bridge this knowledge gap,” says Christopher J. Wiernicki, American Bureau of Shipping (ABS) chair and CEO. “For this reason, it is an important document for the industry.”

    The ABS, the American classification society for the maritime industry, develops criteria and provides safety certification for all ocean-going vessels. ABS is among the founding members of the MIT Maritime Consortium. Capital Clean Energy Carriers Corp., HD Korea Shipbuilding and Offshore Engineering, and Delos Navigation Ltd. are also consortium founding members. Innovation members are Foresight-Group, Navios Maritime Partners L.P., Singapore Maritime Institute, and Dorian LPG.

    “As we consider a net-zero framework for the shipping industry, nuclear propulsion represents a potential solution. Careful investigation remains the priority, with safety and regulatory standards at the forefront,” says Jerry Kalogiratos, CEO of Capital Clean Energy Carriers Corp. “As first movers, we are exploring all options. This handbook lays the technical foundation for the development of nuclear-powered commercial vessels.”

    Sangmin Park, senior vice president at HD Korea Shipbuilding and Offshore Engineering, says, “The ‘Nuclear Ship Safety Handbook’ marks a groundbreaking milestone that bridges shipbuilding excellence and nuclear safety. It drives global collaboration between industry and academia, and paves the way for the safe advancement of the nuclear maritime era.”

    Maritime at MIT

    MIT has been a leading center of ship research and design for over a century, with work at the Institute today representing significant advancements in fluid mechanics and hydrodynamics, acoustics, offshore mechanics, marine robotics and sensors, and ocean sensing and forecasting. Maritime Consortium projects, including the handbook, reflect national priorities aimed at revitalizing the U.S. shipbuilding and commercial maritime industries.

    The MIT Maritime Consortium, which launched in 2024, brings together MIT and maritime industry leaders to explore data-powered strategies to reduce harmful emissions, optimize vessel operations, and support economic priorities. “One of our most important efforts is the development of technologies, policies, and regulations to make nuclear propulsion for commercial ships a reality,” says Sapsis. “Over the last year, we have put together an interdisciplinary team with faculty and students from across the Institute. One of the outcomes of this effort is this very detailed document providing guidance on how such an effort should be implemented safely.”

    Handbook contributors come from multiple disciplines and MIT departments, labs, and research centers, including the Center for Ocean Engineering, IDSS, MechE’s Course 2N Program, the MIT Technology and Policy Program, and the Department of Nuclear Science and Engineering. MIT faculty members and research advisors on the project include Sapsis; Christia; Shirvan; MacLean; Jacopo Buongiorno, the Battelle Energy Alliance Professor in Nuclear Science and Engineering, director of the Center for Advanced Nuclear Energy Systems, and director of science and technology for the Nuclear Reactor Laboratory; and Captain Andrew Gillespy, professor of the practice and director of the Naval Construction and Engineering (2N) Program.

    “Proving the viability of nuclear propulsion for civilian ships will entail getting the technologies, the economics, and the regulations right,” says Buongiorno. “This handbook is a meaningful initial contribution to the development of a sound regulatory framework.”

    “We were lucky to have a team of students and knowledgeable professors from so many fields,” says Edmonds. “Before even beginning the outline of the handbook, we did significant archival and history research to understand the existing regulations and overarching story of nuclear ships. Some of the most relevant documents we found were written before 1975, and many of them were stored in the bellows of the NS Savannah.”

    The NS Savannah, built in the late 1950s as a demonstration project for the potential peacetime uses of nuclear energy, was the first nuclear-powered merchant ship. The Savannah was first launched on July 21, 1959, two years after the first nuclear-powered civilian vessel, the Soviet icebreaker Lenin, and was retired in 1971.

    Historical context for this project is important, because the reactor technologies envisioned for maritime propulsion today are quite different from the traditional pressurized water reactors used by the U.S. Navy. These new reactors are being developed not just in the maritime context, but also to power ports and data centers on land; they all use low-enriched uranium and are passively cooled. For the maritime industry, Sapsis says, “the technology is there, it’s safe, and it’s ready.”

    “The Nuclear Ship Safety Handbook” is publicly available on the MIT Maritime Consortium website and from the MIT Libraries.

  • MIT engineers solve the sticky-cell problem in bioreactors and other industries

    To help mitigate climate change, companies are using bioreactors to grow algae and other microorganisms that are hundreds of times more efficient at absorbing CO2 than trees. Meanwhile, in the pharmaceutical industry, cell culture is used to manufacture biologic drugs and other advanced treatments, including lifesaving gene and cell therapies.

    Both processes are hampered by cells’ tendency to stick to surfaces, which leads to a huge amount of waste and downtime for cleaning. A similar problem slows down biofuel production, interferes with biosensors and implants, and makes the food and beverage industry less efficient.

    Now, MIT researchers have developed an approach for detaching cells from surfaces on demand, using electrochemically generated bubbles. In an open-access paper published in Science Advances, the researchers demonstrated their approach in a lab prototype and showed it could work across a range of cells and surfaces without harming the cells.

    “We wanted to develop a technology that could be high-throughput and plug-and-play, and that would allow cells to attach and detach on demand to improve the workflow in these industrial processes,” says Professor Kripa Varanasi, senior author of the study. “This is a fundamental issue with cells, and we’ve solved it with a process that can scale. It lends itself to many different applications.”

    Joining Varanasi on the study are co-first authors Bert Vandereydt, a PhD student in mechanical engineering, and former postdoc Baptiste Blanc.

    Solving a sticky problem


    The researchers began with a mission. “We’ve been working on figuring out how we can efficiently capture CO2 across different sources and convert it into valuable products for various end markets,” Varanasi says. “That’s where this photobioreactor and cell detachment comes into the picture.”

    Photobioreactors are used to grow carbon-absorbing algae cells by creating tightly controlled environments involving water and sunlight. They feature long, winding tubes with clear surfaces to let in the light algae need to grow. When algae stick to those surfaces, they block out the light, requiring cleaning. “You have to shut down and clean up the entire reactor as frequently as every two weeks,” Varanasi says. “It’s a huge operational challenge.”

    The researchers realized other industries have a similar problem due to many cells’ natural adhesion, or stickiness. Each industry has its own solution for cell adhesion, depending on how important it is that the cells survive. Some people scrape the surfaces clean, while others use special coatings that are toxic to cells. In the pharmaceutical and biotech industries, cell detachment is typically carried out using enzymes. However, this method poses several challenges — it can damage cell membranes, is time-consuming, and requires large amounts of consumables, resulting in millions of liters of biowaste.

    To create a better solution, the researchers began by studying other efforts to clear surfaces with bubbles, which mainly involved spraying bubbles onto surfaces and had been largely ineffective. “We realized we needed the bubbles to form on the surfaces where we don’t want these cells to stick, so when the bubbles detach it creates a local fluid flow that creates shear stress at the interface and removes the cells,” Varanasi explains.

    Electric currents generate bubbles by splitting water into hydrogen and oxygen. But previous attempts at using electricity to detach cells were hampered because cell culture mediums contain sodium chloride, which turns into bleach when combined with an electric current. The bleach damages the cells, making the approach impractical for many applications. “The culprit is the anode — that’s where the sodium chloride turns to bleach,” Vandereydt explains. “We figured if we could separate that electrode from the rest of the system, we could prevent bleach from being generated.”

    To make a better system, the researchers built a 3-square-inch glass surface and deposited a gold electrode on top of it. The layer of gold is so thin it doesn’t block out light. To keep the other electrode separate, the researchers integrated a special membrane that only allows protons to pass through. The setup allowed the researchers to send a current through without generating bleach.

    To test their setup, they allowed algae cells from a concentrated solution to stick to the surfaces. When they applied a voltage, the bubbles separated the cells from the surfaces without harming them. The researchers also studied the interaction between the bubbles and cells, finding that the higher the current density, the more bubbles were created and the more algae was removed. They developed a model for understanding how much current would be needed to remove algae in different settings and matched it with results from experiments involving algae as well as cells from ovarian cancer and bones. “Mammalian cells are orders of magnitude more sensitive than algae cells, but even with those cells, we were able to detach them with no impact to the viability of the cell,” Vandereydt says.

    Getting to scale

    The researchers say their system could represent a breakthrough in applications where bleach or other chemicals would harm cells. That includes pharmaceutical and food production. “If we can keep these systems running without fouling and other problems, then we can make them much more economical,” Varanasi says.

    For cell culture plates used in the pharmaceutical industry, the team envisions their system comprising an electrode that could be robotically moved from one culture plate to the next, to detach cells as they’re grown. It could also be coiled around algae harvesting systems. “This has general applicability because it doesn’t rely on any specific biological or chemical treatments, but on a physical force that is system-agnostic,” Varanasi says. “It’s also highly scalable to a lot of different processes, including particle removal.”

    Varanasi cautions there is much work to be done to scale up the system. But he hopes it can one day make algae and other cell harvesting more efficient. “The burning problem of our time is to somehow capture CO2 in a way that’s economically feasible,” Varanasi says. “These photobioreactors could be used for that, but we have to overcome the cell adhesion problem.”

    The work was supported, in part, by Eni S.p.A through the MIT Energy Initiative, the Belgian American Educational Foundation Fellowship, and the Maria Zambrano Fellowship.
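
    The coupling between applied current and gas generation that underlies the team’s model follows from Faraday’s law of electrolysis. A back-of-envelope sketch, in which the current density is a hypothetical input rather than a value from the paper:

    ```python
    # Faraday's law: moles of H2 evolved at the cathode = I * t / (z * F),
    # with z = 2 electrons per H2 molecule.
    F = 96485.0  # Faraday constant, C/mol
    R = 8.314    # gas constant, J/(mol*K)

    def h2_volume_rate(current_density, area, T=298.0, P=101325.0):
        """Volumetric H2 generation rate at the cathode, m^3/s (ideal gas)."""
        current = current_density * area  # total current, A
        mol_per_s = current / (2 * F)     # 2 electrons per H2 molecule
        return mol_per_s * R * T / P

    # Hypothetical 100 A/m^2 over the article's 3-square-inch (~19.4 cm^2) electrode:
    rate = h2_volume_rate(current_density=100.0, area=19.4e-4)
    print(f"H2 generation: {rate * 1e9:.0f} mm^3/s")  # tens of mm^3 of gas per second
    ```

    Because the gas generation rate scales linearly with current, higher current density means proportionally more bubbles, consistent with the researchers’ observation that more bubbles removed more algae.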