More stories

  •

    Introducing the MIT-GE Vernova Energy and Climate Alliance

    MIT and GE Vernova launched the MIT-GE Vernova Energy and Climate Alliance on Sept. 15, a collaboration to advance research and education focused on accelerating the global energy transition.

    Through the alliance — an industry-academia initiative conceived by MIT Provost Anantha Chandrakasan and GE Vernova CEO Scott Strazik — GE Vernova has committed $50 million over five years in the form of sponsored research projects and philanthropic funding for research, graduate student fellowships, internships, and experiential learning, as well as professional development programs for GE Vernova leaders.

    “MIT has a long history of impactful collaborations with industry, and the collaboration between MIT and GE Vernova is a shining example of that legacy,” said Chandrakasan in opening remarks at a launch event. “Together, we are working on energy and climate solutions through interdisciplinary research and diverse perspectives, while providing MIT students the benefit of real-world insights from an industry leader positioned to bring those ideas into the world at scale.”

    The energy of change

    An independent company since its spinoff from GE in April 2024, GE Vernova is focused on accelerating the global energy transition. The company generates approximately 25 percent of the world’s electricity — with the world’s largest installed base of over 7,000 gas turbines, about 57,000 wind turbines, and leading-edge electrification technology.

    GE Vernova’s slogan, “The Energy of Change,” is reflected in decisions such as locating its headquarters in Cambridge, Massachusetts — in close proximity to MIT. In pursuing transformative approaches to the energy transition, the company has identified MIT as a key collaborator.

    A key component of the mission to electrify and decarbonize the world is collaboration, according to Strazik. “We want to inspire, and be inspired by, students as we work together on our generation’s greatest challenge, climate change. We have great ambition for what we want the world to become, but we need collaborators. And we need folks that want to iterate with us on what the world should be from here.”

    Representing the Healey-Driscoll administration at the launch event were Massachusetts Secretary of Energy and Environmental Affairs Rebecca Tepper and Secretary of the Executive Office of Economic Development Eric Paley. Secretary Tepper highlighted the Mass Leads Act, a $1 billion climate tech and life sciences initiative enacted by Governor Maura Healey last November to strengthen Massachusetts’ leadership in climate tech and AI.

    “We’re harnessing every part of the state, from hydropower manufacturing facilities to the blue economy on our South Coast, and right here at the center of our colleges and universities. We want to invent and scale the solutions to climate change in our own backyard,” said Tepper. “That’s been the Massachusetts way for decades.”

    Launch event attendees explore interactive displays in MIT’s Lobby 13.

    Photo: Gretchen Ertl


    Real-world problems, insights, and solutions

    The launch celebration featured interactive science displays and student presenters introducing the first round of 13 research projects led by MIT faculty. These projects focus on generating scalable solutions to our most pressing challenges in the areas of electrification, decarbonization, renewables acceleration, and digital solutions. Read more about the funded projects here.

    Collaborating with industry offers the opportunity for researchers and students to address real-world problems informed by practical insights. The diverse, interdisciplinary perspectives from both industry and academia will significantly strengthen the research supported through the GE Vernova Fellowships announced at the launch event.

    “I’m excited to talk to the industry experts at GE Vernova about the problems that they work on,” said GE Vernova Fellow Aaron Langham. “I’m looking forward to learning more about how real people and industries use electrical power.”

    Fellow Julia Estrin echoed a similar sentiment: “I see this as a chance to connect fundamental research with practical applications — using insights from industry to shape innovative solutions in the lab that can have a meaningful impact at scale.”

    GE Vernova’s commitment to research is also providing support and inspiration for fellows. “This level of substantive enthusiasm for new ideas and technology is what comes from a company that not only looks toward the future, but also has the resources and determination to innovate impactfully,” says Owen Mylotte, a GE Vernova Fellow.

    The inaugural cohort of eight fellows will continue their research at MIT with tuition support from GE Vernova. Find the full list of fellows and their research topics here.

    Pipeline of future energy leaders

    Highlighting the alliance’s emphasis on cultivating student talent and leadership, GE Vernova CEO Scott Strazik introduced four MIT alumni who are now leaders at GE Vernova: Dhanush Mariappan SM ’03, PhD ’19, senior engineering manager in the GE Vernova Advanced Research Center; Brent Brunell SM ’00, technology director in the Advanced Research Center; Paolo Marone MBA ’21, CFO of wind; and Grace Caza MAP ’22, chief of staff in supply chain and operations.

    The four shared their experiences of working with MIT as students and their hopes for the future of this alliance in the realm of “people development,” as Mariappan highlighted. “Energy transition means leaders. And every one of the innovative research and professional education programs that will come out of this alliance is going to produce the leaders of the energy transition industry.”

    The alliance is underscoring its commitment to developing future energy leaders by supporting the New Engineering Education Transformation (NEET) program and expanding opportunities for student internships. With 100 new internships for MIT students announced in the days following the launch, GE Vernova is opening broad opportunities for MIT students at all levels to contribute to a sustainable future.

    “GE Vernova has been a tremendous collaborator every step of the way, with a clear vision of the technical breakthroughs we need to effect change at scale and a deep respect for MIT’s strengths and culture, as well as a hunger to listen and learn from us as well,” said Betar Gallant, alliance director and the Kendall Rohsenow Associate Professor of Mechanical Engineering at MIT. “Students, take this opportunity to learn, connect, and appreciate how much you’re valued, and how bright your futures are in this area of decarbonizing our energy systems. Your ideas and insight are going to help us determine and drive what’s next.”

    Event attendees mingle in MIT’s Lobby 13.

    Photo: Gretchen Ertl


    Daring to create the future we want

    The launch event transformed MIT’s Lobby 13 with green lighting and animated conversation around the posters and hardware demos on display, reflecting the sense of optimism for the future and the type of change the alliance — and the Commonwealth of Massachusetts — seeks to advance.

    “Because of this collaboration and the commitment to the work that needs doing, many things will be created,” said Secretary Paley. “People in this room will work together on all kinds of projects that will do incredible things for our economy, for our innovation, for our country, and for our climate.”

    The alliance builds on MIT’s growing portfolio of initiatives around sustainable energy systems, including the Climate Project at MIT, a presidential initiative focused on developing solutions to some of the toughest barriers to an effective global climate response. “This new alliance is a significant opportunity to move the needle of energy and climate research as we dare to create the future that we want, with the promise of impactful solutions for the world,” said Evelyn Wang, MIT vice president for energy and climate, who attended the launch.

    To that end, the alliance is supporting critical cross-institution efforts in energy and climate policy, including funding three master’s students in the MIT Technology and Policy Program and hosting an annual symposium in February 2026 to advance interdisciplinary research. GE Vernova is also providing philanthropic support to the MIT Human Insight Collaborative. For 2025-26, this support will contribute to addressing global energy poverty by supporting the MIT Abdul Latif Jameel Poverty Action Lab (J-PAL) in its work to expand access to affordable electricity in South Africa.

    “Our hope to our fellows, our hope to our students is this: While the stakes are high and the urgency has never been higher, the impact that you are going to have over the decades to come has never been greater,” said Roger Martella, chief corporate and sustainability officer at GE Vernova. “You have so much opportunity to move the world in a better direction. We need you to succeed. And our mission is to serve you and enable your success.”

    With the alliance’s launch — and GE Vernova’s new membership in several other MIT consortium programs related to sustainability, automation and robotics, and AI, including the Initiative for New Manufacturing, the MIT Energy Initiative, the MIT Climate and Sustainability Consortium, and the Center for Transportation and Logistics — it’s evident why Gallant says the company is “all-in at MIT.”

    The potential for tremendous impact on the energy industry is clear to those involved in the alliance. As GE Vernova Fellow Jack Morris said at the launch, “This is the beginning of something big.”

  •

    Ultrasonic device dramatically speeds harvesting of water from the air

    Feeling thirsty? Why not tap into the air? Even in desert conditions, there exists some level of humidity that, with the right material, can be soaked up and squeezed out to produce clean drinking water. In recent years, scientists have developed a host of promising sponge-like materials for this “atmospheric water harvesting.”

    But recovering the water from these materials usually requires heat — and time. Existing designs rely on heat from the sun to evaporate water from the materials and condense it into droplets, a step that can take hours or even days. Now, MIT engineers have come up with a way to quickly recover water from an atmospheric water harvesting material. Rather than wait for the sun to evaporate water out, the team uses ultrasonic waves to shake the water out.

    The researchers have developed an ultrasonic device that vibrates at high frequency. When a water-harvesting material, known as a “sorbent,” is placed on the device, the device emits ultrasound waves that are tuned to shake water molecules out of the sorbent. The team found that the device recovers water in minutes, versus the tens of minutes or hours required by thermal designs.


    MIT engineers design an ultrasonic system to “shake” water out of an atmospheric water harvester. The new design can recover captured water in minutes rather than hours.

    Unlike heat-based designs, the device does require a power source. The team envisions that the device could be powered by a small solar cell, which could also act as a sensor to detect when the sorbent is full. It could also be programmed to automatically turn on whenever a material has harvested enough moisture to be extracted. In this way, a system could soak up and shake out water from the air over many cycles in a single day.

    “People have been looking for ways to harvest water from the atmosphere, which could be a big source of water, particularly for desert regions and places where there is not even saltwater to desalinate,” says Svetlana Boriskina, principal research scientist in MIT’s Department of Mechanical Engineering. “Now we have a way to recover water quickly and efficiently.”

    Boriskina and her colleagues report on their new device in a study appearing today in the journal Nature Communications. The study’s first author is Ikra Iftekhar Shuvo, an MIT graduate student in media arts and sciences; co-authors include Carlos Díaz-Marín, Marvin Christen, Michael Lherbette, and Christopher Liem.

    Precious hours

    Boriskina’s group at MIT develops materials that interact with the environment in novel ways. Recently, her group explored atmospheric water harvesting (AWH) and ways that materials can be designed to efficiently absorb water from the air. The hope is that, if they can work reliably, AWH systems would be of most benefit to communities where traditional sources of drinking water — and even saltwater — are scarce.

    Like other groups, Boriskina’s lab had generally assumed that an AWH system in the field would absorb moisture during the night, and then use the heat from the sun during the day to naturally evaporate the water and condense it for collection.

    “Any material that’s very good at capturing water doesn’t want to part with that water,” Boriskina explains. “So you need to put a lot of energy and precious hours into pulling water out of the material.”

    She realized there could be a faster way to recover water after Shuvo joined her group. Shuvo had been working with ultrasound for wearable medical device applications. When he and Boriskina considered ideas for new projects, they realized that ultrasound could be a way to speed up the recovery step in atmospheric water harvesting.

    “It clicked: We have this big problem we’re trying to solve, and now Ikra seemed to have a tool that can be used to solve this problem,” Boriskina recalls.

    Water dance

    Ultrasonic waves are acoustic pressure waves that travel at frequencies above 20 kilohertz (20,000 cycles per second) — too high-pitched for humans to hear. And, as the team found, ultrasound vibrates at just the right frequency to shake water out of a material.

    “With ultrasound, we can precisely break the weak bonds between water molecules and the sites where they’re sitting,” Shuvo says. “It’s like the water is dancing with the waves, and this targeted disturbance creates momentum that releases the water molecules, and we can see them shake out in droplets.”

    Shuvo and Boriskina designed a new ultrasonic actuator to recover water from an atmospheric water harvesting material. The heart of the device is a flat ceramic ring that vibrates when voltage is applied. This ring is surrounded by an outer ring that is studded with tiny nozzles. Water droplets that shake out of a material can drop through the nozzles and into collection vessels attached above and below the vibrating ring.

    They tested the device on a previously designed atmospheric water harvesting material. Using quarter-sized samples of the material, the team first placed each sample in a humidity chamber, set to various humidity levels. Over time, the samples absorbed moisture and became saturated. The researchers then placed each sample on the ultrasonic actuator and powered it on to vibrate at ultrasonic frequencies. In all cases, the device was able to shake out enough water to dry out each sample in just a few minutes.

    The researchers calculate that, compared to using heat from the sun, the ultrasonic design is 45 times more efficient at extracting water from the same material.

    “The beauty of this device is that it’s completely complementary and can be an add-on to almost any sorbent material,” says Boriskina. She envisions that a practical, household system might consist of a fast-absorbing material and an ultrasonic actuator, each about the size of a window. Once the material is saturated, the actuator would briefly turn on, powered by a solar cell, to shake out the water. The material would then be ready to harvest more water, in multiple cycles throughout a single day.

    “It’s all about how much water you can extract per day,” she says. “With ultrasound, we can recover water quickly, and cycle again and again. That can add up to a lot per day.”

    This work was supported, in part, by the MIT Abdul Latif Jameel Water and Food Systems Lab and the MIT-Israel Zuckerman STEM Fund.
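    Boriskina’s “cycle again and again” point is easy to see with rough numbers. Below is a minimal sketch in which the absorption time and per-cycle yield are purely illustrative assumptions; only the minutes-versus-hours recovery contrast comes from the article.

```python
# Compare daily water yield when the slow step (recovering water from the
# sorbent) takes hours (solar evaporation) versus minutes (ultrasonics).
# ABSORB_MIN and YIELD_PER_CYCLE_G are illustrative assumptions, not
# figures from the study.

ABSORB_MIN = 60           # assumed minutes for the sorbent to saturate
YIELD_PER_CYCLE_G = 10.0  # assumed grams of water recovered per cycle

def cycles_per_day(recovery_min: float) -> int:
    """Number of full absorb-then-recover cycles that fit in 24 hours."""
    return int(24 * 60 // (ABSORB_MIN + recovery_min))

for name, recovery_min in [("thermal (hours)", 8 * 60), ("ultrasonic (minutes)", 5)]:
    n = cycles_per_day(recovery_min)
    print(f"{name:22s}: {n:2d} cycles/day -> {n * YIELD_PER_CYCLE_G:5.0f} g/day")
```

    Even with identical sorbent capacity, shrinking the recovery step from hours to minutes multiplies the number of daily cycles, which is where the per-day yield gain comes from.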

  •

    MIT Energy Initiative launches Data Center Power Forum

    With global power demand from data centers expected to more than double by 2030, the MIT Energy Initiative (MITEI) in September launched an effort that brings together MIT researchers and industry experts to explore innovative solutions for powering the data-driven future. At its annual research conference, MITEI announced the Data Center Power Forum, a targeted research effort for MITEI member companies interested in addressing the challenges of data center power demand. The Data Center Power Forum builds on lessons from MITEI’s May 2025 symposium on the energy needed to power the expansion of artificial intelligence (AI), as well as focus panels related to data centers at the fall 2024 research conference.

    In the United States, data centers consumed 4 percent of the country’s electricity in 2023, with demand expected to increase to 9 percent by 2030, according to the Electric Power Research Institute. Much of the growth in demand is from the increasing use of AI, which is placing an unprecedented strain on the electric grid. This surge in demand presents a serious challenge for the technology and energy sectors, government policymakers, and everyday consumers, who may see their electric bills skyrocket as a result.

    “MITEI has long supported research on ways to produce more efficient and cleaner energy and to manage the electric grid. In recent years, MITEI has also funded dozens of research projects relevant to data center energy issues. Building on this history and knowledge base, MITEI’s Data Center Power Forum is convening a specialized community of industry members who have a vital stake in the sustainable growth of AI and the acceleration of solutions for powering data centers and expanding the grid,” says William H. Green, the director of MITEI and the Hoyt C. Hottel Professor of Chemical Engineering.

    MITEI’s mission is to advance zero- and low-carbon solutions to expand energy access and mitigate climate change. MITEI works with companies from across the energy innovation chain, including the infrastructure, automotive, electric power, energy, natural resources, and insurance sectors. MITEI member companies have expressed strong interest in the Data Center Power Forum and are committing to support focused research on a wide range of energy issues associated with data center expansion, Green says.

    The Data Center Power Forum will provide MITEI member companies with reliable insights into energy supply, grid load operations and management, the built environment, and electricity market design and regulatory policy for data centers. The forum complements MIT’s deep expertise in adjacent topics such as low-power processors, efficient algorithms, task-specific AI, photonic devices, quantum computing, and the societal consequences of data center expansion. As part of the forum, MITEI’s Future Energy Systems Center is funding projects relevant to data center energy in its upcoming proposal cycles. MITEI Research Scientist Deep Deka has been named the program manager for the forum.

    “Figuring out how to meet the power demands of data centers is a complicated challenge. Our research is coming at this from multiple directions, from looking at ways to expand transmission capacity within the electrical grid in order to bring power to where it is needed, to ensuring the quality of electrical service for existing users is not diminished when new data centers come online, and to shifting computing tasks to times and places when and where energy is available on the grid,” said Deka.

    MITEI currently sponsors substantial research related to data center energy topics across several MIT departments. The existing research portfolio includes more than a dozen projects related to data centers, including low- or zero-carbon solutions for energy supply and infrastructure, electrical grid management, and electricity market policy. MIT researchers funded through MITEI’s industry consortium are also designing more energy-efficient power electronics and processors and investigating behind-the-meter low-/no-carbon power plants and energy storage. MITEI-supported experts are studying how to use AI to optimize electrical distribution and the siting of data centers, and conducting techno-economic analyses of data center power schemes. MITEI’s consortium projects are also bringing fresh perspectives to data center cooling challenges and considering policy approaches to balance the interests of stakeholders. By drawing together industry stakeholders from across the AI and grid value chain, the Data Center Power Forum enables a richer dialogue about solutions to power, grid, and carbon management problems in a noncommercial and collaborative setting.

    “The opportunity to meet and to hold discussions on key data center challenges with other forum members from different sectors, as well as with MIT faculty members and research scientists, is a unique benefit of this MITEI-led effort,” Green says.

    MITEI addressed the issue of data center power needs with its company members during its fall 2024 annual research conference in a panel session titled “The extreme challenge of powering data centers in a decarbonized way.” MITEI Director of Research Randall Field led a discussion with representatives from the large technology companies Google and Microsoft, known as “hyperscalers,” as well as the Madrid-based infrastructure developer Ferrovial S.E. and the utility company Exelon Corp. Another conference session addressed the related topic of “Energy storage and grid expansion.” This past spring, MITEI focused its annual Spring Symposium on data centers, hosting faculty members and researchers from MIT and other universities, business leaders, and a representative of the Federal Energy Regulatory Commission for a full day of sessions on the topic “AI and energy: Peril and promise.”

  •

    What should countries do with their nuclear waste?

    One of the highest-risk components of nuclear waste is iodine-129 (I-129), which stays radioactive for millions of years and accumulates in human thyroids when ingested. In the U.S., nuclear waste containing I-129 is scheduled to be disposed of in deep underground repositories, which scientists say will sufficiently isolate it.

    Meanwhile, across the globe, France routinely releases low-level radioactive effluents containing iodine-129 and other radionuclides into the ocean. France recycles its spent nuclear fuel, and the reprocessing plant discharges about 153 kilograms of iodine-129 each year, under the French regulatory limit.

    Is dilution a good solution? What’s the best way to handle spent nuclear fuel? A new study by MIT researchers and their collaborators at national laboratories quantifies I-129 release under three different scenarios: the U.S. approach of disposing of spent fuel directly in deep underground repositories, the French approach of dilution and release, and an approach that uses filters to capture I-129 and disposes of them in shallow underground waste repositories.

    The researchers found France’s current practice of reprocessing releases about 90 percent of the waste’s I-129 into the biosphere. They found low levels of I-129 in ocean water around France’s and the U.K.’s former reprocessing sites, including the English Channel and North Sea. Although the low level of I-129 in the water in Europe is not considered to pose health risks, the U.S. approach of deep underground disposal leads to far less I-129 being released, the researchers found.

    The researchers also investigated the effect of environmental regulations and technologies related to I-129 management, to illuminate the tradeoffs associated with different approaches around the world.

    “Putting these pieces together to provide a comprehensive view of iodine-129 is important,” says MIT Assistant Professor Haruko Wainwright, a first author on the paper, who holds a joint appointment in the departments of Nuclear Science and Engineering and of Civil and Environmental Engineering. “There are scientists that spend their lives trying to clean up iodine-129 at contaminated sites. These scientists are sometimes shocked to learn some countries are releasing so much iodine-129. This work also provides a life-cycle perspective. We’re not just looking at final disposal and solid waste, but also when and where release is happening. It puts all the pieces together.”

    MIT graduate student Kate Whiteaker SM ’24 led many of the analyses with Wainwright. Their co-authors are Hansell Gonzalez-Raymat, Miles Denham, Ian Pegg, Daniel Kaplan, Nikolla Qafoku, David Wilson, Shelly Wilson, and Carol Eddy-Dilek. The study appears today in Nature Sustainability.

    Managing waste

    Iodine-129 is often a key focus for scientists and engineers as they conduct safety assessments of nuclear waste disposal sites around the world. It has a half-life of 15.7 million years, high environmental mobility, and could potentially cause cancers if ingested. The U.S. sets a strict limit on how much I-129 can be released and how much can be present in drinking water — 5.66 nanograms per liter, the lowest such level of any radionuclide.

    “Iodine-129 is very mobile, so it is usually the highest-dose contributor in safety assessments,” Wainwright says.

    For the study, the researchers calculated the release of I-129 across three different waste management strategies by combining data from current and former reprocessing sites with repository assessment models and simulations.

    The authors defined the environmental impact as the release of I-129 into the biosphere that humans could be exposed to, as well as its concentrations in surface water. They measured I-129 release per unit of electrical energy generated — the output of a 1-gigawatt power plant over one year — denoted as kg/GWe·y.

    Under the U.S. approach of deep underground disposal with barrier systems, assuming the barrier canisters fail at 1,000 years (a conservative estimate), the researchers found 2.14 × 10⁻⁸ kg/GWe·y of I-129 would be released between 1,000 and 1 million years from today.

    They estimate that 4.51 kg/GWe·y of I-129, or 91 percent of the total, would be released into the biosphere in the scenario where fuel is reprocessed and the effluents are diluted and released. About 3.3 percent of the I-129 is captured by gas filters, which are then disposed of in shallow subsurfaces as low-level radioactive waste. A further 5.2 percent remains in the waste stream of the reprocessing plant, which is then disposed of as high-level radioactive waste.

    If the waste is recycled with gas filters to directly capture I-129, 0.05 kg/GWe·y of the I-129 is released, while 94 percent is disposed of in the low-level disposal sites. For shallow disposal, some kind of human disruption and intrusion is assumed to occur after government or institutional control expires (typically 100-1,000 years). That results in a potential release of the disposed amount to the environment after the control period.

    Overall, the current practice of recycling spent nuclear fuel releases the majority of I-129 into the environment today, while the direct disposal of spent fuel releases around 1/100,000,000 of that amount over 1 million years. When gas filters are used to capture I-129, the majority of the I-129 goes to shallow underground repositories, from which it could be accidentally released through human intrusion down the line.

    The researchers also quantified the concentration of I-129 in different surface waters near current and former fuel reprocessing facilities, including the English Channel and the North Sea near reprocessing plants in France and the U.K. They also analyzed the U.S. Columbia River downstream of a site in Washington state where material for nuclear weapons was produced during the Cold War, and they studied a similar site in South Carolina. The researchers found far higher concentrations of I-129 within the South Carolina site, where the low-level radioactive effluents were released far from major rivers and hence resulted in less dilution in the environment.

    “We wanted to quantify the environmental factors and the impact of dilution, which in this case affected concentrations more than discharge amounts,” Wainwright says. “Someone might take our results to say dilution still works: It’s reducing the contaminant concentration and spreading it over a large area. On the other hand, in the U.S., imperfect disposal has led to locally higher surface water concentrations. This provides a cautionary tale that disposal could concentrate contaminants, and should be carefully designed to protect local communities.”

    Fuel cycles and policy

    Wainwright doesn’t want her findings to dissuade countries from recycling nuclear fuel. She says countries like Japan plan to use increased filtration to capture I-129 when they reprocess spent fuel. Filters with I-129 can be disposed of as low-level waste under U.S. regulations.

    “Since I-129 is an internal carcinogen without strong penetrating radiation, shallow underground disposal would be appropriate in line with other hazardous waste,” Wainwright says. “The history of environmental protection since the 1960s is shifting from waste dumping and release to isolation. But there are still industries that release waste into the air and water. We have seen that they often end up causing issues in our daily life — such as CO2, mercury, PFAS and others — especially when there are many sources or when bioaccumulation happens. The nuclear community has been leading in waste isolation strategies and technologies since the 1950s. These efforts should be further enhanced and accelerated. But at the same time, if someone does not choose nuclear energy because of waste issues, it would encourage other industries with much lower environmental standards.”

    The work was supported by MIT’s Climate Fast Forward Faculty Fund and the U.S. Department of Energy.
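    The reported release figures can be cross-checked with a couple of lines of arithmetic. The sketch below uses only numbers quoted in the article; the back-calculated total inventory is an assumption that treats the 4.51 kg figure as exactly 91 percent of the total.

```python
# Iodine-129 released to the biosphere, in kg per GWe·year, for the study's
# three scenarios (figures taken from the article above).
direct_disposal = 2.14e-8   # deep geologic disposal, released over ~1 million years
dilute_release = 4.51       # reprocessing with dilution and release (91% of total)
filtered = 0.05             # reprocessing with gas filters capturing I-129

# Back-calculate the implied total I-129 inventory per GWe·y. This assumes
# 4.51 kg is exactly 91% of the total, per the article's rounded percentages.
total_inventory = dilute_release / 0.91

# The ratio behind the article's "around 1/100,000,000" comparison.
ratio = dilute_release / direct_disposal

print(f"implied total inventory: {total_inventory:.2f} kg/GWe·y")
print(f"direct disposal releases ~1/{ratio:.1e} of the dilute-and-release amount")
```

    The ratio comes out near 2 × 10⁸, consistent with the order-of-magnitude "1/100,000,000" figure in the text.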

  •

    3 Questions: How AI is helping us monitor and support vulnerable ecosystems

    A recent study from Oregon State University estimated that more than 3,500 animal species are at risk of extinction because of factors including habitat alterations, natural resources being overexploited, and climate change.To better understand these changes and protect vulnerable wildlife, conservationists like MIT PhD student and Computer Science and Artificial Intelligence Laboratory (CSAIL) researcher Justin Kay are developing computer vision algorithms that carefully monitor animal populations. A member of the lab of MIT Department of Electrical Engineering and Computer Science assistant professor and CSAIL principal investigator Sara Beery, Kay is currently working on tracking salmon in the Pacific Northwest, where they provide crucial nutrients to predators like birds and bears, while managing the population of prey, like bugs.With all that wildlife data, though, researchers have lots of information to sort through and many AI models to choose from to analyze it all. Kay and his colleagues at CSAIL and the University of Massachusetts Amherst are developing AI methods that make this data-crunching process much more efficient, including a new approach called “consensus-driven active model selection” (or “CODA”) that helps conservationists choose which AI model to use. Their work was named a Highlight Paper at the International Conference on Computer Vision (ICCV) in October.That research was supported, in part, by the National Science Foundation, Natural Sciences and Engineering Research Council of Canada, and Abdul Latif Jameel Water and Food Systems Lab (J-WAFS). Here, Kay discusses this project, among other conservation efforts.Q: In your paper, you pose the question of which AI models will perform the best on a particular dataset. 
With as many as 1.9 million pre-trained models available in the HuggingFace Models repository alone, how does CODA help us address that challenge?A: Until recently, using AI for data analysis has typically meant training your own model. This requires significant effort to collect and annotate a representative training dataset, as well as iteratively train and validate models. You also need a certain technical skill set to run and modify AI training code. The way people interact with AI is changing, though — in particular, there are now millions of publicly available pre-trained models that can perform a variety of predictive tasks very well. This potentially enables people to use AI to analyze their data without developing their own model, simply by downloading an existing model with the capabilities they need. But this poses a new challenge: Which model, of the millions available, should they use to analyze their data? Typically, answering this model selection question also requires you to spend a lot of time collecting and annotating a large dataset, albeit for testing models rather than training them. This is especially true for real applications where user needs are specific, data distributions are imbalanced and constantly changing, and model performance may be inconsistent across samples. Our goal with CODA was to substantially reduce this effort. We do this by making the data annotation process “active.” Instead of requiring users to bulk-annotate a large test dataset all at once, in active model selection we make the process interactive, guiding users to annotate the most informative data points in their raw data. This is remarkably effective, often requiring users to annotate as few as 25 examples to identify the best model from their set of candidates. We’re very excited about CODA offering a new perspective on how to best utilize human effort in the development and deployment of machine-learning (ML) systems. 
As AI models become more commonplace, our work emphasizes the value of focusing effort on robust evaluation pipelines, rather than solely on training.

Q: You applied the CODA method to classifying wildlife in images. Why did it perform so well, and what role can systems like this have in monitoring ecosystems in the future?

A: One key insight was that when considering a collection of candidate AI models, the consensus of all of their predictions is more informative than any individual model’s predictions. This can be seen as a sort of “wisdom of the crowd”: On average, pooling the votes of all models gives you a decent prior over what the labels of individual data points in your raw dataset should be. Our approach with CODA is based on estimating a “confusion matrix” for each AI model — given that the true label for some data point is class X, what is the probability that an individual model predicts class X, Y, or Z? This creates informative dependencies between all of the candidate models, the categories you want to label, and the unlabeled points in your dataset.

Consider an example application where you are a wildlife ecologist who has just collected a dataset containing potentially hundreds of thousands of images from cameras deployed in the wild. You want to know what species are in these images, a time-consuming task that computer vision classifiers can help automate, and you are trying to decide which species classification model to run on your data. If you have labeled 50 images of tigers so far, and some model has performed well on those 50 images, you can be pretty confident it will perform well on the remainder of the (currently unlabeled) tiger images in your raw dataset as well. You also know that when that model predicts some image contains a tiger, it is likely to be correct, and therefore that any model predicting a different label for that image is more likely to be wrong.
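Numerically, that “wisdom of the crowd” prior and the per-model confusion matrices can be sketched as follows. This is a simplified illustration with hypothetical labels, not the probabilistic estimator used in the paper:

```python
from collections import Counter

def consensus_labels(predictions):
    """Majority vote across models for each data point.
    predictions has shape [n_models][n_points]."""
    n_points = len(predictions[0])
    return [
        Counter(model_preds[j] for model_preds in predictions).most_common(1)[0][0]
        for j in range(n_points)
    ]

def confusion_matrix(preds, reference, classes):
    """Row-normalized confusion matrix of one model against reference labels:
    entry [x][y] estimates P(model predicts y | reference label is x)."""
    counts = {x: {y: 0 for y in classes} for x in classes}
    for p, r in zip(preds, reference):
        counts[r][p] += 1
    return {
        x: {y: counts[x][y] / max(1, sum(counts[x].values())) for y in classes}
        for x in classes
    }

# Three hypothetical species classifiers voting on three camera-trap images.
preds = [
    ["tiger", "deer", "tiger"],
    ["tiger", "deer", "deer"],
    ["tiger", "tiger", "tiger"],
]
consensus = consensus_labels(preds)  # majority vote: ['tiger', 'deer', 'tiger']
cm = confusion_matrix(preds[1], consensus, classes=["tiger", "deer"])
```

Before any human labels exist, the consensus can serve as a provisional reference for every model; the estimates are then refined as real annotations arrive.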
You can use all these interdependencies to construct probabilistic estimates of each model’s confusion matrix, as well as a probability distribution over which model has the highest accuracy on the overall dataset. These design choices allow us to make more informed choices about which data points to label, and they are ultimately the reason why CODA performs model selection much more efficiently than past work.

There are also a lot of exciting possibilities for building on top of our work. We think there may be even better ways of constructing informative priors for model selection based on domain expertise — for instance, if it is already known that one model performs exceptionally well on some subset of classes or poorly on others. There are also opportunities to extend the framework to support more complex machine-learning tasks and more sophisticated probabilistic models of performance. We hope our work can provide inspiration and a starting point for other researchers to keep pushing the state of the art.

Q: You work in the Beery lab, led by Sara Beery, where researchers are combining the pattern-recognition capabilities of machine-learning algorithms with computer vision technology to monitor wildlife. What are some other ways your team is tracking and analyzing the natural world, beyond CODA?

A: The lab is a really exciting place to work, and new projects are emerging all the time. We have ongoing projects monitoring coral reefs with drones, re-identifying individual elephants over time, and fusing multi-modal Earth observation data from satellites and in-situ cameras, just to name a few. Broadly, we look at emerging technologies for biodiversity monitoring, try to understand where the data analysis bottlenecks are, and develop new computer vision and machine-learning approaches that address those problems in a widely applicable way. It’s an exciting way of approaching problems that targets the “meta-questions” underlying the particular data challenges we face.
The computer vision algorithms I’ve worked on that count migrating salmon in underwater sonar video are examples of that work. We often deal with shifting data distributions, even as we try to construct the most diverse training datasets we can. We always encounter something new when we deploy a new camera, and this tends to degrade the performance of computer vision algorithms. This is one instance of a general problem in machine learning called domain adaptation, but when we tried to apply existing domain adaptation algorithms to our fisheries data, we realized there were serious limitations in how existing algorithms were trained and evaluated. We were able to develop a new domain adaptation framework, published earlier this year in Transactions on Machine Learning Research, that addressed these limitations and led to advancements in fish counting, and even in self-driving and spacecraft analysis.

One line of work that I’m particularly excited about is understanding how to better develop and analyze the performance of predictive ML algorithms in the context of what they are actually used for. Usually, the outputs from some computer vision algorithm — say, bounding boxes around animals in images — are not actually the thing that people care about, but rather a means to an end for answering a larger question — say, what species live here, and how is that changing over time? We have been working on methods to analyze predictive performance in this context and to reconsider the ways that we input human expertise into ML systems. CODA was one example of this, where we showed that we could treat the ML models themselves as fixed and build a statistical framework to understand their performance very efficiently. We have been working recently on similar integrated analyses combining ML predictions with multi-stage prediction pipelines, as well as with ecological statistical models.
The natural world is changing at unprecedented rates and scales, and being able to quickly move from scientific hypotheses or management questions to data-driven answers is more important than ever for protecting ecosystems and the communities that depend on them. Advancements in AI can play an important role, but we need to think critically about the ways that we design, train, and evaluate algorithms in the context of these very real challenges.


    Using classic physical phenomena to solve new problems

Quenching, a powerful heat transfer mechanism, is remarkably effective at transporting heat away from a surface. But in extreme environments, like nuclear power plants and aboard spaceships, a lot rides on the efficiency and speed of the process. That’s why Marco Graffiedi, a fifth-year doctoral student in MIT’s Department of Nuclear Science and Engineering (NSE), is researching the phenomenon to help develop the next generation of spaceships and nuclear plants.

Growing up in small-town Italy

Graffiedi’s parents encouraged a sense of exploration, giving him responsibilities for family projects even at a young age. When they restored a countryside cabin near Palazzolo, a small town in the hills between Florence and Bologna, the then-14-year-old Marco got a project of his own: he had to ensure the animals on the property had enough accessible water without overfilling the storage tank. Marco designed and built a passive hydraulic system that effectively solved the problem and is still functional today.

His proclivity for science continued in high school in Lugo, where Graffiedi enjoyed recreating classical physics phenomena through experiments. Incidentally, the high school is named after Gregorio Ricci-Curbastro, a mathematician whose work laid the foundation for the theory of relativity — history that is not lost on Graffiedi.
After high school, Graffiedi attended the International Physics Olympiad in Bangkok, a formative event that cemented his love for physics.

A gradual shift toward engineering

A passion for physics and the basic sciences notwithstanding, Graffiedi wondered if he’d be a better fit for engineering, where he could use physics, chemistry, and math as tools to build something. Following that path, he completed a bachelor’s and a master’s in mechanical engineering — because an undergraduate degree in Italy takes only three years, pretty much everyone does a master’s, Graffiedi laughs — at the Università di Pisa and the Scuola Superiore Sant’Anna (School of Engineering). The Sant’Anna is a highly selective institution that most students attend to complement their university studies.

Graffiedi’s university studies gradually moved him toward environmental engineering. He researched concentrated solar power, aiming to reduce its cost by studying the associated thermal cycle and improving solar power collection. While the project was not very successful, it reinforced Graffiedi’s conviction that alternative energies are necessary. Still firmly planted in energy studies, Graffiedi worked on fracture mechanics for his master’s thesis, in collaboration with what was then GE Oil and Gas, researching how to improve the effectiveness of centrifugal compressors. A summer internship at Fermilab had Graffiedi working on the thermal characterization of superconductive coatings.

With his studies behind him, Graffiedi was still unsure about his professional path. Through the Edison Program at GE Oil and Gas, where he worked shortly after graduation, Graffiedi got to test-drive many fields — from mechanical and thermal engineering to gas turbines and combustion. He eventually became a test engineer, coordinating a team of engineers to test a new upgrade to the company’s gas turbines.
“I set up the test bench, understanding how to instrument the machine, collect data, and run the test,” Graffiedi remembers. “There was a lot you need to think about, from a little turbine blade with sensors on it to the location of safety exits on the test bench.”

The move toward nuclear engineering

As fun as the test engineering job was, Graffiedi started to crave more technical knowledge and wanted to pivot back to science. As part of his exploration, he came across nuclear energy and, understanding it to be the future, decided to lean on his engineering background and apply to MIT NSE. He found a fit in Professor Matteo Bucci’s group and decided to explore boiling and quenching. The move from science to engineering, and back to science, was now complete.

NASA, the primary sponsor of the research, is interested in preventing cryogenic fuels from boiling, because boiling leads to loss of fuel, and the resulting vapor must be vented to avoid overpressurizing a fuel tank.

Graffiedi’s primary focus is on quenching, which will play an important role in refueling in space — and in the cooling of nuclear cores. When a cryogen is used to cool down a surface, it undergoes what is known as the Leidenfrost effect: it first forms a thin vapor film that acts as an insulator and prevents further cooling. To facilitate rapid cooling, it’s important to accelerate the collapse of that vapor film. Graffiedi is exploring the mechanics of the quenching process at the microscopic level, studies that are important for both land and space applications.

Boiling can be used for yet another modern application: improving the efficiency of cooling systems for data centers. The growth of data centers and electric transportation systems demands effective heat transfer mechanisms to avoid overheating. Immersion cooling using dielectric fluids — fluids that do not conduct electricity — is one way to do so. These fluids remove heat from a surface by leaning on the principle of boiling.
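Why the insulating vapor film matters so much can be illustrated with a textbook lumped-capacitance cooling model. The numbers and heat transfer coefficients below are made up for illustration and are not from Graffiedi’s research; they show only that collapsing the film (raising the effective heat transfer coefficient) shortens the cooldown in direct proportion.

```python
import math

def cooldown_time(h, area, mass, c_p, T0, T_inf, T_target):
    """Time for a lumped body at temperature T0 to reach T_target in a bath
    at T_inf, under Newton cooling with heat transfer coefficient h (W/m^2/K)."""
    tau = mass * c_p / (h * area)  # thermal time constant, seconds
    return tau * math.log((T0 - T_inf) / (T_target - T_inf))

# Hypothetical small metal part quenched in liquid nitrogen (77 K).
area, mass, c_p = 0.01, 0.1, 500.0        # m^2, kg, J/(kg K)
T0, T_inf, T_target = 300.0, 77.0, 100.0  # K

# Insulating vapor film: poor heat transfer (illustrative h values).
t_film = cooldown_time(100.0, area, mass, c_p, T0, T_inf, T_target)
# Film collapsed, vigorous nucleate boiling: much better heat transfer.
t_nucleate = cooldown_time(10000.0, area, mass, c_p, T0, T_inf, T_target)
```

In this sketch a 100x larger heat transfer coefficient gives a 100x faster quench, which is why accelerating the collapse of the vapor film is so valuable.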
For effective boiling, the fluid must overcome the Leidenfrost effect and break the vapor film that forms. The fluid must also have a high critical heat flux (CHF), which is the maximum heat flux at which boiling can effectively transfer heat from a heated surface to a liquid. Because dielectric fluids have a lower CHF than water, Graffiedi is exploring solutions to raise these limits. In particular, he is investigating how high electric fields can be used to enhance CHF, and even how boiling can be used to cool electronic components in the absence of gravity. He published this research in Applied Thermal Engineering in June.

Beyond boiling

Graffiedi’s love of science and engineering shows in his commitment to teaching as well. He has been a teaching assistant for four classes at NSE, winning awards for his contributions. His many additional achievements include the Manson Benedict Award, presented to an NSE graduate student for excellence in academic performance and professional promise in nuclear science and engineering, and a service award for his role as past president of the MIT Division of the American Nuclear Society.

Boston has a fervent Italian community, Graffiedi says, and he enjoys being a part of it. Fittingly, the MIT Italian club is called MITaly. When he’s not at work or otherwise engaged, Graffiedi loves Latin dancing, something he makes time for at least a couple of times a week. And while he has his favorite Italian restaurants in the city, Graffiedi is grateful for another set of skills his parents gave him when he was just 11: making perfect pizza and pasta.


    Burning things to make things

Around 80 percent of global energy production today comes from the combustion of fossil fuels. Combustion, or the process of converting stored chemical energy into thermal energy through burning, is vital for a variety of common activities, including electricity generation, transportation, and domestic uses like heating and cooking — but it also yields a host of environmental consequences, contributing to air pollution and greenhouse gas emissions.

Sili Deng, the Doherty Chair in Ocean Utilization and associate professor of mechanical engineering at MIT, is leading research to drive the transition from heavy dependence on fossil fuels to renewable energy with storage.

“I was first introduced to flame synthesis in my junior year in college,” Deng says. “I realized you can actually burn things to make things, [and] that was really fascinating.”


Video: “Burning Things to Make Things” (Department of Mechanical Engineering)

Deng says she ultimately picked combustion as a focus of her work because she likes the intellectual challenge the concept offers. “In combustion you have chemistry, and you have fluid mechanics. Each subject is very rich in science. This also has very strong engineering implications and applications.”

Deng’s research group targets three areas: building up fundamental knowledge of combustion processes and emissions; developing alternative fuels and metal combustion to replace fossil fuels; and synthesizing flame-based materials for catalysis and energy storage, which can bring down the cost of manufacturing battery materials.

One focus of the team has been low-cost, low-emission manufacturing of cathode materials for lithium-ion batteries, which play an increasingly critical role in transportation electrification (e.g., batteries for electric vehicles) and in grid storage of electricity generated from renewable sources like wind and solar. Deng’s team has developed a technology they call flame-assisted spray pyrolysis, or FASP, which can help reduce the high manufacturing costs associated with cathode materials.

FASP is based on flame synthesis, a technology that dates back nearly 3,000 years. In ancient China, it was the primary way black ink materials were made. “[People burned] vegetables or woods, such that afterwards they can collect the solidified smoke,” Deng explains. “For our battery applications, we can try to fit in the same formula, but of course with new tweaks.”

The team is also interested in developing alternative fuels, including the use of metals like aluminum to power rockets. “We’re interested in utilizing aluminum as a fuel for civil applications,” Deng says, because aluminum is abundant in the earth, cheap, and available globally.
“What we are trying to do is to understand [aluminum combustion] and be able to tailor its ignition and propagation properties.”

Among other accolades, Deng is a 2025 recipient of the Hiroshi Tsuji Early Career Researcher Award from the Combustion Institute, which recognizes excellence in fundamental or applied combustion science research.


    The brain power behind sustainable AI

How can you use science to build a better gingerbread house?

That was something Miranda Schwacke spent a lot of time thinking about. The MIT graduate student in the Department of Materials Science and Engineering (DMSE) is part of Kitchen Matters, a group of grad students who use food and kitchen tools to explain scientific concepts through short videos and outreach events. Past topics have included why chocolate “seizes,” or becomes difficult to work with when melting (spoiler: water gets in), and how to make isomalt, the sugar glass that stunt performers jump through in action movies.

Two years ago, when the group was making a video on how to build a structurally sound gingerbread house, Schwacke scoured cookbooks for a variable that would produce the most dramatic difference in the cookies. “I was reading about what determines the texture of cookies, and then tried several recipes in my kitchen until I got two gingerbread recipes that I was happy with,” Schwacke says.

She focused on butter, which contains water that turns to steam at high baking temperatures, creating air pockets in cookies. Schwacke predicted that decreasing the amount of butter would yield denser gingerbread, strong enough to hold together as a house. “This hypothesis is an example of how changing the structure can influence the properties and performance of a material,” Schwacke said in the eight-minute video.

That same curiosity about materials properties and performance drives her research on the high energy cost of computing, especially for artificial intelligence. Schwacke develops new materials and devices for neuromorphic computing, which mimics the brain by processing and storing information in the same place. She studies electrochemical ionic synapses — tiny devices that can be “tuned” to adjust their conductivity, much like neurons strengthening or weakening connections in the brain.

“If you look at AI in particular — to train these really large models — that consumes a lot of energy.
And if you compare that to the amount of energy that we consume as humans when we’re learning things, the brain consumes a lot less energy,” Schwacke says. “That’s what led to this idea to find more brain-inspired, energy-efficient ways of doing AI.”

Her advisor, Bilge Yildiz, underscores the point: one reason the brain is so efficient is that data doesn’t need to be moved back and forth. “In the brain, the connections between our neurons, called synapses, are where we process information. Signal transmission is there. It is processed, programmed, and also stored in the same place,” says Yildiz, the Breene M. Kerr (1951) Professor in the Department of Nuclear Science and Engineering and DMSE. Schwacke’s devices aim to replicate that efficiency.

Scientific roots

The daughter of a marine biologist mom and an electrical engineer dad, Schwacke was immersed in science from a young age. Science was “always a part of how I understood the world.”

“I was obsessed with dinosaurs. I wanted to be a paleontologist when I grew up,” she says. But her interests broadened. At her middle school in Charleston, South Carolina, she joined a FIRST Lego League robotics competition, building robots to complete tasks like pushing or pulling objects. “My parents, my dad especially, got very involved in the school team and helping us design and build our little robot for the competition.”

Her mother, meanwhile, studied how dolphin populations are affected by pollution for the National Oceanic and Atmospheric Administration. That had a lasting impact. “That was an example of how science can be used to understand the world, and also to figure out how we can improve the world,” Schwacke says. “And that’s what I’ve always wanted to do with science.”

Her interest in materials science came later, in her high school magnet program.
There, she was introduced to the interdisciplinary subject, a blend of physics, chemistry, and engineering that studies the structure and properties of materials and uses that knowledge to design new ones.

“I always liked that it goes from this very basic science, where we’re studying how atoms are ordering, all the way up to these solid materials that we interact with in our everyday lives — and how that gives them their properties that we can see and play with,” Schwacke says.

As a senior, she participated in a research program with a thesis project on dye-sensitized solar cells, a low-cost, lightweight solar technology that uses dye molecules to absorb light and generate electricity. “What drove me was really understanding, this is how we go from light to energy that we can use — and also seeing how this could help us with having more renewable energy sources,” Schwacke says.

After high school, she headed across the country to Caltech. “I wanted to try a totally new place,” she says. There she studied materials science, including nanostructured materials thousands of times thinner than a human hair, focusing on materials properties and microstructure — the tiny internal structure that governs how materials behave — which led her to electrochemical systems like batteries and fuel cells.

AI energy challenge

At MIT, she continued exploring energy technologies. She met Yildiz during a Zoom meeting in her first year of graduate school, in fall 2020, when the campus was still operating under strict Covid-19 protocols.
Yildiz’s lab studies how charged atoms, or ions, move through materials in technologies like fuel cells, batteries, and electrolyzers. The lab’s research into brain-inspired computing fired Schwacke’s imagination, but she was equally drawn to Yildiz’s way of talking about science. “It wasn’t based on jargon and emphasized a very basic understanding of what was going on — that ions are going here, and electrons are going here — to understand fundamentally what’s happening in the system,” Schwacke says.

That mindset shaped her approach to research. Her early projects focused on the properties these devices need to work well — fast operation, low energy use, and compatibility with semiconductor technology — and on using magnesium ions instead of hydrogen, which can escape into the environment and make devices unstable.

Her current project, the focus of her PhD thesis, centers on understanding how the insertion of magnesium ions into tungsten oxide, a metal oxide whose electrical properties can be precisely tuned, changes its electrical resistance. In these devices, tungsten oxide serves as a channel layer, whose resistance controls signal strength, much as synapses regulate signals in the brain. “I am trying to understand exactly how these devices change the channel conductance,” Schwacke says.

Schwacke’s research was recognized with a MathWorks Fellowship from the School of Engineering in 2023 and 2024. The fellowship supports graduate students who leverage tools like MATLAB or Simulink in their work; Schwacke applied MATLAB to critical data analysis and visualization.

Yildiz describes Schwacke’s research as a novel step toward solving one of AI’s biggest challenges. “This is electrochemistry for brain-inspired computing,” Yildiz says. “It’s a new context for electrochemistry, but also with an energy implication, because the energy consumption of computing is unsustainably increasing.
We have to find new ways of doing computing with much lower energy, and this is one way that can help us move in that direction.”

Like any pioneering work, it comes with challenges, especially in bridging the concepts between electrochemistry and semiconductor physics. “Our group comes from a solid-state chemistry background, and when we started this work looking into magnesium, no one had used magnesium in these kinds of devices before,” Schwacke says. “So we were looking at the magnesium battery literature for inspiration and different materials and strategies we could use. When I started this, I wasn’t just learning the language and norms for one field — I was trying to learn it for two fields, and also translate between the two.”

She also grapples with a challenge familiar to all scientists: how to make sense of messy data. “The main challenge is being able to take my data and know that I’m interpreting it in a way that’s correct, and that I understand what it actually means,” Schwacke says. She overcomes hurdles by collaborating closely with colleagues across fields, including neuroscience and electrical engineering, and sometimes by just making small changes to her experiments and watching what happens next.

Community matters

Schwacke is not just active in the lab. Through Kitchen Matters, she and her fellow DMSE grad students set up booths at local events like the Cambridge Science Fair and Steam It Up, an after-school program with hands-on activities for kids. “We did ‘pHun with Food,’ with ‘fun’ spelled with a pH, so we had cabbage juice as a pH indicator,” Schwacke says. “We let the kids test the pH of lemon juice and vinegar and dish soap, and they had a lot of fun mixing the different liquids and seeing all the different colors.”

She has also served as social chair and treasurer for DMSE’s graduate student group, the Graduate Materials Council.
As an undergraduate at Caltech, she led workshops in science and technology for Robogals, a student-run group that encourages young women to pursue careers in science, and assisted students in applying for the school’s Summer Undergraduate Research Fellowships.

For Schwacke, these experiences sharpened her ability to explain science to different audiences, a skill she sees as vital whether she’s presenting at a kids’ fair or at a research conference. “I always think, where is my audience starting from, and what do I need to explain before I can get into what I’m doing so that it’ll all make sense to them?” she says.

Schwacke sees the ability to communicate as central to building community, which she considers an important part of doing research. “It helps with spreading ideas. It always helps to get a new perspective on what you’re working on,” she says. “I also think it keeps us sane during our PhD.”

Yildiz sees Schwacke’s community involvement as an important part of her resume. “She’s doing all these activities to motivate the broader community to do research, to be interested in science, to pursue science and technology, but that ability will help her also progress in her own research and academic endeavors.”

After her PhD, Schwacke wants to take that ability to communicate with her to academia, where she’d like to inspire the next generation of scientists and engineers. Yildiz has no doubt she’ll thrive. “I think she’s a perfect fit,” Yildiz says. “She’s brilliant, but brilliance by itself is not enough. She’s persistent, resilient. You really need those on top of that.”