More stories

  • From nanoscale to global scale: Advancing MIT’s special initiatives in manufacturing, health, and climate

    “MIT.nano is essential to making progress in high-priority areas where I believe that MIT has a responsibility to lead,” opened MIT President Sally Kornbluth at the 2025 Nano Summit. “If we harness our collective efforts, we can make a serious positive impact.”

    It was these collective efforts that drove discussions at the daylong event hosted by MIT.nano and focused on the importance of nanoscience and nanotechnology across MIT’s special initiatives — projects deemed critical to MIT’s mission to help solve the world’s greatest challenges. With each new talk, common themes were reemphasized: collaboration across fields, solutions that can scale up from lab to market, and the use of nanoscale science to enact grand-scale change.

    “MIT.nano has truly set itself apart, in the Institute’s signature way, with an emphasis on cross-disciplinary collaboration and open access,” said Kornbluth. “Today, you’re going to hear about the transformative impact of nanoscience and nanotechnology, and how working with the very small can help us do big things for the world together.”

    Collaborating on health

    Angela Koehler, faculty director of the MIT Health and Life Sciences Collaborative (MIT HEALS) and the Charles W. and Jennifer C. Johnson Professor of Biological Engineering, opened the first session with a question: How can we build a community across campus to tackle some of the most transformative problems in human health? In response, three speakers shared their work enabling new frontiers in medicine.

    Ana Jaklenec, principal research scientist at the Koch Institute for Integrative Cancer Research, spoke about single-injection vaccines, and how her team looked to the techniques used in fabrication of electrical engineering components to see how multiple pieces could be packaged into a tiny device. “MIT.nano was instrumental in helping us develop this technology,” she said. “We took something that you can do in microelectronics and the semiconductor industry and brought it to the pharmaceutical industry.”

    While Jaklenec applied insight from electronics to her work in health care, Giovanni Traverso, the Karl Van Tassel Career Development Professor of Mechanical Engineering, who is also a gastroenterologist at Brigham and Women’s Hospital, found inspiration in nature, studying the cephalopod squid and remora fish to design ingestible drug delivery systems. Representing the industry side of life sciences, Mirai Bio senior vice president Jagesh Shah SM ’95, PhD ’99 presented his company’s precision-targeted lipid nanoparticles for therapeutic delivery. Shah, as well as the other speakers, emphasized the importance of collaboration between industry and academia to make a meaningful impact, and the need to strengthen the pipeline for young scientists.

    Manufacturing, from the classroom to the workforce

    Paving the way for future generations was similarly emphasized in the second session, which highlighted MIT’s Initiative for New Manufacturing (MIT INM). “MIT’s dedication to manufacturing is not only about technology research and education, it’s also about understanding the landscape of manufacturing, domestically and globally,” said INM co-director A. John Hart, the Class of 1922 Professor and head of the Department of Mechanical Engineering. “It’s about getting people — our graduates who are budding enthusiasts of manufacturing — out of campus and starting and scaling new companies,” he said.

    On progressing from lab to market, Dan Oran PhD ’21 shared his career trajectory from technician to PhD student to founding his own company, Irradiant Technologies. “How are companies like Dan’s making the move from the lab to prototype to pilot production to demonstration to commercialization?” asked the next speaker, Elisabeth Reynolds, professor of the practice in urban studies and planning at MIT. “The U.S. capital market has not historically been well organized for that kind of support.” She emphasized the challenge of scaling innovations from prototype to production, and the need for workforce development.

    “Attracting and retaining workforce is a major pain point for manufacturing businesses,” agreed John Liu, principal research scientist in mechanical engineering at MIT. To keep new ideas flowing from the classroom to the factory floor, Liu proposes a new worker type in advanced manufacturing — the technologist — someone who can be a bridge to connect the technicians and the engineers.

    Bridging ecosystems with nanoscience

    Bridging people, disciplines, and markets to effect meaningful change was also emphasized by Benedetto Marelli, mission director for the MIT Climate Project and associate professor of civil and environmental engineering at MIT.

    “If we’re going to have a tangible impact on the trajectory of climate change in the next 10 years, we cannot do it alone,” he said. “We need to take care of ecology, health, mobility, the built environment, food, energy, policies, and trade and industry — and think about these as interconnected topics.”

    Faculty speakers in this session offered a glimpse of nanoscale solutions for climate resiliency. Michael Strano, the Carbon P. Dubbs Professor of Chemical Engineering, presented his group’s work on using nanoparticles to turn waste methane and urea into renewable materials. Desirée Plata, the School of Engineering Distinguished Climate and Energy Professor, spoke about scaling carbon dioxide removal systems. Mechanical engineering professor Kripa Varanasi highlighted, among other projects, his lab’s work on improving agricultural spraying so pesticides adhere to crops, reducing agricultural pollution and cost.

    In all of these presentations, the MIT faculty highlighted the tie between climate and the economy. “The economic systems that we have today are depleting to our resources, inherently polluting,” emphasized Plata. “The goal here is to use sustainable design to transition the global economy.”

    What do people do at MIT.nano?

    This is where MIT.nano comes in, offering shared-access facilities where researchers can design creative solutions to these global challenges. “What do people do at MIT.nano?” asked associate director for Fab.nano Jorg Scholvin ’00, MNG ’01, PhD ’06 in the session on MIT.nano’s ecosystem. With 1,500 individuals and over 20 percent of MIT faculty labs using MIT.nano, it’s a difficult question to answer quickly. However, in a rapid-fire research showcase, students and postdocs gave a response that spanned from 3D transistors and quantum devices to solar solutions and art restoration. Their work reflects the challenges and opportunities shared at the Nano Summit: developing technologies ready to scale, uniting disciplines to tackle complex problems, and gaining hands-on experience that prepares them to contribute to the future of hard tech.

    The researchers’ enthusiasm carried the excitement and curiosity that President Kornbluth mentioned in her opening remarks, and that many faculty emphasized throughout the day. “The solutions to the problems we heard about today may come from inventions that don’t exist yet,” said Strano. “These are some of the most creative people, here at MIT. I think we inspire each other.”

    Collaborative inspiration is not new to the MIT culture. The Nano Summit sessions focused on where we are today, and where we might be going in the future, but also reflected on how we arrived at this moment. Honoring visionaries of nanoscience and nanotechnology, President Emeritus L. Rafael Reif delivered the closing remarks and an exciting announcement — the dedication of the MIT.nano cleanroom complex. Made possible through a gift by Ray Stata SB ’57, SM ’58, this research space, 45,000 square feet of ISO 5, 6, and 7 cleanrooms, will be named the Robert N. Noyce (1953) Cleanroom.

    “Ray Stata was — and is — the driving force behind nanoscale research at MIT,” said Reif. “I want to thank Ray, whose generosity has allowed MIT to honor Robert Noyce in such a fitting way.”

    Ray Stata co-founded Analog Devices in 1965; Noyce co-founded Fairchild Semiconductor in 1957 and later Intel in 1968. Noyce, widely regarded as the “Mayor of Silicon Valley,” became chair of the Semiconductor Industry Association in 1977, and over the next 40 years, semiconductor technology advanced a thousandfold, from micrometers to nanometers.

    “Noyce was a pioneer of the semiconductor industry,” said Stata. “It is due to his leadership and remarkable contributions that electronics technology is where it is today. It is an honor to be able to name the MIT.nano cleanroom after Bob Noyce, creating a permanent tribute to his vision and accomplishments in the heart of the MIT campus.”

    To conclude his remarks and the 2025 Nano Summit, Reif brought the nano journey back to today, highlighting technology giants such as Lisa Su ’90, SM ’91, PhD ’94, for whom Building 12, the home of MIT.nano, is named. “MIT has educated a large number of remarkable leaders in the semiconductor space,” said Reif. “Now, with the Robert Noyce Cleanroom, this amazing MIT community is ready to continue to shape the future with the next generation of nano discoveries — and the next generation of nano leaders, who will become living legends in their own time.”

  • Burning things to make things

    Around 80 percent of global energy production today comes from the combustion of fossil fuels. Combustion, or the process of converting stored chemical energy into thermal energy through burning, is vital for a variety of common activities including electricity generation, transportation, and domestic uses like heating and cooking — but it also yields a host of environmental consequences, contributing to air pollution and greenhouse gas emissions.

    Sili Deng, the Doherty Chair in Ocean Utilization and associate professor of mechanical engineering at MIT, is leading research to drive the transition from the heavy dependence on fossil fuels to renewable energy with storage.

    “I was first introduced to flame synthesis in my junior year in college,” Deng says. “I realized you can actually burn things to make things, [and] that was really fascinating.”

    Video: “Burning Things to Make Things” (Department of Mechanical Engineering)

    Deng says she ultimately picked combustion as a focus of her work because she likes the intellectual challenge the concept offers. “In combustion you have chemistry, and you have fluid mechanics. Each subject is very rich in science. This also has very strong engineering implications and applications.”

    Deng’s research group targets three areas: building up fundamental knowledge on combustion processes and emissions; developing alternative fuels and metal combustion to replace fossil fuels; and synthesizing flame-based materials for catalysis and energy storage, which can bring down the cost of manufacturing battery materials.

    One focus of the team has been on low-cost, low-emission manufacturing of cathode materials for lithium-ion batteries. Lithium-ion batteries play an increasingly critical role in transportation electrification (e.g., batteries for electric vehicles) and grid energy storage for electricity that is generated from renewable energy sources like wind and solar. Deng’s team has developed a technology they call flame-assisted spray pyrolysis, or FASP, which can help reduce the high manufacturing costs associated with cathode materials.

    FASP is based on flame synthesis, a technology that dates back nearly 3,000 years. In ancient China, this was the primary way black ink materials were made. “[People burned] vegetables or woods, such that afterwards they can collect the solidified smoke,” Deng explains. “For our battery applications, we can try to fit in the same formula, but of course with new tweaks.”

    The team is also interested in developing alternative fuels, including looking at the use of metals like aluminum to power rockets. “We’re interested in utilizing aluminum as a fuel for civil applications,” Deng says, because aluminum is abundant in the earth, cheap, and available globally. “What we are trying to do is to understand [aluminum combustion] and be able to tailor its ignition and propagation properties.”

    Among other accolades, Deng is a 2025 recipient of the Hiroshi Tsuji Early Career Researcher Award from the Combustion Institute, an award that recognizes excellence in fundamental or applied combustion science research.

  • Solar energy startup Active Surfaces wins inaugural PITCH.nano competition

    The inaugural PITCH.nano competition, hosted by MIT.nano’s hard technology accelerator START.nano, provided a platform for early-stage startups to present their innovations to MIT and Boston’s hard-tech startup ecosystem.

    The grand prize winner was Active Surfaces, a startup that is generating renewable energy exactly where it is going to be used through lightweight, flexible solar cells. Active Surfaces says its ultralight, peel-and-stick panels will reimagine how we deploy photovoltaics in the built environment.

    Shiv Bhakta MBA ’24, SM ’24, CEO and co-founder, delivered the winning presentation to an audience of entrepreneurs, investors, startup incubators, and industry partners at PITCH.nano on Sept. 30. Active Surfaces received the grand prize of 25,000 nanoBucks — equivalent to $25,000 that can be spent at MIT.nano facilities.

    “Why has MIT.nano chosen to embrace startup activity as much as we do?” asked Vladimir Bulović, MIT.nano faculty director, at the start of PITCH.nano. “We need to make sure that entrepreneurs can be born out of MIT and can take the next technical ideas developed in the lab out into the market, so they can make the next millions of jobs that the world needs.”

    The journey of a hard-tech entrepreneur takes at least 10 years and 100 million dollars, explained Bulović. By linking open tool facilities to startup needs, MIT.nano can make those first few years a little bit easier, bringing more startups to the scale-up stage.

    “Getting VCs [venture capitalists] to invest in hard tech is challenging,” explained Joyce Wu SM ’00, PhD ’07, START.nano program manager. “Through START.nano, we provide discounted access to MIT.nano’s cleanrooms, characterization tools, and laboratories for startups to build their prototypes and attract investment earlier and with reduced spend. Our goal is to support the translation of fundamental research to real-world solutions in hard tech.”

    In addition to discounted access to tools, START.nano helps early-stage companies become part of the MIT and Cambridge innovation network. PITCH.nano, inspired by the MIT 100K Competition, was launched as a new opportunity this year to introduce these hard-tech ventures to the investor and industry community. Twelve startups delivered presentations that were evaluated by a panel of four judges who are, themselves, venture capitalists and startup founders.

    “It is amazing to see the quality, diversity, and ingenuity of this inspiring group of startups,” said judge Brendan Smith PhD ’18, CEO of SiTration, a company that was part of the inaugural START.nano cohort. “Together, these founders are demonstrating the power of fundamental hard-tech innovation to solve the world’s greatest challenges, in a way that is both scalable and profitable.”

    Startups that presented at PITCH.nano spanned a wide range of focus areas. In the fields of climate, energy, and materials, the audience heard from Addis Energy, Copernic Catalysts, Daqus Energy, VioNano Innovations, Active Surfaces, and Metal Fuels; in life sciences, Acorn Genetics, Advanced Silicon Group, and BioSens8; and in quantum and photonics, Qunett, nOhm Devices, and Brightlight Photonics. The common thread for these companies: They are all using MIT.nano to advance their innovations.

    “MIT.nano has been instrumental in compressing our time to market, especially as a company building a novel, physical product,” said Bhakta. “Access to world-class characterization tools — normally out of reach for startups — lets us validate scale-up much faster. The START.nano community accelerates problem-solving, and the nanoBucks award is directly supporting the development of our next prototypes headed to pilot.”

    In addition to the grand prize, a 5,000 nanoBucks audience choice award went to Advanced Silicon Group, a startup that is developing a next-generation biosensor to improve testing in pharma and health tech.

    Now in its fifth year, START.nano has supported 40 companies spanning a diverse set of market areas — life sciences, clean tech, semiconductors, photonics, quantum, materials, and software. Fourteen START.nano companies have graduated from the program, proving that START.nano is indeed succeeding in its mission to help early-stage ventures advance from prototype to manufacturing. “I believe MIT.nano has a fantastic opportunity here,” said judge Davide Marini, PhD ’03, co-founder and CEO of Inkbit, “to create the leading incubator for hard tech entrepreneurs worldwide.”

    START.nano accepts applications on a monthly basis. The program is made possible through the generous support of FEMSA.

  • Concrete “battery” developed at MIT now packs 10 times the power

    Concrete already builds our world, and now it’s one step closer to powering it, too. Made by combining cement, water, ultra-fine carbon black (with nanoscale particles), and electrolytes, electron-conducting carbon concrete (ec3, pronounced “e-c-cubed”) creates a conductive “nanonetwork” inside concrete that could enable everyday structures like walls, sidewalks, and bridges to store and release electrical energy. In other words, the concrete around us could one day double as giant “batteries.”

    As MIT researchers report in a new PNAS paper, optimized electrolytes and manufacturing processes have increased the energy storage capacity of the latest ec3 supercapacitors by an order of magnitude. In 2023, storing enough energy to meet the daily needs of the average home would have required about 45 cubic meters of ec3, roughly the amount of concrete used in a typical basement. Now, with the improved electrolyte, that same task can be achieved with about 5 cubic meters, the volume of a typical basement wall.

    “A key to the sustainability of concrete is the development of ‘multifunctional concrete,’ which integrates functionalities like this energy storage, self-healing, and carbon sequestration. Concrete is already the world’s most-used construction material, so why not take advantage of that scale to create other benefits?” asks Admir Masic, lead author of the new study, MIT Electron-Conducting Carbon-Cement-Based Materials Hub (EC³ Hub) co-director, and associate professor of civil and environmental engineering (CEE) at MIT.

    The improved energy density was made possible by a deeper understanding of how the nanocarbon black network inside ec3 functions and interacts with electrolytes. Using focused ion beams for the sequential removal of thin layers of the ec3 material, followed by high-resolution imaging of each slice with a scanning electron microscope (a technique called FIB-SEM tomography), the team across the EC³ Hub and MIT Concrete Sustainability Hub was able to reconstruct the conductive nanonetwork at the highest resolution yet. This approach allowed the team to discover that the network is essentially a fractal-like “web” that surrounds ec3 pores, which is what allows the electrolyte to infiltrate and for current to flow through the system. “Understanding how these materials ‘assemble’ themselves at the nanoscale is key to achieving these new functionalities,” adds Masic.

    Equipped with their new understanding of the nanonetwork, the team experimented with different electrolytes and their concentrations to see how they impacted energy storage density. As Damian Stefaniuk, first author and EC³ Hub research scientist, highlights, “we found that there is a wide range of electrolytes that could be viable candidates for ec3. This even includes seawater, which could make this a good material for use in coastal and marine applications, perhaps as support structures for offshore wind farms.”

    At the same time, the team streamlined the way they added electrolytes to the mix. Rather than curing ec3 electrodes and then soaking them in electrolyte, they added the electrolyte directly into the mixing water. Since electrolyte penetration was no longer a limitation, the team could cast thicker electrodes that stored more energy.

    The team achieved the greatest performance when they switched to organic electrolytes, especially those that combined quaternary ammonium salts — found in everyday products like disinfectants — with acetonitrile, a clear, conductive liquid often used in industry. A cubic meter of this version of ec3 — about the size of a refrigerator — can store over 2 kilowatt-hours of energy. That’s about enough to power an actual refrigerator for a day.

    While batteries maintain a higher energy density, ec3 can in principle be incorporated directly into a wide range of architectural elements — from slabs and walls to domes and vaults — and last as long as the structure itself.

    “The Ancient Romans made great advances in concrete construction. Massive structures like the Pantheon stand to this day without reinforcement. If we keep up their spirit of combining material science with architectural vision, we could be at the brink of a new architectural revolution with multifunctional concretes like ec3,” proposes Masic.

    Taking inspiration from Roman architecture, the team built a miniature ec3 arch to show how structural form and energy storage can work together. Operating at 9 volts, the arch supported its own weight and additional load while powering an LED light.

    However, something unique happened when the load on the arch increased: the light flickered. This is likely due to the way stress impacts electrical contacts or the distribution of charges. “There may be a kind of self-monitoring capacity here. If we think of an ec3 arch at architectural scale, its output may fluctuate when it’s impacted by a stressor like high winds. We may be able to use this as a signal of when and to what extent a structure is stressed, or monitor its overall health in real time,” envisions Masic.

    The latest developments in ec3 technology bring it a step closer to real-world scalability. It’s already been used to heat sidewalk slabs in Sapporo, Japan, due to its thermally conductive properties, representing a potential alternative to salting. “With these higher energy densities and demonstrated value across a broader application space, we now have a powerful and flexible tool that can help us address a wide range of persistent energy challenges,” explains Stefaniuk. “One of our biggest motivations was to help enable the renewable energy transition. Solar power, for example, has come a long way in terms of efficiency. However, it can only generate power when there’s enough sunlight. So, the question becomes: How do you meet your energy needs at night, or on cloudy days?”

    Franz-Josef Ulm, EC³ Hub co-director and CEE professor, continues the thread: “The answer is that you need a way to store and release energy. This has usually meant a battery, which often relies on scarce or harmful materials. We believe that ec3 is a viable substitute, letting our buildings and infrastructure meet our energy storage needs.” The team is working toward applications like parking spaces and roads that could charge electric vehicles, as well as homes that can operate fully off the grid.

    “What excites us most is that we’ve taken a material as ancient as concrete and shown that it can do something entirely new,” says James Weaver, a co-author on the paper who is an associate professor of design technology and materials science and engineering at Cornell University, as well as a former EC³ Hub researcher. “By combining modern nanoscience with an ancient building block of civilization, we’re opening a door to infrastructure that doesn’t just support our lives, it powers them.”
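    To put the reported energy density in context, here is a minimal sizing sketch in Python. The roughly 2 kilowatt-hours per cubic meter comes from the article; the daily household demand used below is an illustrative assumption, not a figure from the paper.

    ```python
    # Back-of-the-envelope sizing for an ec3 "concrete battery", using the
    # volumetric energy density reported above (~2 kWh per cubic meter for the
    # organic-electrolyte version). The daily household demand is an assumed,
    # illustrative value, not a number from the PNAS paper.

    ENERGY_DENSITY_KWH_PER_M3 = 2.0   # reported: "over 2 kilowatt-hours" per cubic meter
    DAILY_DEMAND_KWH = 10.0           # assumed daily demand for this example

    def required_volume_m3(daily_demand_kwh: float,
                           energy_density_kwh_per_m3: float) -> float:
        """Volume of ec3 needed to store one day of demand."""
        return daily_demand_kwh / energy_density_kwh_per_m3

    if __name__ == "__main__":
        vol = required_volume_m3(DAILY_DEMAND_KWH, ENERGY_DENSITY_KWH_PER_M3)
        print(f"{vol:.1f} cubic meters of ec3 to cover {DAILY_DEMAND_KWH} kWh per day")
        # With these assumed inputs the answer is 5.0 cubic meters, the
        # basement-wall scale described in the article.
    ```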

  • Responding to the climate impact of generative AI

    In part 2 of our two-part series on generative artificial intelligence’s environmental impacts, MIT News explores some of the ways experts are working to reduce the technology’s carbon footprint.

    The energy demands of generative AI are expected to continue increasing dramatically over the next decade.

    For instance, an April 2025 report from the International Energy Agency predicts that the global electricity demand from data centers, which house the computing infrastructure to train and deploy AI models, will more than double by 2030, to around 945 terawatt-hours. While not all operations performed in a data center are AI-related, this total amount is slightly more than the energy consumption of Japan.

    Moreover, an August 2025 analysis from Goldman Sachs Research forecasts that about 60 percent of the increasing electricity demands from data centers will be met by burning fossil fuels, increasing global carbon emissions by about 220 million tons. In comparison, driving a gas-powered car for 5,000 miles produces about 1 ton of carbon dioxide.

    These statistics are staggering, but at the same time, scientists and engineers at MIT and around the world are studying innovations and interventions to mitigate AI’s ballooning carbon footprint, from boosting the efficiency of algorithms to rethinking the design of data centers.

    Considering carbon emissions

    Talk of reducing generative AI’s carbon footprint is typically centered on “operational carbon” — the emissions produced by the powerful processors, known as GPUs, inside a data center. It often ignores “embodied carbon,” the emissions created by building the data center in the first place, says Vijay Gadepally, senior scientist at MIT Lincoln Laboratory, who leads research projects in the Lincoln Laboratory Supercomputing Center.

    Constructing and retrofitting a data center, built from tons of steel and concrete and filled with air conditioning units, computing hardware, and miles of cable, consumes a huge amount of carbon. In fact, the environmental impact of building data centers is one reason companies like Meta and Google are exploring more sustainable building materials. (Cost is another factor.)

    Plus, data centers are enormous buildings — the world’s largest, the China Telecom-Inner Mongolia Information Park, engulfs roughly 10 million square feet — with about 10 to 50 times the energy density of a normal office building, Gadepally adds.

    “The operational side is only part of the story. Some things we are working on to reduce operational emissions may lend themselves to reducing embodied carbon, too, but we need to do more on that front in the future,” he says.

    Reducing operational carbon emissions

    When it comes to reducing operational carbon emissions of AI data centers, there are many parallels with home energy-saving measures. For one, we can simply turn down the lights.

    “Even if you have the worst lightbulbs in your house from an efficiency standpoint, turning them off or dimming them will always use less energy than leaving them running at full blast,” Gadepally says.

    In the same fashion, research from the Supercomputing Center has shown that “turning down” the GPUs in a data center so they consume about three-tenths the energy has minimal impacts on the performance of AI models, while also making the hardware easier to cool.

    Another strategy is to use less energy-intensive computing hardware.

    Demanding generative AI workloads, such as training new reasoning models like GPT-5, usually need many GPUs working simultaneously.
    The Goldman Sachs analysis estimates that a state-of-the-art system could soon have as many as 576 connected GPUs operating at once.

    But engineers can sometimes achieve similar results by reducing the precision of computing hardware, perhaps by switching to less powerful processors that have been tuned to handle a specific AI workload.

    There are also measures that boost the efficiency of training power-hungry deep-learning models before they are deployed.

    Gadepally’s group found that about half the electricity used for training an AI model is spent to get the last 2 or 3 percentage points in accuracy. Stopping the training process early can save a lot of that energy.

    “There might be cases where 70 percent accuracy is good enough for one particular application, like a recommender system for e-commerce,” he says.

    Researchers can also take advantage of efficiency-boosting measures.

    For instance, a postdoc in the Supercomputing Center realized the group might run a thousand simulations during the training process to pick the two or three best AI models for their project.

    By building a tool that allowed them to avoid about 80 percent of those wasted computing cycles, they dramatically reduced the energy demands of training with no reduction in model accuracy, Gadepally says.

    Leveraging efficiency improvements

    Constant innovation in computing hardware, such as denser arrays of transistors on semiconductor chips, is still enabling dramatic improvements in the energy efficiency of AI models.

    Even though energy efficiency improvements have been slowing for most chips since about 2005, the amount of computation that GPUs can do per joule of energy has been improving by 50 to 60 percent each year, says Neil Thompson, director of the FutureTech Research Project at MIT’s Computer Science and Artificial Intelligence Laboratory and a principal investigator at MIT’s Initiative on the Digital Economy.

    “The still-ongoing ‘Moore’s Law’ trend of getting more and more transistors on chip still matters for a lot of these AI systems, since running operations in parallel is still very valuable for improving efficiency,” says Thompson.

    Even more significant, his group’s research indicates that efficiency gains from new model architectures that can solve complex problems faster, consuming less energy to achieve the same or better results, are doubling every eight or nine months.

    Thompson coined the term “negaflop” to describe this effect. The same way a “negawatt” represents electricity saved due to energy-saving measures, a “negaflop” is a computing operation that doesn’t need to be performed due to algorithmic improvements.

    These could be things like “pruning” away unnecessary components of a neural network or employing compression techniques that enable users to do more with less computation.

    “If you need to use a really powerful model today to complete your task, in just a few years, you might be able to use a significantly smaller model to do the same thing, which would carry much less environmental burden. Making these models more efficient is the single-most important thing you can do to reduce the environmental costs of AI,” Thompson says.

    Maximizing energy savings

    While reducing the overall energy use of AI algorithms and computing hardware will cut greenhouse gas emissions, not all energy is the same, Gadepally adds.

    “The amount of carbon emissions in 1 kilowatt hour varies quite significantly, even just during the day, as well as over the month and year,” he says.

    Engineers can take advantage of these variations by leveraging the flexibility of AI workloads and data center operations to maximize emissions reductions. For instance, some generative AI workloads don’t need to be performed in their entirety at the same time.

    Splitting computing operations so some are performed later, when more of the electricity fed into the grid is from renewable sources like solar and wind, can go a long way toward reducing a data center’s carbon footprint, says Deepjyoti Deka, a research scientist in the MIT Energy Initiative.

    Deka and his team are also studying “smarter” data centers where the AI workloads of multiple companies using the same computing equipment are flexibly adjusted to improve energy efficiency.

    “By looking at the system as a whole, our hope is to minimize energy use as well as dependence on fossil fuels, while still maintaining reliability standards for AI companies and users,” Deka says.

    He and others at MITEI are building a flexibility model of a data center that considers the differing energy demands of training a deep-learning model versus deploying that model. Their hope is to uncover the best strategies for scheduling and streamlining computing operations to improve energy efficiency.

    The researchers are also exploring the use of long-duration energy storage units at data centers, which store excess energy for times when it is needed.

    With these systems in place, a data center could use stored energy that was generated by renewable sources during a high-demand period, or avoid the use of diesel backup generators if there are fluctuations in the grid.

    “Long-duration energy storage could be a game-changer here because we can design operations that really change the emission mix of the system to rely more on renewable energy,” Deka says.

    In addition, researchers at MIT and Princeton University are developing a software tool for investment planning in the power sector, called GenX, which could be used to help companies determine the ideal place to locate a data center to minimize environmental impacts and costs.

    Location can have a big impact on reducing a data center’s carbon footprint. For instance, Meta operates a data center in Lulea, a city on the coast of northern Sweden where cooler temperatures reduce the amount of electricity needed to cool computing hardware.

    Thinking farther outside the box (way farther), some governments are even exploring the construction of data centers on the moon where they could potentially be operated with nearly all renewable energy.

    AI-based solutions

    Currently, the expansion of renewable energy generation here on Earth isn’t keeping pace with the rapid growth of AI, which is one major roadblock to reducing its carbon footprint, says Jennifer Turliuk MBA ’25, a short-term lecturer, former Sloan Fellow, and former practice leader of climate and energy AI at the Martin Trust Center for MIT Entrepreneurship.

    The local, state, and federal review processes required for new renewable energy projects can take years.

    Researchers at MIT and elsewhere are exploring the use of AI to speed up the process of connecting new renewable energy systems to the power grid.

    For instance, a generative AI model could streamline interconnection studies that determine how a new project will impact the power grid, a step that often takes years to complete.

    And when it comes to accelerating the development and implementation of clean energy technologies, AI could play a major role.

    “Machine learning is great for tackling complex situations, and the electrical grid is said to be one of the largest and most complex machines in the world,” Turliuk adds.

    For instance, AI could help optimize the prediction of solar and wind energy generation or identify ideal locations for new facilities.

    It could also be used to perform predictive maintenance and fault detection for solar panels or other green energy infrastructure, or to monitor the capacity of transmission wires to maximize efficiency.

    By helping researchers gather and analyze huge amounts of data, AI could also inform targeted policy interventions aimed at getting the biggest “bang for the buck” from areas such as renewable energy, Turliuk says.

    To help policymakers, scientists, and enterprises consider the multifaceted costs and benefits of AI systems, she and her collaborators developed the Net Climate Impact Score.

    The score is a framework that can be used to help determine the net climate impact of AI projects, considering emissions and other environmental costs along with potential environmental benefits in the future.

    At the end of the day, the most effective solutions will likely result from collaborations among companies, regulators, and researchers, with academia leading the way, Turliuk adds.

    “Every day counts. We are on a path where the effects of climate change won’t be fully known until it is too late to do anything about it. This is a once-in-a-lifetime opportunity to innovate and make AI systems less carbon-intense,” she says.
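    As a rough illustration of the workload-shifting idea Deka describes, the sketch below picks the cleanest contiguous window of hours for a deferrable AI job. The hourly carbon-intensity forecast and the four-hour job length are hypothetical placeholders; a real scheduler would pull live grid data and also respect deadlines and hardware constraints.

    ```python
    # A minimal sketch of carbon-aware scheduling: defer a flexible AI workload
    # to the contiguous window of hours with the lowest average grid carbon
    # intensity. The hourly intensity values are made-up placeholders; a real
    # scheduler would fetch them from a grid-data service.

    from typing import List, Tuple

    def best_start_hour(intensity_g_per_kwh: List[float], job_hours: int) -> Tuple[int, float]:
        """Return (start_hour, avg_intensity) of the cleanest window of length job_hours."""
        best_start, best_avg = 0, float("inf")
        for start in range(len(intensity_g_per_kwh) - job_hours + 1):
            window = intensity_g_per_kwh[start:start + job_hours]
            avg = sum(window) / job_hours
            if avg < best_avg:
                best_start, best_avg = start, avg
        return best_start, best_avg

    if __name__ == "__main__":
        # Hypothetical 24-hour forecast of grid carbon intensity (gCO2 per kWh),
        # dipping midday when solar output is assumed to be high.
        forecast = [520, 510, 500, 495, 490, 480, 450, 400,
                    340, 280, 230, 200, 190, 195, 220, 270,
                    330, 400, 460, 500, 520, 530, 535, 530]
        start, avg = best_start_hour(forecast, job_hours=4)
        print(f"Run the 4-hour job starting at hour {start} (avg {avg:.0f} gCO2/kWh)")
    ```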

  • Jessika Trancik named director of the Sociotechnical Systems Research Center

    Jessika Trancik, a professor in MIT’s Institute for Data, Systems, and Society, has been named the new director of the Sociotechnical Systems Research Center (SSRC), effective July 1. The SSRC convenes and supports researchers focused on problems and solutions at the intersection of technology and its societal impacts.

    Trancik conducts research on technology innovation and energy systems. At the Trancik Lab, she and her team develop methods drawing on engineering knowledge, data science, and policy analysis. Their work examines the pace and drivers of technological change, helping identify where innovation is occurring most rapidly, how emerging technologies stack up against existing systems, and which performance thresholds matter most for real-world impact. Her models have been used to inform government innovation policy and have been applied across a wide range of industries.

    “Professor Trancik’s deep expertise in the societal implications of technology, and her commitment to developing impactful solutions across industries, make her an excellent fit to lead SSRC,” says Maria C. Yang, interim dean of engineering and William E. Leonhard (1940) Professor of Mechanical Engineering.

    Much of Trancik’s research focuses on the domain of energy systems and establishing methods for energy technology evaluation, including their costs, performance, and environmental impacts. She covers a wide range of energy services — including electricity, transportation, heating, and industrial processes. Her research has applications in solar and wind energy, energy storage, low-carbon fuels, electric vehicles, and nuclear fission. Trancik is also known for her research on extreme events in renewable energy availability.

    A prolific researcher, Trancik has helped measure progress and inform the development of solar photovoltaics, batteries, electric vehicle charging infrastructure, and other low-carbon technologies — and anticipate future trends. One of her widely cited contributions is quantifying learning rates and identifying where targeted investments can most effectively accelerate innovation. These tools have been used by U.S. federal agencies, international organizations, and the private sector to shape energy R&D portfolios, climate policy, and infrastructure planning.

    Trancik is committed to engaging and informing the public on energy consumption. She and her team developed the app carboncounter.com, which helps users choose cars with low costs and low environmental impacts.

    As an educator, Trancik teaches courses for students across MIT’s five schools and the MIT Schwarzman College of Computing.

    “The question guiding my teaching and research is how do we solve big societal challenges with technology, and how can we be more deliberate in developing and supporting technologies to get us there?” Trancik said in an article about course IDS.521/IDS.065 (Energy Systems for Climate Change Mitigation).

    Trancik received her undergraduate degree in materials science and engineering from Cornell University. As a Rhodes Scholar, she completed her PhD in materials science at the University of Oxford. She subsequently worked for the United Nations in Geneva, Switzerland, and the Earth Institute at Columbia University. After serving as an Omidyar Research Fellow at the Santa Fe Institute, she joined MIT in 2010 as a faculty member.

    Trancik succeeds Fotini Christia, the Ford International Professor of Social Sciences in the Department of Political Science and director of IDSS, who previously served as director of SSRC.
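    The learning rates mentioned above are conventionally expressed with an experience curve, in which unit cost falls by a fixed fraction with each doubling of cumulative production. The sketch below shows that relationship with illustrative numbers; the 20 percent learning rate and normalized costs are placeholders, not values from Trancik’s studies.

    ```python
    # Experience-curve ("learning rate") sketch of the kind referenced above:
    # unit cost falls by a fixed fraction each time cumulative production doubles,
    # C(x) = C0 * (x / x0) ** (-b), with learning rate LR = 1 - 2 ** (-b).
    # The 20 percent learning rate and cost figures are illustrative placeholders.

    import math

    def unit_cost(cumulative: float, c0: float, x0: float, learning_rate: float) -> float:
        """Cost per unit at `cumulative` production, given cost c0 at production x0."""
        b = -math.log2(1 - learning_rate)        # experience exponent
        return c0 * (cumulative / x0) ** (-b)

    if __name__ == "__main__":
        c0, x0, lr = 1.0, 1.0, 0.20              # normalized starting point, 20% learning rate
        for doublings in range(5):
            x = x0 * 2 ** doublings
            print(f"{doublings} doublings -> cost {unit_cost(x, c0, x0, lr):.3f} of original")
        # Each doubling multiplies cost by 0.8 (e.g., 4 doublings -> ~0.41 of the original).
    ```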

  • Using liquid air for grid-scale energy storage

    As the world moves to reduce carbon emissions, solar and wind power will play an increasing role on electricity grids. But those renewable sources only generate electricity when it’s sunny or windy. So to ensure a reliable power grid — one that can deliver electricity 24/7 — it’s crucial to have a means of storing electricity when supplies are abundant and delivering it later, when they’re not. And sometimes large amounts of electricity will need to be stored not just for hours, but for days, or even longer.

    Some methods of achieving “long-duration energy storage” are promising. For example, with pumped hydro energy storage, water is pumped from one lake to another, higher lake when there’s extra electricity and released back down through power-generating turbines when more electricity is needed. But that approach is limited by geography, and most potential sites in the United States have already been used. Lithium-ion batteries could provide grid-scale storage, but only for about four hours. Longer than that and battery systems get prohibitively expensive.

    A team of researchers from MIT and the Norwegian University of Science and Technology (NTNU) has been investigating a less-familiar option based on an unlikely-sounding concept: liquid air, or air that is drawn in from the surroundings, cleaned and dried, and then cooled to the point that it liquefies. “Liquid air energy storage” (LAES) systems have been built, so the technology is technically feasible. Moreover, LAES systems are totally clean and can be sited nearly anywhere, storing vast amounts of electricity for days or longer and delivering it when it’s needed. But there haven’t been conclusive studies of its economic viability. Would the income over time warrant the initial investment and ongoing costs? With funding from the MIT Energy Initiative’s Future Energy Systems Center, the researchers developed a model that takes detailed information on LAES systems and calculates when and where those systems would be economically viable, assuming future scenarios in line with selected decarbonization targets as well as other conditions that may prevail on future energy grids.

    They found that under some of the scenarios they modeled, LAES could be economically viable in certain locations. Sensitivity analyses showed that policies providing a subsidy on capital expenses could make LAES systems economically viable in many locations. Further calculations showed that the cost of storing a given amount of electricity with LAES would be lower than with more familiar systems such as pumped hydro and lithium-ion batteries. They conclude that LAES holds promise as a means of providing critically needed long-duration storage when future power grids are decarbonized and dominated by intermittent renewable sources of electricity.

    The researchers — Shaylin A. Cetegen, a PhD candidate in the MIT Department of Chemical Engineering (ChemE); Professor Emeritus Truls Gundersen of the NTNU Department of Energy and Process Engineering; and MIT Professor Emeritus Paul I. Barton of ChemE — describe their model and their findings in a new paper published in the journal Energy.

    The LAES technology and its benefits

    LAES systems consist of three steps: charging, storing, and discharging. When supply on the grid exceeds demand and prices are low, the LAES system is charged. Air is then drawn in and liquefied. A large amount of electricity is consumed to cool and liquefy the air in the LAES process.
    The liquid air is then sent to highly insulated storage tanks, where it’s held at a very low temperature and atmospheric pressure. When the power grid needs added electricity to meet demand, the liquid air is first pumped to a higher pressure and then heated, and it turns back into a gas. This high-pressure, high-temperature, vapor-phase air expands in a turbine that generates electricity to be sent back to the grid.

    According to Cetegen, a primary advantage of LAES is that it’s clean. “There are no contaminants involved,” she says. “It takes in and releases only ambient air and electricity, so it’s as clean as the electricity that’s used to run it.” In addition, a LAES system can be built largely from commercially available components and does not rely on expensive or rare materials. And the system can be sited almost anywhere, including near other industrial processes that produce waste heat or cold that can be used by the LAES system to increase its energy efficiency.

    Economic viability

    In considering the potential role of LAES on future power grids, the first question is: Will LAES systems be attractive to investors? Answering that question requires calculating the technology’s net present value (NPV), which represents the sum of all discounted cash flows — including revenues, capital expenditures, operating costs, and other financial factors — over the project’s lifetime. (The study assumed a cash flow discount rate of 7 percent.)

    To calculate the NPV, the researchers needed to determine how LAES systems will perform in future energy markets. In those markets, various sources of electricity are brought online to meet the current demand, typically following a process called “economic dispatch”: the lowest-cost source that’s available is always deployed next. Determining the NPV of liquid air storage therefore requires predicting how that technology will fare in future markets competing with other sources of electricity when demand exceeds supply — and also accounting for prices when supply exceeds demand, so excess electricity is available to recharge the LAES systems.

    For their study, the MIT and NTNU researchers designed a model that starts with a description of an LAES system, including details such as the sizes of the units where the air is liquefied and the power is recovered, and also capital expenses based on estimates reported in the literature. The model then draws on state-of-the-art pricing data that’s released every year by the National Renewable Energy Laboratory (NREL) and is widely used by energy modelers worldwide. The NREL dataset forecasts prices, construction and retirement of specific types of electricity generation and storage facilities, and more, assuming eight decarbonization scenarios for 18 regions of the United States out to 2050.

    The new model then tracks buying and selling in energy markets for every hour of every day in a year, repeating the same schedule for five-year intervals. Based on the NREL dataset and details of the LAES system — plus constraints such as the system’s physical storage capacity and how often it can switch between charging and discharging — the model calculates how much money LAES operators would make selling power to the grid when it’s needed and how much they would spend buying electricity when it’s available to recharge their LAES system. In line with the NREL dataset, the model generates results for 18 U.S. regions and eight decarbonization scenarios, including 100 percent decarbonization by 2035 and 95 percent decarbonization by 2050, and other assumptions about future energy grids, including high-demand growth plus high and low costs for renewable energy and for natural gas.

    Cetegen describes some of their results: “Assuming a 100-megawatt (MW) system — a standard sort of size — we saw economic viability pop up under the decarbonization scenario calling for 100 percent decarbonization by 2035.” So, positive NPVs (indicating economic viability) occurred only under the most aggressive — therefore the least realistic — scenario, and they occurred in only a few southern states, including Texas and Florida, likely because of how those energy markets are structured and operate.

    The researchers also tested the sensitivity of NPVs to different storage capacities, that is, how long the system could continuously deliver power to the grid. They calculated the NPVs of a 100 MW system that could provide electricity supply for one day, one week, and one month. “That analysis showed that under aggressive decarbonization, weekly storage is more economically viable than monthly storage, because [in the latter case] we’re paying for more storage capacity than we need,” explains Cetegen.

    Improving the NPV of the LAES system

    The researchers next analyzed two possible ways to improve the NPV of liquid air storage: by increasing the system’s energy efficiency and by providing financial incentives. Their analyses showed that increasing the energy efficiency, even up to the theoretical limit of the process, would not change the economic viability of LAES under the most realistic decarbonization scenarios. On the other hand, a major improvement resulted when they assumed policies providing subsidies on capital expenditures on new installations. Indeed, assuming subsidies of between 40 percent and 60 percent made the NPVs for a 100 MW system become positive under all the realistic scenarios.

    Thus, their analysis showed that financial incentives could be far more effective than technical improvements in making LAES economically viable. While engineers may find that outcome disappointing, Cetegen notes that from a broader perspective, it’s good news. “You could spend your whole life trying to optimize the efficiency of this process, and it wouldn’t translate to securing the investment needed to scale the technology,” she says. “Policies can take a long time to implement as well. But theoretically you could do it overnight. So if storage is needed [on a future decarbonized grid], then this is one way to encourage adoption of LAES right away.”

    Cost comparison with other energy storage technologies

    Calculating the economic viability of a storage technology is highly dependent on the assumptions used. As a result, a different measure — the “levelized cost of storage” (LCOS) — is typically used to compare the costs of different storage technologies. In simple terms, the LCOS is the cost of storing each unit of energy over the lifetime of a project, not accounting for any income that results.

    On that measure, the LAES technology excels. The researchers’ model yielded an LCOS for liquid air storage of about $60 per megawatt-hour, regardless of the decarbonization scenario. That LCOS is about a third that of lithium-ion battery storage and half that of pumped hydro. Cetegen cites another interesting finding: the LCOS of their assumed LAES system varied depending on where it’s being used.
    The standard practice of reporting a single LCOS for a given energy storage technology may not provide the full picture.

    Cetegen has adapted the model and is now calculating the NPV and LCOS for energy storage using lithium-ion batteries. But she’s already encouraged by the LCOS of liquid air storage. “While LAES systems may not be economically viable from an investment perspective today, that doesn’t mean they won’t be implemented in the future,” she concludes. “With limited options for grid-scale storage expansion and the growing need for storage technologies to ensure energy security, if we can’t find economically viable alternatives, we’ll likely have to turn to least-cost solutions to meet storage needs. This is why the story of liquid air storage is far from over. We believe our findings justify the continued exploration of LAES as a key energy storage solution for the future.”
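    The viability question above comes down to a discounted cash flow calculation. The sketch below shows the basic NPV arithmetic with the study’s 7 percent discount rate; the capital cost, annual net revenue, and lifetime are placeholder assumptions, whereas the actual MIT/NTNU model builds revenues hour by hour from NREL price scenarios.

    ```python
    # A stripped-down version of the economic-viability calculation described
    # above: discount a project's annual net cash flows at 7 percent and sum
    # them into a net present value (NPV). The capital cost, revenue, and
    # lifetime figures are illustrative assumptions only.

    def npv(cash_flows, discount_rate=0.07):
        """NPV of cash flows indexed by year, with the year-0 flow undiscounted."""
        return sum(cf / (1 + discount_rate) ** year for year, cf in enumerate(cash_flows))

    if __name__ == "__main__":
        capex = -60e6                  # assumed up-front capital cost, $
        annual_net_revenue = 6e6       # assumed net operating revenue, $/year
        lifetime_years = 30
        flows = [capex] + [annual_net_revenue] * lifetime_years
        print(f"NPV at 7% discount rate: ${npv(flows) / 1e6:.1f} million")
        # A positive NPV indicates economic viability; a capital subsidy raises
        # NPV by directly offsetting part of the year-0 outlay.
    ```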

  • Decarbonizing heavy industry with thermal batteries

    Whether you’re manufacturing cement, steel, chemicals, or paper, you need a large amount of heat. Almost without exception, manufacturers around the world create that heat by burning fossil fuels.

    In an effort to clean up the industrial sector, some startups are changing manufacturing processes for specific materials. Some are even changing the materials themselves. Daniel Stack SM ’17, PhD ’21 is trying to address industrial emissions across the board by replacing the heat source.

    Since coming to MIT in 2014, Stack has worked to develop thermal batteries that use electricity to heat up a conductive version of ceramic firebricks, which have been used as heat stores and insulators for centuries. In 2021, Stack co-founded Electrified Thermal Solutions, which has since demonstrated that its firebricks can store heat efficiently for hours and discharge it by heating air or gas up to 3,272 degrees Fahrenheit — hot enough to power the most demanding industrial applications.

    Achieving temperatures north of 3,000 F represents a breakthrough for the electric heating industry, as it enables some of the world’s hardest-to-decarbonize sectors to utilize renewable energy for the first time. It also unlocks a new, low-cost model for using electricity when it’s at its cheapest and cleanest.

    “We have a global perspective at Electrified Thermal, but in the U.S. over the last five years, we’ve seen an incredible opportunity emerge in energy prices that favors flexible offtake of electricity,” Stack says. “Throughout the middle of the country, especially in the wind belt, electricity prices in many places are negative for more than 20 percent of the year, and the trend toward decreasing electricity pricing during off-peak hours is a nationwide phenomenon. Technologies like our Joule Hive Thermal Battery will enable us to access this inexpensive, clean electricity and compete head to head with fossil fuels on price for industrial heating needs, without even factoring in the positive climate impact.”

    A new approach to an old technology

    Stack’s research plans changed quickly when he joined MIT’s Department of Nuclear Science and Engineering as a master’s student in 2014.

    “I went to MIT excited to work on the next generation of nuclear reactors, but what I focused on almost from day one was how to heat up bricks,” Stack says. “It wasn’t what I expected, but when I talked to my advisor, [Principal Research Scientist] Charles Forsberg, about energy storage and why it was valuable to not just nuclear power but the entire energy transition, I realized there was no project I would rather work on.”

    Firebricks are ubiquitous, inexpensive clay bricks that have been used for millennia in fireplaces and ovens. In 2017, Forsberg and Stack co-authored a paper showing firebricks’ potential to store heat from renewable resources, but the system still used electric resistance heaters — like the metal coils in toasters and space heaters — which limited its temperature output.

    For his doctoral work, Stack worked with Forsberg to make firebricks that were electrically conductive, replacing the resistance heaters so the bricks produced the heat directly.

    “Electric heaters are your biggest limiter: They burn out too fast, they break down, they don’t get hot enough,” Stack explains.
    “The idea was to skip the heaters because firebricks themselves are really cheap, abundant materials that can go to flame-like temperatures and hang out there for days.”

    Forsberg and Stack were able to create conductive firebricks by tweaking the chemical composition of traditional firebricks. Electrified Thermal’s bricks are 98 percent similar to existing firebricks and are produced using the same processes, allowing existing manufacturers to make them inexpensively.

    Toward the end of his PhD program, Stack realized the invention could be commercialized. He started taking classes at the MIT Sloan School of Management and spending time at the Martin Trust Center for MIT Entrepreneurship. He also entered the StartMIT program and the I-Corps program, and received support from the U.S. Department of Energy and MIT’s Venture Mentoring Service (VMS).

    “Through the Boston ecosystem, the MIT ecosystem, and with help from the Department of Energy, we were able to launch this from the lab at MIT,” Stack says. “What we spun out was an electrically conductive firebrick, or what we refer to as an e-Brick.”

    Electrified Thermal contains its firebrick arrays in insulated, off-the-shelf metal boxes. Although the system is highly configurable depending on the end use, the company’s standard system can collect and release energy at about 5 megawatts and store about 25 megawatt-hours.

    The company has demonstrated its system’s ability to produce high temperatures and has been cycling its system at its headquarters in Medford, Massachusetts. That work has collectively earned Electrified Thermal $40 million from various Department of Energy offices to scale the technology and work with manufacturers.

    “Compared to other electric heating, we can run hotter and last longer than any other solution on the market,” Stack says. “That means replacing fossil fuels at a lot of industrial sites that couldn’t otherwise decarbonize.”

    Scaling to solve a global problem

    Electrified Thermal is engaging with hundreds of industrial companies, including manufacturers of cement, steel, glass, basic and specialty chemicals, food and beverage, and pulp and paper.

    “The industrial heating challenge affects everyone under the sun,” Stack says. “They all have fundamentally the same problem, which is getting their heat in a way that is affordable and zero carbon for the energy transition.”

    The company is currently building a megawatt-scale commercial version of its system, which it expects to be operational in the next seven months.

    “Next year will be a huge proof point to the industry,” Stack says. “We’ll be using the commercial system to showcase a variety of operating points that customers need to see, and we’re hoping to be running systems on customer sites by the end of the year. It’ll be a huge achievement and a first for electric heating because no other solution in the market can put out the kind of temperatures that we can put out.”

    By working with manufacturers to produce its firebricks and casings, Electrified Thermal hopes to be able to deploy its systems rapidly and at low cost across a massive industry.

    “From the very beginning, we engineered these e-bricks to be rapidly scalable and rapidly producible within existing supply chains and manufacturing processes,” Stack says. “If you want to decarbonize heavy industry, there will be no cheaper way than turning electricity into heat from zero-carbon electricity assets. We’re seeking to be the premier technology that unlocks those capabilities, with double-digit percentages of global energy flowing through our system as we accomplish the energy transition.”
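    For a sense of scale, here is the simple arithmetic behind the standard system described above, using the 5-megawatt, 25-megawatt-hour, and 3,272-degree figures from the article; the calculation itself is just illustrative bookkeeping.

    ```python
    # Quick arithmetic on the standard Electrified Thermal system described
    # above: roughly 5 MW of discharge power, about 25 MWh of storage, and
    # discharge air up to 3,272 degrees Fahrenheit (figures from the article).

    POWER_MW = 5.0          # discharge power of the standard system
    CAPACITY_MWH = 25.0     # storage capacity of the standard system
    MAX_TEMP_F = 3272.0     # peak discharge temperature

    def hours_at_full_power(capacity_mwh: float, power_mw: float) -> float:
        return capacity_mwh / power_mw

    def fahrenheit_to_celsius(temp_f: float) -> float:
        return (temp_f - 32.0) * 5.0 / 9.0

    if __name__ == "__main__":
        print(f"Full-power discharge duration: {hours_at_full_power(CAPACITY_MWH, POWER_MW):.0f} hours")
        print(f"Peak temperature: {fahrenheit_to_celsius(MAX_TEMP_F):.0f} degrees Celsius")
        # -> 5 hours at 5 MW, and 3,272 F corresponds to 1,800 C
    ```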