More stories

  • Q&A: The climate impact of generative AI

    Vijay Gadepally, a senior staff member at MIT Lincoln Laboratory, leads a number of projects at the Lincoln Laboratory Supercomputing Center (LLSC) to make computing platforms, and the artificial intelligence systems that run on them, more efficient. Here, Gadepally discusses the increasing use of generative AI in everyday tools, its hidden environmental impact, and some of the ways that Lincoln Laboratory and the greater AI community can reduce emissions for a greener future.

    Q: What trends are you seeing in terms of how generative AI is being used in computing?

    A: Generative AI uses machine learning (ML) to create new content, like images and text, based on data that is inputted into the ML system. At the LLSC we design and build some of the largest academic computing platforms in the world, and over the past few years we’ve seen an explosion in the number of projects that need access to high-performance computing for generative AI. We’re also seeing how generative AI is changing all sorts of fields and domains — for example, ChatGPT is already influencing the classroom and the workplace faster than regulations can seem to keep up.

    We can imagine all sorts of uses for generative AI within the next decade or so, like powering highly capable virtual assistants, developing new drugs and materials, and even improving our understanding of basic science. We can’t predict everything that generative AI will be used for, but I can certainly say that with more and more complex algorithms, their compute, energy, and climate impact will continue to grow very quickly.

    Q: What strategies is the LLSC using to mitigate this climate impact?

    A: We’re always looking for ways to make computing more efficient, as doing so helps our data center make the most of its resources and allows our scientific colleagues to push their fields forward in as efficient a manner as possible.

    As one example, we’ve been reducing the amount of power our hardware consumes by making simple changes, similar to dimming or turning off lights when you leave a room. In one experiment, we reduced the energy consumption of a group of graphics processing units by 20 percent to 30 percent, with minimal impact on their performance, by enforcing a power cap. This technique also lowered the hardware operating temperatures, making the GPUs easier to cool and longer lasting.

    Another strategy is changing our behavior to be more climate-aware. At home, some of us might choose to use renewable energy sources or intelligent scheduling. We are using similar techniques at the LLSC — such as training AI models when temperatures are cooler, or when local grid energy demand is low.

    We also realized that a lot of the energy spent on computing is often wasted, like how a water leak increases your bill but without any benefits to your home. We developed some new techniques that allow us to monitor computing workloads as they are running and then terminate those that are unlikely to yield good results. Surprisingly, in a number of cases we found that the majority of computations could be terminated early without compromising the end result.

    Q: What’s an example of a project you’ve done that reduces the energy output of a generative AI program?

    A: We recently built a climate-aware computer vision tool. Computer vision is a domain that’s focused on applying AI to images; so, differentiating between cats and dogs in an image, correctly labeling objects within an image, or looking for components of interest within an image.

    In our tool, we included real-time carbon telemetry, which produces information about how much carbon is being emitted by our local grid as a model is running. Depending on this information, our system will automatically switch to a more energy-efficient version of the model, which typically has fewer parameters, in times of high carbon intensity, or a much higher-fidelity version of the model in times of low carbon intensity.

    By doing this, we saw a nearly 80 percent reduction in carbon emissions over a one- to two-day period. We recently extended this idea to other generative AI tasks such as text summarization and found the same results. Interestingly, the performance sometimes improved after using our technique!

    Q: What can we do as consumers of generative AI to help mitigate its climate impact?

    A: As consumers, we can ask our AI providers to offer greater transparency. For example, on Google Flights, I can see a variety of options that indicate a specific flight’s carbon footprint. We should be getting similar kinds of measurements from generative AI tools so that we can make a conscious decision on which product or platform to use based on our priorities.

    We can also make an effort to be more educated on generative AI emissions in general. Many of us are familiar with vehicle emissions, and it can help to talk about generative AI emissions in comparative terms. People may be surprised to know, for example, that one image-generation task is roughly equivalent to driving four miles in a gas car, or that it takes the same amount of energy to charge an electric car as it does to generate about 1,500 text summarizations. There are many cases where customers would be happy to make a trade-off if they knew the trade-off’s impact.

    Q: What do you see for the future?

    A: Mitigating the climate impact of generative AI is one of those problems that people all over the world are working on, and with a similar goal. We’re doing a lot of work here at Lincoln Laboratory, but it’s only scratching the surface. In the long term, data centers, AI developers, and energy grids will need to work together to provide “energy audits” to uncover other unique ways that we can improve computing efficiencies. We need more partnerships and more collaboration in order to forge ahead.

    If you’re interested in learning more, or collaborating with Lincoln Laboratory on these efforts, please contact Vijay Gadepally.
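    The carbon-aware switching Gadepally describes can be sketched in a few lines. This is a hypothetical illustration, not the LLSC’s actual tool: the threshold, model names, and carbon-intensity readings below are invented for the example.

```python
# Hypothetical sketch of carbon-aware model selection, as described in the
# Q&A. The threshold, model names, and telemetry values are illustrative
# assumptions, not the LLSC's actual implementation.

HIGH_CARBON_THRESHOLD = 400  # grams CO2 per kWh; an assumed cutoff

def select_model(carbon_intensity_g_per_kwh: float) -> str:
    """Pick a model variant based on the grid's current carbon intensity.

    Under high carbon intensity, fall back to a smaller, more efficient
    model; under low intensity, use the higher-fidelity version.
    """
    if carbon_intensity_g_per_kwh >= HIGH_CARBON_THRESHOLD:
        return "efficient-small"   # fewer parameters, less energy per inference
    return "high-fidelity-large"   # better accuracy when emissions are cheap

# Example: hourly readings from a (hypothetical) grid carbon telemetry feed
readings = [120, 380, 450, 510, 300, 90]
choices = [select_model(r) for r in readings]
```

    In a real deployment the readings would come from a live grid-carbon feed, and the switch would happen between inference requests rather than on a fixed schedule.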

    Video: MIT Lincoln Laboratory

  • Designing tiny filters to solve big problems

    For many industrial processes, the typical way to separate gases, liquids, or ions is with heat, using slight differences in boiling points to purify mixtures. These thermal processes account for roughly 10 percent of the energy use in the United States.

    MIT chemical engineer Zachary Smith wants to reduce costs and carbon footprints by replacing these energy-intensive processes with highly efficient filters that can separate gases, liquids, and ions at room temperature.

    In his lab at MIT, Smith is designing membranes with tiny pores that can filter tiny molecules based on their size. These membranes could be useful for purifying biogas, capturing carbon dioxide from power plant emissions, or generating hydrogen fuel.

    “We’re taking materials that have unique capabilities for separating molecules and ions with precision, and applying them to applications where the current processes are not efficient, and where there’s an enormous carbon footprint,” says Smith, an associate professor of chemical engineering.

    Smith and several former students have founded a company called Osmoses that is working toward developing these materials for large-scale use in gas purification. Removing the need for high temperatures in these widespread industrial processes could have a significant impact on energy consumption, potentially reducing it by as much as 90 percent.

    “I would love to see a world where we could eliminate thermal separations, and where heat is no longer a problem in creating the things that we need and producing the energy that we need,” Smith says.

    Hooked on research

    As a high school student, Smith was drawn to engineering but didn’t have many engineering role models. Both of his parents were physicians, and they always encouraged him to work hard in school.

    “I grew up without knowing many engineers, and certainly no chemical engineers. But I knew that I really liked seeing how the world worked. I was always fascinated by chemistry and seeing how mathematics helped to explain this area of science,” recalls Smith, who grew up near Harrisburg, Pennsylvania. “Chemical engineering seemed to have all those things built into it, but I really had no idea what it was.”

    At Penn State University, Smith worked with a professor named Henry “Hank” Foley on a research project designing carbon-based materials to create a “molecular sieve” for gas separation. Through a time-consuming and iterative layering process, he created a sieve that could purify oxygen and nitrogen from air.

    “I kept adding more and more coatings of a special material that I could subsequently carbonize, and eventually I started to get selectivity. In the end, I had made a membrane that could sieve molecules that only differed by 0.18 angstrom in size,” he says. “I got hooked on research at that point, and that’s what led me to do more things in the area of membranes.”

    After graduating from college in 2008, Smith pursued graduate studies in chemical engineering at the University of Texas at Austin. There, he continued developing membranes for gas separation, this time using a different class of materials — polymers. By controlling polymer structure, he was able to create films with pores that filter out specific molecules, such as carbon dioxide or other gases.

    “Polymers are a type of material that you can actually form into big devices that can integrate into world-class chemical plants. So, it was exciting to see that there was a scalable class of materials that could have a real impact on addressing questions related to CO2 and other energy-efficient separations,” Smith says.

    After finishing his PhD, he decided he wanted to learn more chemistry, which led him to a postdoctoral fellowship at the University of California at Berkeley.

    “I wanted to learn how to make my own molecules and materials. I wanted to run my own reactions and do it in a more systematic way,” he says.

    At Berkeley, he learned how to make compounds called metal-organic frameworks (MOFs) — cage-like molecules that have potential applications in gas separation and many other fields. He also realized that while he enjoyed chemistry, he was definitely a chemical engineer at heart.

    “I learned a ton when I was there, but I also learned a lot about myself,” he says. “As much as I love chemistry, work with chemists, and advise chemists in my own group, I’m definitely a chemical engineer, really focused on the process and application.”

    Solving global problems

    While interviewing for faculty jobs, Smith found himself drawn to MIT because of the mindset of the people he met.

    “I began to realize not only how talented the faculty and the students were, but the way they thought was very different than other places I had been,” he says. “It wasn’t just about doing something that would move their field a little bit forward. They were actually creating new fields. There was something inspirational about the type of people that ended up at MIT who wanted to solve global problems.”

    In his lab at MIT, Smith is now tackling some of those global problems, including water purification, critical element recovery, renewable energy, battery development, and carbon sequestration.

    In a close collaboration with Yan Xia, a professor at Stanford University, Smith recently developed gas separation membranes that incorporate a novel type of polymer known as “ladder polymers,” which are currently being scaled for deployment at his startup. Historically, using polymers for gas separation has been limited by a tradeoff between permeability and selectivity — that is, membranes that permit a faster flow of gases through the membrane tend to be less selective, allowing impurities to get through.

    Using ladder polymers, which consist of double strands connected by rung-like bonds, the researchers were able to create gas separation membranes that are both highly permeable and very selective. The boost in permeability — a 100- to 1,000-fold improvement over earlier materials — could enable membranes to replace some of the high-energy techniques now used to separate gases, Smith says.

    “This allows you to envision large-scale industrial problems solved with miniaturized devices,” he says. “If you can really shrink down the system, then the solutions we’re developing in the lab could easily be applied to big industries like the chemicals industry.”

    These developments and others have been part of a number of advancements made by collaborators, students, postdocs, and researchers who are part of Smith’s team.

    “I have a great research team of talented and hard-working students and postdocs, and I get to teach on topics that have been instrumental in my own professional career,” Smith says. “MIT has been a playground to explore and learn new things. I am excited for what my team will discover next, and grateful for an opportunity to help solve many important global problems.”
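    The permeability-selectivity trade-off mentioned above can be made concrete with a small sketch. Ideal selectivity is simply the ratio of two gases’ permeabilities through the membrane; the numbers below are invented placeholders, not measurements from Smith’s ladder polymers.

```python
# Illustrative sketch of the permeability-selectivity trade-off for gas
# separation membranes. Permeability values are made-up placeholders
# (nominally in Barrer), not data from the ladder polymers discussed above.

def ideal_selectivity(perm_a: float, perm_b: float) -> float:
    """Ideal selectivity for gas A over gas B: the ratio of their
    pure-gas permeabilities through the membrane."""
    return perm_a / perm_b

# A conventional polymer: quite permeable but not very selective
conventional = ideal_selectivity(perm_a=100.0, perm_b=10.0)   # -> 10.0

# The historical trade-off: tightening the pores to raise selectivity
# usually cuts permeability (again, hypothetical values)
tighter = ideal_selectivity(perm_a=10.0, perm_b=0.25)         # -> 40.0
```

    The advance described above is precisely escaping this trade-off: raising permeability by orders of magnitude without giving up selectivity.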

  • Minimizing the carbon footprint of bridges and other structures

    Awed as a young child by the majesty of the Golden Gate Bridge in San Francisco, civil engineer and MIT Morningside Academy for Design (MAD) Fellow Zane Schemmer has retained his fascination with bridges: what they look like, why they work, and how they’re designed and built.

    He weighed the choice between architecture and engineering when heading off to college, but, motivated by the why and how of structural engineering, selected the latter. Now he incorporates design as an iterative process in the writing of algorithms that perfectly balance the forces involved in discrete portions of a structure to create an overall design that optimizes function, minimizes carbon footprint, and still produces a manufacturable result.

    While this may sound like an obvious goal in structural design, it’s not. It’s new. It’s a more holistic way of looking at the design process that can optimize even down to the materials, angles, and number of elements in the nodes or joints that connect the larger components of a building, bridge, or tower.

    According to Schemmer, there hasn’t been much progress on optimizing structural design to minimize embodied carbon, and the work that exists often results in designs that are “too complex to be built in real life,” he says. The embodied carbon of a structure is the total carbon dioxide emissions of its life cycle: from the extraction or manufacture of its materials to their transport and use and through the demolition of the structure and disposal of the materials. Schemmer, who works with Josephine V. Carstensen, the Gilbert W. Winslow Career Development Associate Professor of Civil and Environmental Engineering at MIT, is focusing on the portion of that cycle that runs through construction.

    In September, at the IASS 2024 symposium “Redefining the Art of Structural Design” in Zurich, Schemmer and Carstensen presented their work on Discrete Topology Optimization algorithms that are able to minimize the embodied carbon in a bridge or other structure by up to 20 percent. This comes through materials selection that considers not only a material’s appearance and its ability to get the job done, but also the ease of procurement, its proximity to the building site, and the carbon embodied in its manufacture and transport.

    “The real novelty of our algorithm is its ability to consider multiple materials in a highly constrained solution space to produce manufacturable designs with a user-specified force flow,” Schemmer says. “Real-life problems are complex and often have many constraints associated with them. In traditional formulations, it can be difficult to have a long list of complicated constraints. Our goal is to incorporate these constraints to make it easier to take our designs out of the computer and create them in real life.”

    Take, for instance, a steel tower, which could be a “super lightweight, efficient design solution,” Schemmer explains. Because steel is so strong, you don’t need as much of it compared to concrete or timber to build a big building. But steel is also very carbon-intensive to produce and transport. Shipping it across the country or especially from a different continent can sharply increase its embodied carbon price tag. Schemmer’s topology optimization will replace some of the steel with timber elements or decrease the amount of steel in other elements to create a hybrid structure that will function effectively and minimize the carbon footprint. “This is why using the same steel in two different parts of the world can lead to two different optimized designs,” he explains.

    Schemmer, who grew up in the mountains of Utah, earned a BS and MS in civil and environmental engineering from the University of California at Berkeley, where his graduate work focused on seismic design. He describes that education as providing a “very traditional, super-strong engineering background that tackled some of the toughest engineering problems,” along with knowledge of structural engineering’s traditions and current methods.

    But at MIT, he says, a lot of the work he sees “looks at removing the constraints of current societal conventions of doing things, and asks how could we do things if it was in a more ideal form; what are we looking at then? Which I think is really cool.” He adds: “But I think sometimes too, there’s a jump between the most-perfect version of something and where we are now, that there needs to be a bridge between those two. And I feel like my education helps me see that bridge.”

    The bridge he’s referring to is the topology optimization algorithms that make good designs better in terms of decreased global warming potential.

    “That’s where the optimization algorithm comes in,” Schemmer says. “In contrast to a standard structure designed in the past, the algorithm can take the same design space and come up with a much more efficient material usage that still meets all the structural requirements, is up to code, and has everything we want from a safety standpoint.”

    That’s also where the MAD Design Fellowship comes in. The program provides yearlong fellowships with full financial support to graduate students from all across the Institute who network with each other, with the MAD faculty, and with outside speakers who use design in new ways in a surprising variety of fields. This helps the fellows gain a better understanding of how to use iterative design in their own work.

    “Usually people think of their own work like, ‘Oh, I had this background. I’ve been looking at this one way for a very long time.’ And when you look at it from an outside perspective, I think it opens your mind to be like, ‘Oh my God. I never would have thought about doing this that way. Maybe I should try that.’ And then we can move to new ideas, new inspiration for better work,” Schemmer says.

    He chose civil and structural engineering over architecture some seven years ago, but says that “100 years ago, I don’t think architecture and structural engineering were two separate professions. I think there was an understanding of how things looked and how things worked, and it was merged together. Maybe from an efficiency standpoint, it’s better to have things done separately. But I think there’s something to be said for having knowledge about how the whole system works, potentially more intermingling between the free-form architectural design and the mathematical design of a civil engineer. Merging it back together, I think, has a lot of benefits.”

    Which brings us back to the Golden Gate Bridge, Schemmer’s longtime favorite. You can still hear that excited 3-year-old in his voice when he talks about it.

    “It’s so iconic,” he says. “It’s connecting these two spits of land that just rise straight up out of the ocean. There’s this fog that comes in and out a lot of days. It’s a really magical place, from the size of the cable strands and everything. It’s just, ‘Wow.’ People built this over 100 years ago, before the existence of a lot of the computational tools that we have now. So, all the math, everything in the design, was all done by hand and from the mind. Nothing was computerized, which I think is crazy to think about.”

    As Schemmer continues work on his doctoral degree at MIT, the MAD fellowship will expose him to many more awe-inspiring ideas in other fields, which he can combine with his engineering knowledge to design better ways of building bridges and other structures.
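    The materials-selection idea described above can be illustrated with a deliberately tiny sketch: for each structural element, choose the option with the least embodied carbon that still carries the load. All values below are invented, and the real Discrete Topology Optimization algorithm couples this choice with force flow, member sizing, and manufacturability constraints.

```python
# Toy sketch of multi-material selection to minimize embodied carbon,
# inspired by the steel-vs-timber example above. All numbers are invented
# for illustration; this is not the Discrete Topology Optimization algorithm.

from dataclasses import dataclass

@dataclass
class Option:
    material: str
    capacity_kn: float        # load the element can safely carry
    embodied_co2_kg: float    # carbon from manufacture and transport

def pick_option(required_kn: float, options: list[Option]) -> Option:
    """Among options strong enough for the required load, take the one
    with the lowest embodied carbon."""
    feasible = [o for o in options if o.capacity_kn >= required_kn]
    return min(feasible, key=lambda o: o.embodied_co2_kg)

options = [
    Option("steel",  capacity_kn=500, embodied_co2_kg=300),
    Option("timber", capacity_kn=200, embodied_co2_kg=60),
]

# A lightly loaded element can use timber; a heavily loaded one needs steel,
# yielding exactly the kind of hybrid structure described above.
light = pick_option(150, options).material   # -> "timber"
heavy = pick_option(400, options).material   # -> "steel"
```

    Because the embodied-carbon figures depend on transport distance, the same catalog of options evaluated in two different places can produce two different designs, which is the point Schemmer makes about steel.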

  • How hard is it to prevent recurring blackouts in Puerto Rico?

    Researchers at MIT’s Laboratory for Information and Decision Systems (LIDS) have shown that using decision-making software and dynamic monitoring of weather and energy use can significantly improve resiliency in the face of weather-related outages, and can also help to efficiently integrate renewable energy sources into the grid.

    The researchers point out that the system they suggest might have prevented or at least lessened the kind of widespread power outage that Puerto Rico experienced last week by providing analysis to guide rerouting of power through different lines and thus limit the spread of the outage.

    The computer platform, which the researchers describe as DyMonDS, for Dynamic Monitoring and Decision Systems, can be used to enhance the existing operating and planning practices used in the electric industry. The platform supports interactive information exchange and decision-making between the grid operators and grid-edge users — all the distributed power sources, storage systems, and software that contribute to the grid. It also supports optimization of available resources and controllable grid equipment as system conditions vary. It further lends itself to implementing cooperative decision-making by different utility- and non-utility-owned electric power grid users, including portfolios of mixed resources, users, and storage. Operating and planning the interactions of the end-to-end high-voltage transmission grid with local distribution grids and microgrids represents another major potential use of this platform.

    This general approach was illustrated using a set of publicly available data on both meteorology and details of electricity production and distribution in Puerto Rico. An extended AC Optimal Power Flow software developed by SmartGridz Inc. is used for system-level optimization of controllable equipment. This provides real-time guidance for deciding how much power, and through which transmission lines, should be channeled by adjusting plant dispatch and voltage-related set points, and in extreme cases, where to reduce or cut power in order to maintain physically implementable service for as many customers as possible. The team found that the use of such a system can help to ensure that the greatest number of critical services maintain power even during a hurricane, and at the same time can lead to a substantial decrease in the need for construction of new power plants thanks to more efficient use of existing resources.

    The findings are described in a paper in the journal Foundations and Trends in Electric Energy Systems, by MIT LIDS researchers Marija Ilic and Laurentiu Anton, along with recent alumna Ramapathi Jaddivada.

    “Using this software,” Ilic says, they show that “even during bad weather, if you predict equipment failures, and by using that information exchange, you can localize the effect of equipment failures and still serve a lot of customers, 50 percent of customers, when otherwise things would black out.”

    Anton says that “the way many grids today are operated is sub-optimal.” As a result, “we showed how much better they could do even under normal conditions, without any failures, by utilizing this software.” The savings resulting from this optimization, under everyday conditions, could amount to tens of percent, they say.

    The way utility systems plan currently, Ilic says, “usually the standard is that they have to build enough capacity and operate in real time so that if one large piece of equipment fails, like a large generator or transmission line, you still serve customers in an uninterrupted way. That’s what’s called N-minus-1.” Under this policy, if one major component of the system fails, they should be able to maintain service for at least 30 minutes. That system allows utilities to plan for how much reserve generating capacity they need to have on hand. That’s expensive, Ilic points out, because it means maintaining this reserve capacity all the time, even under normal operating conditions when it’s not needed.

    In addition, “right now there are no criteria for what I call N-minus-K,” she says. If bad weather causes five pieces of equipment to fail at once, “there is no software to help utilities decide what to schedule” in terms of keeping the most customers, and the most important services such as hospitals and emergency services, provided with power. They showed that even with 50 percent of the infrastructure out of commission, it would still be possible to keep power flowing to a large proportion of customers.

    Their work on analyzing the power situation in Puerto Rico started after the island had been devastated by hurricanes Irma and Maria. Most of the electric generation capacity is in the south, yet the largest loads are in San Juan, in the north, and Mayaguez in the west. When transmission lines get knocked down, a lot of rerouting of power needs to happen quickly.

    With the new system, “the software finds the optimal adjustments for set points,” Anton says. For example, voltages can be changed to redirect power through less-congested lines, or increased to lessen power losses.

    The software also helps in the long-term planning for the grid. As many fossil-fuel power plants are scheduled to be decommissioned soon in Puerto Rico, as they are in many other places, planning for how to replace that power without having to resort to greenhouse gas-emitting sources is a key to achieving carbon-reduction goals. And by analyzing usage patterns, the software can guide the placement of new renewable power sources where they can most efficiently provide power where and when it’s needed.

    As plants are retired or as components are affected by weather, “We wanted to ensure the dispatchability of power when the load changes,” Anton says, “but also when crucial components are lost, to ensure the robustness at each step of the retirement schedule.”

    One thing they found was that “if you look at how much generating capacity exists, it’s more than the peak load, even after you retire a few fossil plants,” Ilic says. “But it’s hard to deliver.” Strategic planning of new distribution lines could make a big difference.

    Jaddivada, director of innovation at SmartGridz, says that “we evaluated different possible architectures in Puerto Rico, and we showed the ability of this software to ensure uninterrupted electricity service. This is the most important challenge utilities have today. They have to go through a computationally tedious process to make sure the grid functions for any possible outage in the system. And that can be done in a much more efficient way through the software that the company developed.”

    The project was a collaborative effort between the MIT LIDS researchers and others at MIT Lincoln Laboratory and the Pacific Northwest National Laboratory, with overall help from SmartGridz software.
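    The N-minus-K question raised above (which loads to keep energized when several components fail and capacity is scarce) can be caricatured as a priority-ordered scheduling problem. The greedy sketch below is an invented simplification for illustration; DyMonDS actually solves an AC optimal power flow over the real network, with voltages and line limits.

```python
# Toy illustration of prioritized load scheduling under an N-minus-K
# contingency: serve the most critical loads first until the surviving
# capacity runs out. Names, demands, and priorities are invented; the real
# system optimizes power flow over the actual grid topology.

def schedule_loads(capacity_mw: float,
                   loads: list[tuple[str, float, int]]) -> list[str]:
    """Serve loads in priority order (lower number = more critical)
    while remaining capacity allows."""
    served = []
    remaining = capacity_mw
    for name, demand_mw, _priority in sorted(loads, key=lambda l: l[2]):
        if demand_mw <= remaining:
            served.append(name)
            remaining -= demand_mw
    return served

loads = [
    ("hospital",      20.0, 0),  # critical service
    ("water plant",   15.0, 0),  # critical service
    ("residential A", 25.0, 1),
    ("industrial",    50.0, 2),
]

# With roughly half of a nominal 130 MW available after failures,
# critical services and some residential load still get power.
served = schedule_loads(65.0, loads)
```

    The point of the article's result is stronger than this sketch suggests: with the right set-point adjustments, a large share of customers can stay energized even with half the infrastructure out of commission.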

  • Unlocking the hidden power of boiling — for energy, space, and beyond

    Most people take boiling water for granted. For Associate Professor Matteo Bucci, uncovering the physics behind boiling has been a decade-long journey filled with unexpected challenges and new insights.The seemingly simple phenomenon is extremely hard to study in complex systems like nuclear reactors, and yet it sits at the core of a wide range of important industrial processes. Unlocking its secrets could thus enable advances in efficient energy production, electronics cooling, water desalination, medical diagnostics, and more.“Boiling is important for applications way beyond nuclear,” says Bucci, who earned tenure at MIT in July. “Boiling is used in 80 percent of the power plants that produce electricity. My research has implications for space propulsion, energy storage, electronics, and the increasingly important task of cooling computers.”Bucci’s lab has developed new experimental techniques to shed light on a wide range of boiling and heat transfer phenomena that have limited energy projects for decades. Chief among those is a problem caused by bubbles forming so quickly they create a band of vapor across a surface that prevents further heat transfer. In 2023, Bucci and collaborators developed a unifying principle governing the problem, known as the boiling crisis, which could enable more efficient nuclear reactors and prevent catastrophic failures.For Bucci, each bout of progress brings new possibilities — and new questions to answer.“What’s the best paper?” Bucci asks. “The best paper is the next one. I think Alfred Hitchcock used to say it doesn’t matter how good your last movie was. If your next one is poor, people won’t remember it. I always tell my students that our next paper should always be better than the last. It’s a continuous journey of improvement.”From engineering to bubblesThe Italian village where Bucci grew up had a population of about 1,000 during his childhood. 
He gained mechanical skills by working in his father’s machine shop and by taking apart and reassembling appliances like washing machines and air conditioners to see what was inside. He also gained a passion for cycling, competing in the sport until he attended the University of Pisa for undergraduate and graduate studies.In college, Bucci was fascinated with matter and the origins of life, but he also liked building things, so when it came time to pick between physics and engineering, he decided nuclear engineering was a good middle ground.“I have a passion for construction and for understanding how things are made,” Bucci says. “Nuclear engineering was a very unlikely but obvious choice. It was unlikely because in Italy, nuclear was already out of the energy landscape, so there were very few of us. At the same time, there were a combination of intellectual and practical challenges, which is what I like.”For his PhD, Bucci went to France, where he met his wife, and went on to work at a French national lab. One day his department head asked him to work on a problem in nuclear reactor safety known as transient boiling. To solve it, he wanted to use a method for making measurements pioneered by MIT Professor Jacopo Buongiorno, so he received grant money to become a visiting scientist at MIT in 2013. He’s been studying boiling at MIT ever since.Today Bucci’s lab is developing new diagnostic techniques to study boiling and heat transfer along with new materials and coatings that could make heat transfer more efficient. 
The work has given researchers an unprecedented view into the conditions inside a nuclear reactor.“The diagnostics we’ve developed can collect the equivalent of 20 years of experimental work in a one-day experiment,” Bucci says.That data, in turn, led Bucci to a remarkably simple model describing the boiling crisis.“The effectiveness of the boiling process on the surface of nuclear reactor cladding determines the efficiency and the safety of the reactor,” Bucci explains. “It’s like a car that you want to accelerate, but there is an upper limit. For a nuclear reactor, that upper limit is dictated by boiling heat transfer, so we are interested in understanding what that upper limit is and how we can overcome it to enhance the reactor performance.”Another particularly impactful area of research for Bucci is two-phase immersion cooling, a process wherein hot server parts bring liquid to boil, then the resulting vapor condenses on a heat exchanger above to create a constant, passive cycle of cooling.“It keeps chips cold with minimal waste of energy, significantly reducing the electricity consumption and carbon dioxide emissions of data centers,” Bucci explains. “Data centers emit as much CO2 as the entire aviation industry. By 2040, they will account for over 10 percent of emissions.”Supporting studentsBucci says working with students is the most rewarding part of his job. “They have such great passion and competence. It’s motivating to work with people who have the same passion as you.”“My students have no fear to explore new ideas,” Bucci adds. “They almost never stop in front of an obstacle — sometimes to the point where you have to slow them down and put them back on track.”In running the Red Lab in the Department of Nuclear Science and Engineering, Bucci tries to give students independence as well as support.“We’re not educating students, we’re educating future researchers,” Bucci says. 
“I think the most important part of our work is to not only provide the tools, but also to give the confidence and the self-starting attitude to fix problems. That can be business problems, problems with experiments, problems with your lab mates.”

Some of the more unique experiments Bucci’s students do require them to gather measurements while free falling in an airplane to achieve zero gravity.

“Space research is the big fantasy of all the kids,” says Bucci, who joins students in the experiments about twice a year. “It’s very fun and inspiring research for students. Zero g gives you a new perspective on life.”

Applying AI

Bucci is also excited about incorporating artificial intelligence into his field. In 2023, he was a co-recipient of a multi-university research initiative (MURI) project in thermal science dedicated solely to machine learning. In a nod to the promise AI holds in his field, Bucci also recently founded a journal called AI Thermal Fluids to feature AI-driven research advances.

“Our community doesn’t have a home for people that want to develop machine-learning techniques,” Bucci says. “We wanted to create an avenue for people in computer science and thermal science to work together to make progress. I think we really need to bring computer scientists into our community to speed this process up.”

Bucci also believes AI can be used to process huge reams of data gathered using the new experimental techniques he’s developed, as well as to model phenomena researchers can’t yet study.

“It’s possible that AI will give us the opportunity to understand things that cannot be observed, or at least guide us in the dark as we try to find the root causes of many problems,” Bucci says.
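The two-phase immersion cooling Bucci describes can be illustrated with a back-of-envelope energy balance: the heat a server dissipates is absorbed as latent heat when the coolant boils, which sets the boil-off rate. This is a minimal sketch; the heat load and latent heat below are illustrative assumptions (the latent heat is only order-of-magnitude typical for engineered dielectric fluids, and the exact value varies by fluid).

```python
# Back-of-envelope energy balance for two-phase immersion cooling.
# All figures are illustrative assumptions, not measured values.

def boil_off_rate(heat_load_w: float, latent_heat_j_per_kg: float) -> float:
    """Mass of coolant vaporized per second to absorb heat_load_w watts."""
    return heat_load_w / latent_heat_j_per_kg

# Assumed: a 10 kW server rack and a dielectric coolant with a latent
# heat of vaporization of roughly 90 kJ/kg (hypothetical round number).
rack_power_w = 10_000.0
h_fg_j_per_kg = 90_000.0

rate = boil_off_rate(rack_power_w, h_fg_j_per_kg)
print(f"Vapor generated: {rate:.3f} kg/s ({rate * 3600:.0f} kg/h)")
# → Vapor generated: 0.111 kg/s (400 kg/h)
# The vapor condenses on the heat exchanger above and drips back down,
# so this mass circulates passively rather than being consumed.
```

The point of the sketch is that no pumps or chillers appear in the balance: the phase change alone carries the heat from the chips to the condenser.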


    Helping students bring about decarbonization, from benchtop to global energy marketplace

MIT students are adept at producing research and innovations at the cutting edge of their fields. But addressing a problem as large as climate change requires understanding the world’s energy landscape, as well as the ways energy technologies evolve over time.

Since 2010, the course IDS.521/IDS.065 (Energy Systems for Climate Change Mitigation) has equipped students with the skills they need to evaluate the various energy decarbonization pathways available to the world. The work is designed to help them maximize their impact on the world’s emissions by making better decisions along their respective career paths.

“The question guiding my teaching and research is how do we solve big societal challenges with technology, and how can we be more deliberate in developing and supporting technologies to get us there?” says Professor Jessika Trancik, who started the course to help fill a gap in knowledge about the ways technologies evolve and scale over time.

Since its inception in 2010, the course has attracted graduate students from across MIT’s five schools. The course has also recently opened to undergraduate students and been adapted into an online course for professionals.

Class sessions alternate between lectures and student discussions that lead up to semester-long projects in which groups of students explore specific strategies and technologies for reducing global emissions. This year’s projects span several topics, including how quickly transmission infrastructure is expanding, the relationship between carbon emissions and human development, and how to decarbonize the production of key chemicals.

The curriculum is designed to help students identify the most promising ways to mitigate climate change whether they plan to be scientists, engineers, policymakers, investors, urban planners, or just more informed citizens.

“We’re coming at this issue from both sides,” explains Trancik, who is part of MIT’s Institute for Data, Systems, and Society. 
“Engineers are used to designing a technology to work as well as possible here and now, but not always thinking over a longer time horizon about a technology evolving and succeeding in the global marketplace. On the flip side, for students at the macro level, often studies in policy and economics of technological change don’t fully account for the physical and engineering constraints of rates of improvement. But all of that information allows you to make better decisions.”

Bridging the gap

As a young researcher working on low-carbon polymers and electrode materials for solar cells, Trancik always wondered how the materials she worked on would scale in the real world. They might achieve promising performance benchmarks in the lab, but would they actually make a difference in mitigating climate change? Later, she began focusing increasingly on developing methods for predicting how technologies might evolve.

“I’ve always been interested in both the macro and the micro, or even nano, scales,” Trancik says. “I wanted to know how to bridge these new technologies we’re working on with the big picture of where we want to go.”

Trancik described her technology-grounded approach to decarbonization in a paper that formed the basis for IDS.065. In the paper, she presented a way to evaluate energy technologies against climate-change mitigation goals while focusing on the technology’s evolution.

“That was a departure from previous approaches, which said, given these technologies with fixed characteristics and assumptions about their rates of change, how do I choose the best combination?” Trancik explains. “Instead we asked: Given a goal, how do we develop the best technologies to meet that goal? 
That inverts the problem in a way that’s useful to engineers developing these technologies, but also to policymakers and investors that want to use the evolution of technologies as a tool for achieving their objectives.”

This past semester, the class took place every Tuesday and Thursday in a classroom on the first floor of the Stata Center. Students regularly led discussions where they reflected on the week’s readings and offered their own insights.

“Students always share their takeaways and get to ask open questions of the class,” says Megan Herrington, a PhD candidate in the Department of Chemical Engineering. “It helps you understand the readings on a deeper level because people with different backgrounds get to share their perspectives on the same questions and problems. Everybody comes to class with their own lens, and the class is set up to highlight those differences.”

The semester begins with an overview of climate science, the origins of emissions-reduction goals, and technology’s role in achieving those goals. Students then learn how to evaluate technologies against decarbonization goals.

But technologies aren’t static, and neither is the world. Later lessons help students account for the change of technologies over time, identifying the mechanisms for that change and even forecasting rates of change.

Students also learn about the role of government policy. This year, Trancik shared her experience traveling to the COP29 United Nations Climate Change Conference.

“It’s not just about technology,” Trancik says. “It’s also about the behaviors that we engage in and the choices we make. But technology plays a major role in determining what set of choices we can make.”

From the classroom to the world

Students in the class say it has given them a new perspective on climate change mitigation.

“I have really enjoyed getting to see beyond the research people are doing at the benchtop,” says Herrington. 
“It’s interesting to see how certain materials or technologies that aren’t scalable yet may fit into a larger transformation in energy delivery and consumption. It’s also been interesting to pull back the curtain on energy systems analysis to understand where the metrics we cite in energy-related research originate from, and to anticipate trajectories of emerging technologies.”

Onur Talu, a first-year master’s student in the Technology and Policy Program, says the class has made him more hopeful.

“I came into this fairly pessimistic about the climate,” says Talu, who has worked for clean technology startups in the past. “This class has taught me different ways to look at the problem of climate change mitigation and developing renewable technologies. It’s also helped put into perspective how much we’ve accomplished so far.”

Several student projects from the class over the years have been developed into papers published in peer-reviewed journals. They have also been turned into tools, like carboncounter.com, which plots the emissions and costs of cars and has been featured in The New York Times.

Former class students have also launched startups; Joel Jean SM ’13, PhD ’17, for example, started Swift Solar. Others have drawn on the course material to develop impactful careers in government and academia, such as Patrick Brown PhD ’16 at the National Renewable Energy Laboratory and Leah Stokes SM ’15, PhD ’15 at the University of California at Santa Barbara.

Overall, students say the course helps them take a more informed approach to applying their skills toward addressing climate change.

“It’s not enough to just know how bad climate change could be,” says Yu Tong, a first-year master’s student in civil and environmental engineering. “It’s also important to understand how technology can work to mitigate climate change from both a technological and market perspective. It’s about employing technology to solve these issues rather than just working in a vacuum.”
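The kind of technology-improvement forecasting the course covers is often framed as an experience curve (Wright’s law), under which unit cost falls by a fixed fraction with every doubling of cumulative production. This is a minimal sketch of that relationship; the starting cost, units, and 20 percent learning rate are hypothetical numbers, not course data.

```python
# Minimal experience-curve (Wright's law) sketch: cost falls by a fixed
# fraction (the learning rate) for every doubling of cumulative production.
import math

def wrights_law(c0: float, x0: float, x: float, learning_rate: float) -> float:
    """Projected unit cost after cumulative production grows from x0 to x."""
    b = -math.log2(1.0 - learning_rate)  # experience exponent
    return c0 * (x / x0) ** (-b)

# Assumed starting point: $100/unit at 1 GW cumulative capacity,
# with a 20% learning rate (cost drops 20% per doubling).
for cum_gw in (1, 2, 4, 8, 16):
    cost = wrights_law(100.0, 1.0, float(cum_gw), 0.20)
    print(f"{cum_gw:>2} GW cumulative -> ${cost:.2f}/unit")
# Each doubling (1 -> 2 -> 4 -> 8 -> 16 GW) multiplies cost by 0.8:
# $100.00, $80.00, $64.00, $51.20, $40.96 per unit.
```

Inverting this relationship, as the article describes, means starting from a target cost and asking how much cumulative deployment, and therefore what policy and investment, would be needed to reach it.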


    MIT spinout Commonwealth Fusion Systems unveils plans for the world’s first fusion power plant

America is one step closer to tapping into a new and potentially limitless clean energy source today, with the announcement from MIT spinout Commonwealth Fusion Systems (CFS) that it plans to build the world’s first grid-scale fusion power plant in Chesterfield County, Virginia.

The announcement is the latest milestone for the company, which has made groundbreaking progress toward harnessing fusion — the reaction that powers the sun — since its founders first conceived of their approach in an MIT classroom in 2012. CFS is now commercializing a suite of advanced technologies developed in MIT research labs.

“This moment exemplifies the power of MIT’s mission, which is to create knowledge that serves the nation and the world, whether via the classroom, the lab, or out in communities,” MIT Vice President for Research Ian Waitz says. “From student coursework 12 years ago to today’s announcement of the siting in Virginia of the world’s first fusion power plant, progress has been amazingly rapid. At the same time, we owe this progress to over 65 years of sustained investment by the U.S. federal government in basic science and energy research.”

The new fusion power plant, named ARC, is expected to come online in the early 2030s and generate about 400 megawatts of clean, carbon-free electricity — enough energy to power large industrial sites or about 150,000 homes.

The plant will be built at the James River Industrial Park outside of Richmond through a nonfinancial collaboration with Dominion Energy Virginia, which will provide development and technical expertise along with leasing rights for the site. CFS will independently finance, build, own, and operate the power plant.

The plant will support Virginia’s economic and clean energy goals by generating what is expected to be billions of dollars in economic development and hundreds of jobs during its construction and long-term operation.

More broadly, ARC will position the U.S. 
to lead the world in harnessing a new form of safe and reliable energy that could prove critical for economic prosperity and national security, including for meeting increasing electricity demands driven by needs like artificial intelligence.

“This will be a watershed moment for fusion,” says CFS co-founder Dennis Whyte, the Hitachi America Professor of Engineering at MIT. “It sets the pace in the race toward commercial fusion power plants. The ambition is to build thousands of these power plants and to change the world.”

Fusion can generate energy from abundant fuels like hydrogen and lithium isotopes, which can be sourced from seawater, and leave behind no emissions or toxic waste. However, harnessing fusion in a way that produces more power than it takes in has proven difficult because of the high temperatures needed to create and maintain the fusion reaction. Over the course of decades, scientists and engineers have worked to make the dream of fusion power plants a reality.

In 2012, teaching the MIT class 22.63 (Principles of Fusion Engineering), Whyte challenged a group of graduate students to design a fusion device that would use a new kind of superconducting magnet to confine the plasma used in the reaction. It turned out the magnets enabled a more compact and economic reactor design. When Whyte reviewed his students’ work, he realized that could mean a new development path for fusion.

Since then, a huge amount of capital and expertise has rushed into the once-fledgling fusion industry. Today there are dozens of private fusion companies around the world racing to develop the first net-energy fusion power plants, many utilizing the new superconducting magnets. CFS, which Whyte founded with several students from his class, has attracted more than $2 billion in funding.

“It all started with that class, where our ideas kept evolving as we challenged the standard assumptions that came with fusion,” Whyte says. 
“We had this new superconducting technology, so much of the common wisdom was no longer valid. It was a perfect forum for students, who can challenge the status quo.”

Since the company’s founding in 2017, it has collaborated with researchers in MIT’s Plasma Science and Fusion Center (PSFC) on a range of initiatives, from validating the underlying plasma physics for the first demonstration machine to breaking records with a new kind of magnet to be used in commercial fusion power plants. Each piece of progress moves the U.S. closer to harnessing a revolutionary new energy source.

CFS is currently completing development of its fusion demonstration machine, SPARC, at its headquarters in Devens, Massachusetts. SPARC is expected to produce its first plasma in 2026 and net fusion energy shortly after, demonstrating for the first time a commercially relevant design that will produce more power than it consumes. SPARC will pave the way for ARC, which is expected to deliver power to the grid in the early 2030s.

“There’s more challenging engineering and science to be done in this field, and we’re very enthusiastic about the progress that CFS and the researchers on our campus are making on those problems,” Waitz says. “We’re in a ‘hockey stick’ moment in fusion energy, where things are moving incredibly quickly now. On the other hand, we can’t forget about the much longer part of that hockey stick, the sustained support for very complex, fundamental research that underlies great innovations. If we’re going to continue to lead the world in these cutting-edge technologies, continued investment in those areas will be crucial.”


    In a unique research collaboration, students make the case for less e-waste

Brought together as part of the Social and Ethical Responsibilities of Computing (SERC) initiative within the MIT Schwarzman College of Computing, a community of students known as SERC Scholars is collaborating to examine the most urgent problems humans face in the digital landscape.

Each semester, students at all levels from across MIT are invited to join a different topical working group led by a SERC postdoctoral associate. Each group delves into a specific issue — such as surveillance or data ownership — culminating in a final project presented at the end of the term.

Typically, students complete the program with hands-on experience conducting research in a new cross-disciplinary field. However, one group of undergraduate and graduate students recently had the unique opportunity to enhance their resumes by becoming published authors of a case study about the environmental and climate justice implications of the electronics hardware life cycle.

Although it’s not uncommon for graduate students to co-author case studies, it’s unusual for undergraduates to earn this opportunity — and for their audience to be other undergraduates around the world.

“Our team was insanely interdisciplinary,” says Anastasia Dunca, a junior studying computer science and one of the co-authors. “I joined the SERC Scholars Program because I liked the idea of being part of a cohort from across MIT working on a project that utilized all of our skillsets. It also helps [undergraduates] learn the ins and outs of computing ethics research.”

Case study co-author Jasmin Liu, an MBA student in the MIT Sloan School of Management, sees the program as a platform to learn about the intersection of technology, society, and ethics: “I met team members spanning computer science, urban planning, to art/culture/technology. I was excited to work with a diverse team because I know complex problems must be approached with many different perspectives. 
Combining my background in humanities and business with the expertise of others allowed us to be more innovative and comprehensive.”

Christopher Rabe, a former SERC postdoc who facilitated the group, says, “I let the students take the lead on identifying the topic and conducting the research.” His goal for the group was to challenge students across disciplines to develop a working definition of climate justice.

From mining to e-waste

The SERC Scholars’ case study, “From Mining to E-waste: The Environmental and Climate Justice Implications of the Electronics Hardware Life Cycle,” was published by the MIT Case Studies in Social and Ethical Responsibilities of Computing.

The ongoing case studies series, which releases new issues twice a year on an open-source platform, is enabling undergraduate instructors worldwide to incorporate research-based education materials on computing ethics into their existing class syllabi.

This particular case study broke down the electronics life cycle from mining to manufacturing, usage, and disposal. It offered an in-depth look at how this cycle promotes inequity in the Global South. Mining for the roughly 60 minerals that power everyday devices leads to illegal deforestation that compromises air quality in the Amazon and triggers armed conflict in Congo. Manufacturing leads to proven health risks for both formal and informal workers, some of whom are child laborers.

Life cycle assessment and circular economy are proposed as mechanisms for analyzing environmental and climate justice issues in the electronics life cycle. 
Rather than posing solutions, the case study offers readers entry points for further discussion and for assessing their own individual responsibility as producers of e-waste.

Crufting and crafting a case study

Dunca joined Rabe’s working group, intrigued by the invitation to conduct a rigorous literature review examining issues like data center resource and energy use, manufacturing waste, ethical issues with AI, and climate change. Rabe quickly realized that a common thread among all participants was an interest in understanding and reducing e-waste and its impact on the environment.

“I came in with the idea of us co-authoring a case study,” Rabe says. However, the writing-intensive process was initially daunting to those students who were used to conducting applied research. Once Rabe created sub-groups with discrete tasks, the steps for researching, writing, and iterating a case study became more approachable.

For Ellie Bultena, an undergraduate student studying linguistics and philosophy and a contributor to the study, that meant conducting field research on the loading dock of MIT’s Stata Center, where students and faculty go “crufting” through piles of clunky printers, broken computers, and used lab equipment discarded by the Institute’s labs, departments, and individual users.

Although not a formally sanctioned activity on campus, “crufting” is the act of gleaning usable parts from these junk piles to be repurposed into new equipment or art. 
Bultena’s respondents, who opted to be anonymous, said that MIT could do better when it comes to the amount of e-waste generated and suggested that formal strategies could be implemented to encourage community members to repair equipment more easily or recycle more formally.

Rabe, now an education program director at the MIT Environmental Solutions Initiative, is hopeful that through the Zero-Carbon Campus Initiative, which commits MIT to eliminating all direct emissions by 2050, MIT will ultimately become a model for other higher education institutions.

Although the group lacked the time and resources to travel to the communities in the Global South that they profiled in their case study, members leaned into exhaustive secondary research, collecting data on how some countries are irresponsibly dumping e-waste while others have developed alternative solutions that can be duplicated elsewhere and scaled.

“We source materials, manufacture them, and then throw them away,” says Lelia Hampton, a PhD candidate in electrical engineering and computer science and another co-author. Hampton jumped at the opportunity to serve in a writing role, bringing together the sub-groups’ research findings. “I’d never written a case study, and it was exciting. Now I want to write 10 more.”

The content directly informed Hampton’s dissertation research, which “looks at applying machine learning to climate justice issues such as urban heat islands.” She said that writing a case study that is accessible to general audiences prepared her for the nonprofit organization she’s determined to start. 
“It’s going to provide communities with free resources and data needed to understand how they are impacted by climate change and begin to advocate against injustice,” Hampton explains.

Dunca, Liu, Rabe, Bultena, and Hampton are joined on the case study by fellow authors Mrinalini Singha, a graduate student in the Art, Culture, and Technology program; Sungmoon Lim, a graduate student in urban studies and planning and EECS; Lauren Higgins, an undergraduate majoring in political science; and Madeline Schlegal, a Northeastern University co-op student.

Taking the case study to classrooms around the world

Although PhD candidates have contributed to previous case studies in the series, this publication is the first to be co-authored with MIT undergraduates. As with any peer-reviewed publication, the SERC Scholars’ case study was anonymously reviewed before publication by senior scholars drawn from various fields.

The series editor, David Kaiser, also served as one of SERC’s inaugural associate deans and helped shape the program. “The case studies, by design, are short, easy to read, and don’t take up lots of time,” Kaiser explains. “They are gateways for students to explore, and instructors can cover a topic that has likely already been on their mind.”

This semester, Kaiser, the Germeshausen Professor of the History of Science and a professor of physics, is teaching STS.004 (Intersections: Science, Technology, and the World), an undergraduate introduction to the field of science, technology, and society. The last month of the semester has been dedicated wholly to SERC case studies, one of which is “From Mining to E-Waste.”

Hampton was visibly moved to hear that the case study is being used not only at MIT but also by some of the 250,000 visitors to the SERC platform, many of whom are based in the Global South and directly impacted by the issues she and her cohort researched. 
“Many students are focused on climate, whether through computer science, data science, or mechanical engineering. I hope that this case study educates them on environmental and climate aspects of e-waste and computing.”