More stories

  •

    How can we reduce the carbon footprint of global computing?

    The voracious appetite for energy from the world’s computers and communications technology presents a clear threat to a warming planet. That was the blunt assessment from presenters at the intensive two-day Climate Implications of Computing and Communications workshop held on March 3 and 4, hosted by MIT’s Climate and Sustainability Consortium (MCSC), MIT-IBM Watson AI Lab, and the Schwarzman College of Computing.

    The virtual event featured rich discussions and highlighted opportunities for collaboration among an interdisciplinary group of MIT faculty and researchers and industry leaders across multiple sectors — underscoring the power of academia and industry coming together.

    “If we continue with the existing trajectory of compute energy, by 2040, we are supposed to hit the world’s energy production capacity. The increase in compute energy and demand has been increasing at a much faster rate than the world energy production capacity increase,” said Bilge Yildiz, the Breene M. Kerr Professor in the MIT departments of Nuclear Science and Engineering and Materials Science and Engineering, one of the workshop’s 18 presenters. This computing energy projection draws from the Semiconductor Research Corporation’s decadal report.

    To cite just one example: information and communications technology already accounts for more than 2 percent of global energy demand, on a par with the aviation industry’s emissions from fuel.

    “We are at the very beginning of this data-driven world. We really need to start thinking about this and act now,” said presenter Evgeni Gousev, senior director at Qualcomm.

    Innovative energy-efficiency options

    To that end, the workshop presentations explored a host of energy-efficiency options, including specialized chip design, data center architecture, better algorithms, hardware modifications, and changes in consumer behavior. Industry leaders from AMD, Ericsson, Google, IBM, iRobot, NVIDIA, Qualcomm, Tertill, Texas Instruments, and Verizon outlined their companies’ energy-saving programs, while experts from across MIT provided insight into current research that could yield more efficient computing.

    Panel topics ranged from “Custom hardware for efficient computing” to “Hardware for new architectures” to “Algorithms for efficient computing,” among others.

    Visual representation of the conversation during the workshop session entitled “Energy Efficient Systems.”

    Image: Haley McDevitt


    The goal, said Yildiz, is to improve the energy efficiency of computing by more than a million-fold.

    “I think part of the answer of how we make computing much more sustainable has to do with specialized architectures that have very high level of utilization,” said Darío Gil, IBM senior vice president and director of research, who stressed that solutions should be as “elegant” as possible. For example, Gil illustrated an innovative chip design that uses vertical stacking to reduce the distance data has to travel, and thus reduces energy consumption. Surprisingly, more effective use of tape — a traditional medium for primary data storage — combined with specialized hard disk drives (HDDs), can yield dramatic savings in carbon dioxide emissions.

    Gil and presenters Bill Dally, chief scientist and senior vice president of research at NVIDIA; Ahmad Bahai, CTO of Texas Instruments; and others zeroed in on storage. Gil compared data to a floating iceberg: we can have fast access to the “hot data” of the smaller visible part, while the “cold data,” the large underwater mass, represents data that tolerates higher latency. Think about digital photo storage, Gil said. “Honestly, are you really retrieving all of those photographs on a continuous basis?” Storage systems should provide an optimized mix of HDDs for hot data and tape for cold data, based on data access patterns.

    Bahai stressed the significant energy savings gained from segmenting standby and full processing. “We need to learn how to do nothing better,” he said. Dally spoke of mimicking the way our brain wakes up from a deep sleep: “We can wake [computers] up much faster, so we don’t need to keep them running in full speed.”

    Several workshop presenters spoke of a focus on “sparsity,” a matrix in which most of the elements are zero, as a way to improve efficiency in neural networks.
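    Gil’s iceberg analogy maps naturally onto a simple storage-tiering policy. A minimal sketch, in which the 30-day threshold and the media assignments are illustrative assumptions rather than any vendor’s actual policy:

```python
from datetime import datetime, timedelta

# Illustrative assumption: data untouched for 30 days counts as "cold".
HOT_THRESHOLD = timedelta(days=30)

def assign_tier(last_access: datetime, now: datetime) -> str:
    """Route frequently accessed ("hot") data to fast HDD storage and
    rarely accessed ("cold") data to lower-energy tape."""
    return "hdd" if now - last_access <= HOT_THRESHOLD else "tape"

now = datetime(2022, 5, 1)
photos = {
    "vacation_2012.jpg": datetime(2012, 7, 4),   # untouched for years: tape
    "screenshot.png":    datetime(2022, 4, 28),  # accessed this week: HDD
}
tiers = {name: assign_tier(ts, now) for name, ts in photos.items()}
```

    In a real system the threshold would be tuned from observed access patterns, which is the point Gil makes: most archived photos tolerate the higher latency of tape.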
    Or as Dally said, “Never put off till tomorrow, where you could put off forever,” explaining that efficiency is not “getting the most information with the fewest bits. It’s doing the most with the least energy.”

    Holistic and multidisciplinary approaches

    “We need both efficient algorithms and efficient hardware, and sometimes we need to co-design both the algorithm and the hardware for efficient computing,” said Song Han, a panel moderator and assistant professor in the Department of Electrical Engineering and Computer Science (EECS) at MIT.

    Some presenters were optimistic about innovations already underway. According to Ericsson’s research, as much as 15 percent of global carbon emissions can be reduced through the use of existing solutions, noted Mats Pellbäck Scharp, head of sustainability at Ericsson. For example, GPUs are more efficient than CPUs for AI, and the progression from 3G to 5G networks boosts energy savings.

    “5G is the most energy efficient standard ever,” said Scharp. “We can build 5G without increasing energy consumption.”

    Companies such as Google are optimizing energy use at their data centers through improved design, technology, and renewable energy. “Five of our data centers around the globe are operating near or above 90 percent carbon-free energy,” said Jeff Dean, Google’s senior fellow and senior vice president of Google Research.

    Yet, pointing to the possible slowdown in the doubling of transistors in an integrated circuit — or Moore’s Law — “we need new approaches to meet this compute demand,” said Sam Naffziger, AMD senior vice president, corporate fellow, and product technology architect. Naffziger spoke of addressing performance “overkill.” For example, “we’re finding in the gaming and machine learning space we can make use of lower-precision math to deliver an image that looks just as good with 16-bit computations as with 32-bit computations, and instead of legacy 32b math to train AI networks, we can use lower-energy 8b or 16b computations.”
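    Naffziger’s lower-precision point, and the earlier discussion of sparsity, can be made concrete in a few lines of NumPy. This toy sketch shows only the storage side of the savings (the per-operation energy gains depend on the hardware) and is not any vendor’s implementation:

```python
import numpy as np

# Lower precision: the same weights in fp16 take half the bytes of fp32.
w32 = np.array([0.0, 1.5, 0.0, -2.25], dtype=np.float32)
w16 = w32.astype(np.float16)          # 16-bit halves the storage
bytes_saved = w32.nbytes - w16.nbytes

# Sparsity: store only the nonzero entries of a mostly-zero matrix,
# COO-style (values plus their row/column coordinates).
dense = np.zeros((1000, 1000), dtype=np.float32)
dense[0, 0] = 3.0
dense[10, 20] = -1.0
rows, cols = np.nonzero(dense)
values = dense[rows, cols]
compressed_bytes = values.nbytes + rows.nbytes + cols.nbytes
```

    Here the 4-million-byte dense matrix compresses to a few dozen bytes; real neural-network sparsity is less extreme, but the principle, skipping the zeros, is the same.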

    Visual representation of the conversation during the workshop session entitled “Wireless, networked, and distributed systems.”

    Image: Haley McDevitt


    Other presenters singled out compute at the edge as a prime energy hog.

    “We also have to change the devices that are put in our customers’ hands,” said Heidi Hemmer, senior vice president of engineering at Verizon. As we think about how we use energy, it is common to jump to data centers — but it really starts at the device itself, and the energy that the devices use. Then, we can think about home web routers, distributed networks, the data centers, and the hubs. “The devices are actually the least energy-efficient out of that,” concluded Hemmer.

    Some presenters had different perspectives. Several called for developing dedicated silicon chipsets for efficiency. However, panel moderator Muriel Medard, the Cecil H. Green Professor in EECS, described research at MIT, Boston University, and Maynooth University on the GRAND (Guessing Random Additive Noise Decoding) chip, saying, “rather than having obsolescence of chips as the new codes come in and in different standards, you can use one chip for all codes.”

    Whatever the chip or new algorithm, Helen Greiner, CEO of Tertill (a weeding robot) and co-founder of iRobot, emphasized that to get products to market, “we have to learn to go away from wanting to get the absolute latest and greatest, the most advanced processor that usually is more expensive.” She added, “I like to say robot demos are a dime a dozen, but robot products are very infrequent.”

    Greiner emphasized that consumers can play a role in pushing for more energy-efficient products — just as drivers began to demand electric cars.

    Dean also sees an environmental role for the end user. “We have enabled our cloud customers to select which cloud region they want to run their computation in, and they can decide how important it is that they have a low carbon footprint,” he said, also citing other interfaces that might allow consumers to decide which air flights are more efficient or what impact installing a solar panel on their home would have.

    However, Scharp said, “Prolonging the life of your smartphone or tablet is really the best climate action you can do if you want to reduce your digital carbon footprint.”

    Facing increasing demands

    Despite their optimism, the presenters acknowledged the world faces increasing compute demand from machine learning, AI, gaming, and, especially, blockchain. Panel moderator Vivienne Sze, associate professor in EECS, noted the conundrum.

    “We can do a great job in making computing and communication really efficient. But there is this tendency that once things are very efficient, people use more of it, and this might result in an overall increase in the usage of these technologies, which will then increase our overall carbon footprint,” Sze said.

    Presenters saw great potential in academic/industry partnerships, particularly from research efforts on the academic side. “By combining these two forces together, you can really amplify the impact,” concluded Gousev.

    Presenters at the Climate Implications of Computing and Communications workshop also included: Joel Emer, professor of the practice in EECS at MIT; David Perreault, the Joseph F. and Nancy P. Keithley Professor of EECS at MIT; Jesús del Alamo, MIT Donner Professor and professor of electrical engineering in EECS at MIT; Heike Riel, IBM Fellow and head of science and technology at IBM; and Takashi Ando, principal research staff member at IBM Research.

    The recorded workshop sessions are available on YouTube.

  •

    Machine learning, harnessed to extreme computing, aids fusion energy development

    MIT research scientists Pablo Rodriguez-Fernandez and Nathan Howard have just completed one of the most demanding calculations in fusion science — predicting the temperature and density profiles of a magnetically confined plasma via first-principles simulation of plasma turbulence. Solving this problem by brute force is beyond the capabilities of even the most advanced supercomputers. Instead, the researchers used an optimization methodology developed for machine learning to dramatically reduce the CPU time required while maintaining the accuracy of the solution.

    Fusion energy

    Fusion offers the promise of unlimited, carbon-free energy through the same physical process that powers the sun and the stars. It requires heating the fuel to temperatures above 100 million degrees, well above the point where the electrons are stripped from their atoms, creating a form of matter called plasma. On Earth, researchers use strong magnetic fields to isolate and insulate the hot plasma from ordinary matter. The stronger the magnetic field, the better the quality of the insulation that it provides.

    Rodriguez-Fernandez and Howard have focused on predicting the performance expected in the SPARC device, a compact, high-magnetic-field fusion experiment, currently under construction by the MIT spin-out company Commonwealth Fusion Systems (CFS) and researchers from MIT’s Plasma Science and Fusion Center. While the calculation required an extraordinary amount of computer time, over 8 million CPU-hours, what was remarkable was not how much time was used, but how little, given the daunting computational challenge.

    The computational challenge of fusion energy

    Turbulence, which is the mechanism for most of the heat loss in a confined plasma, is one of the science’s grand challenges and the greatest problem remaining in classical physics. The equations that govern fusion plasmas are well known, but analytic solutions are not possible in the regimes of interest, where nonlinearities are important and solutions encompass an enormous range of spatial and temporal scales. Scientists resort to solving the equations by numerical simulation on computers. It is no accident that fusion researchers have been pioneers in computational physics for the last 50 years.

    One of the fundamental problems for researchers is reliably predicting plasma temperature and density given only the magnetic field configuration and the externally applied input power. In confinement devices like SPARC, the external power and the heat input from the fusion process are lost through turbulence in the plasma. The turbulence itself is driven by the difference in the extremely high temperature of the plasma core and the relatively cool temperatures of the plasma edge (merely a few million degrees). Predicting the performance of a self-heated fusion plasma therefore requires a calculation of the power balance between the fusion power input and the losses due to turbulence.

    These calculations generally start by assuming plasma temperature and density profiles at a particular location, then computing the heat transported locally by turbulence. However, a useful prediction requires a self-consistent calculation of the profiles across the entire plasma, which includes both the heat input and turbulent losses. Directly solving this problem is beyond the capabilities of any existing computer, so researchers have developed an approach that stitches the profiles together from a series of demanding but tractable local calculations. This method works, but since the heat and particle fluxes depend on multiple parameters, the calculations can be very slow to converge.

    However, techniques emerging from the field of machine learning are well suited to optimizing just such a calculation. Starting with a set of computationally intensive local calculations run with the full-physics, first-principles CGYRO code (provided by a team from General Atomics led by Jeff Candy), Rodriguez-Fernandez and Howard fit a surrogate mathematical model, which was used to explore and optimize the search within the parameter space. The results of the optimization were compared to the exact calculations at each optimum point, and the system was iterated to a desired level of accuracy. The researchers estimate that the technique reduced the number of runs of the CGYRO code by a factor of four.
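    The loop described above (expensive runs seed a cheap surrogate, the surrogate proposes an optimum, and one more expensive run checks it) can be sketched with a toy objective. The quadratic surrogate and the convergence test here are illustrative assumptions, not the actual method details of the Nuclear Fusion paper:

```python
import numpy as np

def expensive_simulation(x):
    # Stand-in for a costly first-principles run (the real work used
    # the CGYRO turbulence code); this toy has its minimum at x = 2.
    return (x - 2.0) ** 2 + 1.0

# 1. Seed the surrogate with a few expensive evaluations.
xs = [0.0, 1.0, 4.0]
ys = [expensive_simulation(x) for x in xs]

for _ in range(5):
    # 2. Fit a cheap surrogate (here a quadratic) to the samples so far.
    a, b, c = np.polyfit(xs, ys, 2)
    x_opt = -b / (2 * a)              # surrogate's predicted optimum
    # 3. Verify the surrogate's optimum with one more expensive run;
    #    if surrogate and simulation disagree, add the point and refit.
    y_opt = expensive_simulation(x_opt)
    if abs(y_opt - np.polyval([a, b, c], x_opt)) < 1e-8:
        break
    xs.append(x_opt)
    ys.append(y_opt)
```

    The savings come from the same place as in the real calculation: most of the exploration happens on the cheap surrogate, and the expensive code is called only to verify and refine its predictions.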

    New approach increases confidence in predictions

    This work, described in a recent publication in the journal Nuclear Fusion, is the highest-fidelity calculation ever made of the core of a fusion plasma. It refines and confirms predictions made with less demanding models. Professor Jonathan Citrin, of the Eindhoven University of Technology and leader of the fusion modeling group at DIFFER, the Dutch Institute for Fundamental Energy Research, commented: “The work significantly accelerates our capabilities in more routinely performing ultra-high-fidelity tokamak scenario prediction. This algorithm can help provide the ultimate validation test of machine design or scenario optimization carried out with faster, more reduced modeling, greatly increasing our confidence in the outcomes.”

    In addition to increasing confidence in the fusion performance of the SPARC experiment, this technique provides a roadmap to check and calibrate reduced physics models, which run with a small fraction of the computational power. Such models, cross-checked against the results generated from turbulence simulations, will provide a reliable prediction before each SPARC discharge, helping to guide experimental campaigns and improving the scientific exploitation of the device. It can also be used to tweak and improve even simple data-driven models, which run extremely quickly, allowing researchers to sift through enormous parameter ranges to narrow down possible experiments or possible future machines.

    The research was funded by CFS, with computational support from the National Energy Research Scientific Computing Center, a U.S. Department of Energy Office of Science User Facility.

  •

    Using excess heat to improve electrolyzers and fuel cells

    Reducing the use of fossil fuels will have unintended consequences for the power-generation industry and beyond. For example, many industrial chemical processes use fossil-fuel byproducts as precursors to things like asphalt, glycerine, and other important chemicals. One solution to reduce the impact of the loss of fossil fuels on industrial chemical processes is to store and use the heat that nuclear fission produces. New MIT research has dramatically improved a way to put that heat toward generating chemicals through a process called electrolysis. 

    Electrolyzers are devices that use electricity to split water (H2O) and generate molecules of hydrogen (H2) and oxygen (O2). Hydrogen is used in fuel cells to generate electricity and drive electric cars or drones or in industrial operations like the production of steel, ammonia, and polymers. Electrolyzers can also take in water and carbon dioxide (CO2) and produce oxygen and ethylene (C2H4), a chemical used in polymers and elsewhere.
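    The electron bookkeeping behind water splitting can be made concrete with Faraday’s law, a textbook relation: producing one H2 molecule consumes two electrons, so the hydrogen yield follows directly from the charge passed. The example current and duration below are arbitrary illustrations, not figures from the MIT study:

```python
# Faraday's law for water electrolysis: moles of H2 = charge / (2 * F),
# since each H2 molecule requires two electrons.
F = 96485.0          # Faraday constant, coulombs per mole of electrons

def hydrogen_moles(current_amps: float, seconds: float) -> float:
    charge = current_amps * seconds   # total charge in coulombs
    return charge / (2.0 * F)

# e.g. a cell drawing 10 A for one hour:
mol_h2 = hydrogen_moles(10.0, 3600.0)
grams_h2 = mol_h2 * 2.016             # molar mass of H2 in g/mol
```

    This ideal yield assumes perfect current efficiency; real cells fall short of it, which is why the interface improvements described below matter.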

    There are three main types of electrolyzers. The first type works at room temperature, but it has downsides: it’s inefficient and requires rare metals, such as platinum. A second type is more efficient but runs at high temperatures, above 700 degrees Celsius. But metals corrode at that temperature, and the devices need expensive sealing and insulation. The third type would be a Goldilocks solution for nuclear heat if it were perfected, running at 300-600 C and requiring mostly cheap materials like stainless steel. These cells have never operated as efficiently as theory says they should. The new work, published this month in Nature, both illuminates the problem and offers a solution.

    A sandwich mystery

    The intermediate-temperature devices use what are called protonic ceramic electrochemical cells. Each cell is a sandwich, with a dense electrolyte layered between two porous electrodes. Water vapor is pumped into the top electrode. A wire on the side connects the two electrodes, and externally generated electricity runs from the top to the bottom. The voltage pulls electrons out of the water, which splits the molecule, releasing oxygen. A hydrogen atom without an electron is just a proton. The protons get pulled through the electrolyte to rejoin with the electrons at the bottom electrode and form H2 molecules, which are then collected.

    On its own, the electrolyte in the middle, made mainly of barium, cerium, and zirconium, conducts protons very well. “But when we put the same material into this three-layer device, the proton conductivity of the full cell is pretty bad,” says Yanhao Dong, a postdoc in MIT’s Department of Nuclear Science and Engineering and a paper co-author. “Its conductivity is only about 50 percent of the bulk form’s. We wondered why there’s an inconsistency here.”

    A couple of clues pointed them in the right direction. First, if they don’t prepare the cell very carefully, the top layer, only about 20 microns (0.02 millimeters) thick, doesn’t stay attached. “Sometimes if you use just Scotch tape, it will peel off,” Dong says. Second, when they looked at a cross section of a device using a scanning electron microscope, they saw that the top surface of the electrolyte layer was flat, whereas the bottom surface of the porous electrode sitting on it was bumpy, and the two came into contact in only a few places. They didn’t bond well. That precarious interface leads to both structural delamination and poor proton passage from the electrode to the electrolyte.

    Acidic solution

    The solution turned out to be simple: researchers roughed up the top of the electrolyte. Specifically, they applied acid for 10 minutes, which etched grooves into the surface. Ju Li, the Battelle Energy Alliance Professor in Nuclear Engineering and professor of materials science and engineering at MIT, and a paper co-author, likens it to sandblasting a surface before applying paint to increase adhesion. Their acid-treated cells produced about 200 percent more hydrogen per area at 1.5 volts at 600 C than did any previous cell of its type, and worked well down to 350 C with very little performance decay over extended operation. 

    “The authors reported a surprisingly simple yet highly effective surface treatment to dramatically improve the interface,” says Liangbing Hu, the director of the Center for Materials Innovation at the Maryland Energy Innovation Institute, who was not involved in the work. He calls the cell performance “exceptional.”

    “We are excited and surprised” by the results, Dong says. “The engineering solution seems quite simple. And that’s actually good, because it makes it very applicable to real applications.” In a practical product, many such cells would be stacked together to form a module. MIT’s partner in the project, Idaho National Laboratory, is very strong in engineering and prototyping, so Li expects to see electrolyzers based on this technology at scale before too long. “At the materials level, this is a breakthrough that shows that at a real-device scale you can work at this sweet spot of temperature of 350 to 600 degrees Celsius for nuclear fission and fusion reactors,” he says.

    “Reduced operating temperature enables cheaper materials for the large-scale assembly, including the stack,” says Idaho National Laboratory researcher and paper co-author Dong Ding. “The technology operates within the same temperature range as several important, current industrial processes, including ammonia production and CO2 reduction. Matching these temperatures will expedite the technology’s adoption within the existing industry.”

    “This is very significant for both Idaho National Lab and us,” Li adds, “because it bridges nuclear energy and renewable electricity.” He notes that the technology could also help fuel cells, which are basically electrolyzers run in reverse, using green hydrogen or hydrocarbons to generate electricity. According to Wei Wu, a materials scientist at Idaho National Laboratory and a paper co-author, “this technique is quite universal and compatible with other solid electrochemical devices.”

    Dong says it’s rare for a paper to advance both science and engineering to such a degree. “We are happy to combine those together and get both very good scientific understanding and also very good real-world performance.”

    This work, done in collaboration with Idaho National Laboratory, New Mexico State University, and the University of Nebraska–Lincoln, was funded, in part, by the U.S. Department of Energy.

  •

    Developing electricity-powered, low-emissions alternatives to carbon-intensive industrial processes

    On April 11, 2022, MIT announced five multiyear flagship projects in the first-ever Climate Grand Challenges, a new initiative to tackle complex climate problems and deliver breakthrough solutions to the world as quickly as possible. This is the second article in a five-part series highlighting the most promising concepts to emerge from the competition, and the interdisciplinary research teams behind them.

    One of the biggest leaps that humankind could take to drastically lower greenhouse gas emissions globally would be the complete decarbonization of industry. But without finding low-cost, environmentally friendly substitutes for industrial materials, the traditional production of steel, cement, ammonia, and ethylene will continue pumping out billions of tons of carbon annually; these sectors alone are responsible for at least one third of society’s global greenhouse gas emissions. 

    A major problem is that industrial manufacturers, whose success depends on reliable, cost-efficient, and large-scale production methods, are too heavily invested in processes that have historically been powered by fossil fuels to quickly switch to new alternatives. It’s a machine that kicked on more than 100 years ago, and which MIT electrochemical engineer Yet-Ming Chiang says we can’t shut off without major disruptions to the world’s massive supply chain of these materials. What’s needed, Chiang says, is a broader, collaborative clean energy effort that takes “targeted fundamental research, all the way through to pilot demonstrations that greatly lowers the risk for adoption of new technology by industry.”

    This would be a new approach to decarbonization of industrial materials production that relies on largely unexplored but cleaner electrochemical processes. New production methods could be optimized and integrated into the industrial machine to make it run on low-cost, renewable electricity in place of fossil fuels. 

    Recognizing this, Chiang, the Kyocera Professor in the Department of Materials Science and Engineering, teamed with research collaborator Bilge Yildiz, the Breene M. Kerr Professor of Nuclear Science and Engineering and professor of materials science and engineering, with key input from Karthish Manthiram, visiting professor in the Department of Chemical Engineering, to submit a project proposal to the MIT Climate Grand Challenges. Their plan: to create an innovation hub on campus that would bring together MIT researchers individually investigating decarbonization of steel, cement, ammonia, and ethylene under one roof, combining research equipment and directly collaborating on new methods to produce these four key materials.

    Many researchers across MIT have already signed on to join the effort, including Antoine Allanore, associate professor of metallurgy, who specializes in the development of sustainable materials and manufacturing processes, and Elsa Olivetti, the Esther and Harold E. Edgerton Associate Professor in the Department of Materials Science and Engineering, who is an expert in materials economics and sustainability. Other MIT faculty currently involved include Fikile Brushett, Betar Gallant, Ahmed Ghoniem, William Green, Jeffrey Grossman, Ju Li, Yuriy Román-Leshkov, Yang Shao-Horn, Robert Stoner, Yogesh Surendranath, Timothy Swager, and Kripa Varanasi.

    “The team we brought together has the expertise needed to tackle these challenges, including electrochemistry — using electricity to decarbonize these chemical processes — and materials science and engineering, process design and scale-up technoeconomic analysis, and system integration, which is all needed for this to go out from our labs to the field,” says Yildiz.

    Selected from a field of more than 100 proposals, their Center for Electrification and Decarbonization of Industry (CEDI) will be the first such institute worldwide dedicated to testing and scaling the most innovative and promising technologies in sustainable chemicals and materials. CEDI will work to facilitate rapid translation of lab discoveries into affordable, scalable industry solutions, with potential to offset as much as 15 percent of greenhouse gas emissions. The team estimates that some CEDI projects already underway could be commercialized within three years.

    “The real timeline is as soon as possible,” says Chiang.

    To achieve CEDI’s ambitious goals, a physical location is key, staffed with permanent faculty, as well as undergraduates, graduate students, and postdocs. Yildiz says the center’s success will depend on engaging student researchers to carry forward with research addressing the biggest ongoing challenges to decarbonization of industry.

    “We are training young scientists, students, on the learned urgency of the problem,” says Yildiz. “We empower them with the skills needed, and even if an individual project does not find the implementation in the field right away, at least, we would have trained the next generation that will continue to go after them in the field.”

    Chiang’s background in electrochemistry showed him how the efficiency of cement production could benefit from adopting clean electricity sources, and Yildiz’s work on ethylene, the source of plastic and one of industry’s most valued chemicals, has revealed overlooked cost benefits to switching to electrochemical processes with less expensive starting materials. With industry partners, they hope to continue these lines of fundamental research along with Allanore, who is focused on electrifying steel production, and Manthiram, who is developing new processes for ammonia. Olivetti will focus on understanding risks and barriers to implementation. This multilateral approach aims to speed up the timeline to industry adoption of new technologies at the scale needed for global impact.

    “One of the points of emphasis in this whole center is going to be applying technoeconomic analysis of what it takes to be successful at a technical and economic level, as early in the process as possible,” says Chiang.

    The impact of large-scale industry adoption of clean energy sources in these four key areas that CEDI plans to target first would be profound, as these sectors are currently responsible for 7.5 billion tons of emissions annually. There is the potential for even greater impact on emissions as new knowledge is applied to other industrial products beyond the initial four targets of steel, cement, ammonia, and ethylene. Meanwhile, the center will stand as a hub to attract new industry, government stakeholders, and research partners to collaborate on urgently needed solutions, both newly arising and long overdue.

    When Chiang and Yildiz first met to discuss ideas for MIT Climate Grand Challenges, they decided they wanted to build a climate research center that functioned unlike any other to help pivot large industry toward decarbonization. Beyond considering how new solutions will impact industry’s bottom line, CEDI will also investigate unique synergies that could arise from the electrification of industry, like processes that would create new byproducts that could be the feedstock to other industry processes, reducing waste and increasing efficiencies in the larger system. And because industry is so good at scaling, those added benefits would be widespread, finally replacing century-old technologies with critical updates designed to improve production and markedly reduce industry’s carbon footprint sooner rather than later.

    “Everything we do, we’re going to try to do with urgency,” Chiang says. “The fundamental research will be done with urgency, and the transition to commercialization, we’re going to do with urgency.”

  •

    MIT announces five flagship projects in first-ever Climate Grand Challenges competition

    MIT today announced the five flagship projects selected in its first-ever Climate Grand Challenges competition. These multiyear projects will define a dynamic research agenda focused on unraveling some of the toughest unsolved climate problems and bringing high-impact, science-based solutions to the world on an accelerated basis.

    Representing the most promising concepts to emerge from the two-year competition, the five flagship projects will receive additional funding and resources from MIT and others to develop their ideas and swiftly transform them into practical solutions at scale.

    “Climate Grand Challenges represents a whole-of-MIT drive to develop game-changing advances to confront the escalating climate crisis, in time to make a difference,” says MIT President L. Rafael Reif. “We are inspired by the creativity and boldness of the flagship ideas and by their potential to make a significant contribution to the global climate response. But given the planet-wide scale of the challenge, success depends on partnership. We are eager to work with visionary leaders in every sector to accelerate this impact-oriented research, implement serious solutions at scale, and inspire others to join us in confronting this urgent challenge for humankind.”

    Brief descriptions of the five Climate Grand Challenges flagship projects are provided below.

    Bringing Computation to the Climate Challenge

    This project leverages advances in artificial intelligence, machine learning, and data sciences to improve the accuracy of climate models and make them more useful to a variety of stakeholders — from communities to industry. The team is developing a digital twin of the Earth that harnesses more data than ever before to reduce and quantify uncertainties in climate projections.

    Research leads: Raffaele Ferrari, the Cecil and Ida Green Professor of Oceanography in the Department of Earth, Atmospheric and Planetary Sciences, and director of the Program in Atmospheres, Oceans, and Climate; and Noelle Eckley Selin, director of the Technology and Policy Program and professor with a joint appointment in the Institute for Data, Systems, and Society and the Department of Earth, Atmospheric and Planetary Sciences

    Center for Electrification and Decarbonization of Industry

    This project seeks to reinvent and electrify the processes and materials behind hard-to-decarbonize industries like steel, cement, ammonia, and ethylene production. A new innovation hub will perform targeted fundamental research and engineering with urgency, pushing the technological envelope on electricity-driven chemical transformations.

    Research leads: Yet-Ming Chiang, the Kyocera Professor of Materials Science and Engineering, and Bilge Yıldız, the Breene M. Kerr Professor in the Department of Nuclear Science and Engineering and professor in the Department of Materials Science and Engineering

    Preparing for a new world of weather and climate extremes

    This project addresses key gaps in knowledge about intensifying extreme events such as floods, hurricanes, and heat waves, and quantifies their long-term risk in a changing climate. The team is developing a scalable climate-change adaptation toolkit to help vulnerable communities and low-carbon energy providers prepare for these extreme weather events.

    Research leads: Kerry Emanuel, the Cecil and Ida Green Professor of Atmospheric Science in the Department of Earth, Atmospheric and Planetary Sciences and co-director of the MIT Lorenz Center; Miho Mazereeuw, associate professor of architecture and urbanism in the Department of Architecture and director of the Urban Risk Lab; and Paul O’Gorman, professor in the Program in Atmospheres, Oceans, and Climate in the Department of Earth, Atmospheric and Planetary Sciences

    The Climate Resilience Early Warning System

    The CREWSnet project seeks to reinvent climate change adaptation with a novel forecasting system that empowers underserved communities to interpret local climate risk, proactively plan for their futures incorporating resilience strategies, and minimize losses. CREWSnet will initially be demonstrated in southwestern Bangladesh, serving as a model for similarly threatened regions around the world.

    Research leads: John Aldridge, assistant leader of the Humanitarian Assistance and Disaster Relief Systems Group at MIT Lincoln Laboratory, and Elfatih Eltahir, the H.M. King Bhumibol Professor of Hydrology and Climate in the Department of Civil and Environmental Engineering

    Revolutionizing agriculture with low-emissions, resilient crops

    This project works to revolutionize the agricultural sector with climate-resilient crops and fertilizers that can dramatically reduce greenhouse gas emissions from food production.

    Research lead: Christopher Voigt, the Daniel I.C. Wang Professor in the Department of Biological Engineering

    “As one of the world’s leading institutions of research and innovation, it is incumbent upon MIT to draw on our depth of knowledge, ingenuity, and ambition to tackle the hard climate problems now confronting the world,” says Richard Lester, MIT associate provost for international activities. “Together with collaborators across industry, finance, community, and government, the Climate Grand Challenges teams are looking to develop and implement high-impact, path-breaking climate solutions rapidly and at a grand scale.”

    The initial call for ideas in 2020 yielded nearly 100 letters of interest from almost 400 faculty members and senior researchers, representing 90 percent of MIT departments. After an extensive evaluation, 27 finalist teams received a total of $2.7 million to develop comprehensive research and innovation plans. The projects address four broad research themes.

    To select the winning projects, research plans were reviewed by panels of international experts representing relevant scientific and technical domains as well as experts in processes and policies for innovation and scalability.

    “In response to climate change, the world really needs to do two things quickly: deploy the solutions we already have much more widely, and develop new solutions that are urgently needed to tackle this intensifying threat,” says Maria Zuber, MIT vice president for research. “These five flagship projects exemplify MIT’s strong determination to bring its knowledge and expertise to bear in generating new ideas and solutions that will help solve the climate problem.”

    “The Climate Grand Challenges flagship projects set a new standard for inclusive climate solutions that can be adapted and implemented across the globe,” says MIT Chancellor Melissa Nobles. “This competition propels the entire MIT research community — faculty, students, postdocs, and staff — to act with urgency around a worsening climate crisis, and I look forward to seeing the difference these projects can make.”

    “MIT’s efforts on climate research amid the climate crisis was a primary reason that I chose to attend MIT, and remains a reason that I view the Institute favorably. MIT has a clear opportunity to be a thought leader in the climate space in our own MIT way, which is why CGC fits in so well,” says senior Megan Xu, who served on the Climate Grand Challenges student committee and is studying ways to make the food system more sustainable.

    The Climate Grand Challenges competition is a key initiative of “Fast Forward: MIT’s Climate Action Plan for the Decade,” which the Institute published in May 2021. Fast Forward outlines MIT’s comprehensive plan for helping the world address the climate crisis. It consists of five broad areas of action: sparking innovation, educating future generations, informing and leveraging government action, reducing MIT’s own climate impact, and uniting and coordinating all of MIT’s climate efforts. More

  • in

    Finding the questions that guide MIT fusion research

    “One of the things I learned was, doing good science isn’t so much about finding the answers as figuring out what the important questions are.”

    As Martin Greenwald retires from the responsibilities of senior scientist and deputy director of the MIT Plasma Science and Fusion Center (PSFC), he reflects on his almost 50 years of science study, 43 of them as a researcher at MIT, pursuing the question of how to make the carbon-free energy of fusion a reality.

    Most of Greenwald’s important questions about fusion began after graduating from MIT with a BS in both physics and chemistry. Beginning graduate work at the University of California at Berkeley, he felt compelled to learn more about fusion as an energy source that could have “a real societal impact.” At the time, researchers were exploring new ideas for devices that could create and confine fusion plasmas. Greenwald worked on Berkeley’s “alternate concept” TORMAC, a Toroidal Magnetic Cusp. “It didn’t work out very well,” he laughs. “The first thing I was known for was making the measurements that shut down the program.”

    Believing the temperature of the plasma generated by the device would not be as high as his group leader expected, Greenwald developed hardware that could measure the low temperatures predicted by his own “back of the envelope calculations.” As he anticipated, his measurements showed that “this was not a fusion plasma; this was hardly a confined plasma at all.”

    With a PhD from Berkeley, Greenwald returned to MIT for a research position at the PSFC, attracted by the center’s “esprit de corps.”

    He arrived in time to participate in the final experiments on Alcator A, the first in a series of tokamaks built at MIT, all characterized by compact size and featuring high-field magnets. The tokamak design was then becoming favored as the most effective route to fusion: its doughnut-shaped vacuum chamber, surrounded by electromagnets, could confine the turbulent plasma long enough, while increasing its heat and density, to make fusion occur.

    Alcator A showed that energy confinement time improves with increasing plasma density. MIT’s succeeding device, Alcator C, was designed to use higher magnetic fields, boosting expectations that it would reach higher densities and better confinement. To attain these goals, however, Greenwald had to pursue a new technique that increased density by injecting pellets of frozen fuel into the plasma, a method he likens to throwing “snowballs in hell.” This work was notable for creating a new regime of enhanced plasma confinement on Alcator C. In those experiments, a confined plasma surpassed for the first time one of the two Lawson criteria — the minimum required value for the product of the plasma density and confinement time — for making net power from fusion. Meeting this criterion had been a milestone goal for fusion research since John Lawson published his analysis in 1957.
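    For context, the criterion the article describes in words can be written, in a standard textbook form for deuterium-tritium fuel (the formula itself does not appear in the original article), as:

    ```latex
    % Lawson criterion (standard textbook form, D-T fusion):
    % the product of the plasma density n and the energy confinement
    % time \tau_E must exceed a threshold set by the temperature T,
    % the fusion reactivity <sigma v>, and the charged-particle
    % energy released per reaction E_ch.
    n \,\tau_E \;\ge\; \frac{12\, k_B T}{E_{\mathrm{ch}}\,\langle \sigma v \rangle}
    ```

    At reactor-relevant temperatures of roughly 10–25 keV, the right-hand side works out to a density-confinement product of order 10^20 s/m^3, which is why n·τ_E is the figure of merit those Alcator C experiments surpassed.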

    Greenwald continued to make a name for himself as part of a larger study into the physics of the Compact Ignition Tokamak — a high-field burning plasma experiment that the U.S. program was proposing to build in the late 1980s. The result, unexpectedly, was a new scaling law, later known as the “Greenwald Density Limit,” and a new theory for the mechanism of the limit. It has been used to accurately predict performance on much larger machines built since.
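    For reference, the scaling law itself (not spelled out in the article) is conventionally written as:

    ```latex
    % Greenwald density limit: the maximum achievable line-averaged
    % electron density n_G (in units of 10^20 m^-3) scales linearly
    % with the plasma current I_p (in MA) and inversely with the
    % square of the tokamak's minor radius a (in m).
    n_G = \frac{I_p}{\pi a^2}
    ```

    Because compact, high-field tokamaks drive a large plasma current through a small cross-section, this limit allows them to operate at high fuel density — the property Whyte highlights for SPARC later in this article.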

    The center’s next tokamak, Alcator C-Mod, started operation in 1993 and ran for more than 20 years, with Greenwald as the chair of its Experimental Program Committee. Larger than Alcator C, the new device supported a highly shaped plasma, strong radiofrequency heating, and an all-metal plasma-facing first wall. All of these would eventually be required in a fusion power system.

    C-Mod proved to be MIT’s most enduring fusion experiment to date. Throughout its two decades of operation, Greenwald contributed not only to the experiments, but to mentoring the next generation. Research scientist Ryan Sweeney notes that “Martin quickly gained my trust as a mentor, in part due to his often casual dress and slightly untamed hair, which are embodiments of his transparency and his focus on what matters. He can quiet a room of PhDs and demand attention not by intimidation, but rather by his calmness and his ability to bring clarity to complicated problems, be they scientific or human in nature.”

    Greenwald worked closely with the group of students who, in PSFC Director Dennis Whyte’s class, came up with the tokamak concept that evolved into SPARC. MIT is now pursuing this compact, high-field tokamak with Commonwealth Fusion Systems, a startup that grew out of the collective enthusiasm for this concept, and the growing realization it could work. Greenwald now heads the Physics Group for the SPARC project at MIT. He has helped confirm the device’s physics basis in order to predict performance and guide engineering decisions.

    “Martin’s multifaceted talents are thoroughly embodied by, and imprinted on, SPARC,” says Whyte. “First, his leadership in its plasma confinement physics validation and publication places SPARC on a firm scientific footing. Second, the impact of the density limit he discovered, which shows that fuel density increases with magnetic field and decreasing the size of the tokamak, is critical in obtaining high fusion power density not just in SPARC, but in future power plants. Third, and perhaps most impressive, is Martin’s mentorship of the SPARC generation of leadership.”

    Greenwald’s expertise and easygoing personality have made him an asset as head of the PSFC Office for Computer Services and group leader for data acquisition and computing, and a sought-after member of many professional committees. He has been an APS Fellow since 2000, and was an APS Distinguished Lecturer in Plasma Physics (2001-02). In 2014 he received a Leadership Award from Fusion Power Associates. He is currently an associate editor for Physics of Plasmas and a member of the Lawrence Livermore National Laboratory Physical Sciences Directorate External Review Committee.

    Although leaving his full-time responsibilities, Greenwald will remain at MIT as a visiting scientist, a role he says will allow him to “stick my nose into everything without being responsible for anything.”

    “At some point in the race you have to hand off the baton,” he says. “And it doesn’t mean you’re not interested in the outcome; and it doesn’t mean you’re just going to walk away into the stands. I want to be there at the end when we succeed.” More

  • in

    Leveraging science and technology against the world’s top problems

    Looking back on nearly a half-century at MIT, Richard K. Lester, associate provost and Japan Steel Industry Professor, sees a “somewhat eccentric professional trajectory.”

    But while his path has been irregular, there has been a clearly defined through line, Lester says: the emergence of new science and new technologies, the potential of these developments to shake up the status quo and address some of society’s most consequential problems, and what the outcomes might mean for America’s place in the world.

    Perhaps no assignment in Lester’s portfolio better captures this theme than the new MIT Climate Grand Challenges competition. Spearheaded by Lester and Maria Zuber, MIT vice president for research, and launched at the height of the pandemic in summer 2020, this initiative is designed to mobilize the entire MIT research community around tackling “the really hard, challenging problems currently standing in the way of an effective global response to the climate emergency,” says Lester. “The focus is on those problems where progress requires developing and applying frontier knowledge in the natural and social sciences and cutting-edge technologies. This is the MIT community swinging for the fences in areas where we have a comparative advantage.”

    This is a passion project for him, not least because it has engaged colleagues from nearly all of MIT’s departments. After nearly 100 initial ideas were submitted by more than 300 faculty, 27 teams were named finalists and received funding to develop comprehensive research and innovation plans in such areas as decarbonizing complex industries; risk forecasting and adaptation; advancing climate equity; and carbon removal, management, and storage. In April, a small subset of this group will become multiyear flagship projects, augmenting the work of existing MIT units that are pursuing climate research. Lester is sunny in the face of these extraordinarily complex problems. “This is a bottom-up effort with exciting proposals, and where the Institute is collectively committed — it’s MIT at its best.”

    Nuclear to the core

    This initiative carries a particular resonance for Lester, who remains deeply engaged in nuclear engineering. “The role of nuclear energy is central and will need to become even more central if we’re to succeed in addressing the climate challenge,” he says. He also acknowledges that for nuclear energy technologies — both fission and fusion — to play a vital role in decarbonizing the economy, they must not just win “in the court of public opinion, but in the marketplace,” he says. “Over the years, my research has sought to elucidate what needs to be done to overcome these obstacles.”

    In fact, Lester has been campaigning for much of his career for a U.S. nuclear innovation agenda, a commitment that takes on increased urgency as the contours of the climate crisis sharpen. He argues for the rapid development and testing of nuclear technologies that can complement the renewable but intermittent energy sources of sun and wind. Whether powerful, large-scale, molten-salt-cooled reactors or small, modular, light water reactors, nuclear batteries or promising new fusion projects, U.S. energy policy must embrace nuclear innovation, says Lester, or risk losing the high-stakes race for a sustainable future.

    Chancing into a discipline

    Lester’s introduction to nuclear science was pure happenstance.

    Born in the English industrial city of Leeds, he grew up in a musical family and played piano, violin, and then viola. “It was a big part of my life,” he says, and for a time, music beckoned as a career. He tumbled into a chemical engineering concentration at Imperial College, London, after taking a job in a chemical factory following high school. “There’s a certain randomness to life, and in my case, it’s reflected in my choice of major, which had a very large impact on my ultimate career.”

    In his second year, Lester talked his way into running a small experiment in the university’s research reactor, on radiation effects in materials. “I got hooked, and began thinking of studying nuclear engineering.” But there were few graduate programs in British universities at the time. Then serendipity struck again. The instructor of Lester’s single humanities course at Imperial had previously taught at MIT, and suggested Lester take a look at the nuclear program there. “I will always be grateful to him (and, indirectly, to MIT’s Humanities program) for opening my eyes to the existence of this institution where I’ve spent my whole adult life,” says Lester.

    He arrived at MIT with the notion of mitigating the harms of nuclear weapons. It was a time when the nuclear arms race “was an existential threat in everyone’s life,” he recalls. He targeted his graduate studies on nuclear proliferation. But he also encountered an electrifying study by MIT meteorologist Jule Charney. “Professor Charney produced one of the first scientific assessments of the effects on climate of increasing CO2 concentrations in the atmosphere, with quantitative estimates that have not fundamentally changed in 40 years.”

    Lester shifted directions. “I came to MIT to work on nuclear security, but stayed in the nuclear field because of the contributions that it can and must make in addressing climate change,” he says.

    Research and policy

    His path forward, Lester believed, would involve applying his science and technology expertise to critical policy problems, grounded in immediate, real-world concerns, and aiming for broad policy impacts. Even as a member of NSE, he joined with colleagues from many MIT departments to study American industrial practices and what was required to make them globally competitive, and then founded MIT’s Industrial Performance Center (IPC). Working at the IPC with interdisciplinary teams of faculty and students on the sources of productivity and innovation, his research took him to many countries at different stages of industrialization, including China, Taiwan, Japan, and Brazil.

    Lester’s wide-ranging work yielded books (including the MIT Press bestseller “Made in America”), advisory positions with governments, corporations, and foundations, and unexpected collaborations. “My interests were always fairly broad, and being at MIT made it possible to team up with world-leading scholars and extraordinary students not just in nuclear engineering, but in many other fields such as political science, economics, and management,” he says.

    Forging cross-disciplinary ties and bringing creative people together around a common goal proved a valuable skill as Lester stepped into positions of ever-greater responsibility at the Institute. He didn’t exactly relish the prospect of a desk job, though. “I religiously avoided administrative roles until I felt I couldn’t keep avoiding them,” he says.

    Today, as associate provost, he tends to MIT’s international activities — a daunting task given increasing scrutiny of research universities’ globe-spanning research partnerships and education of foreign students. But even in the midst of these consuming chores, Lester remains devoted to his home department. “Being a nuclear engineer is a central part of my identity,” he says.

    To students entering the nuclear field nearly 50 years after he did, who are understandably “eager to fix everything that seems wrong immediately,” he has a message: “Be patient. The hard things, the ones that are really worth doing, will take a long time to do.” Putting the climate crisis behind us will take two generations, Lester believes. Current students will start the job, but it will also take the efforts of their children’s generation before it is done.  “So we need you to be energetic and creative, of course, but whatever you do we also need you to be patient and to have ‘stick-to-itiveness’ — and maybe also a moral compass that our generation has lacked.” More

  • in

    Finding her way to fusion

    “I catch myself startling people in public.”

    Zoe Fisher’s animated hands carry part of the conversation as she describes how her naturally loud and expressive laughter turned heads in the streets of Yerevan. There during MIT’s Independent Activities Period (IAP), she was helping teach nuclear science at the American University of Armenia, before returning to MIT to pursue fusion research at the Plasma Science and Fusion Center (PSFC).

    Startling people may simply be in Fisher’s DNA. She admits that when she first arrived at MIT, knowing nothing about nuclear science and engineering (NSE), she chose to join that department’s Freshman Pre-Orientation Program (FPOP) “for the shock value.” It was a choice unexpected by family, friends, and mostly herself. Now in her senior year, a 2021 recipient of NSE’s Irving Kaplan Award for academic achievements by a junior and entering a fifth-year master of science program in nuclear fusion, Fisher credits that original spontaneous impulse for introducing her to a subject she found so compelling that, after exploring multiple possibilities, she had to return to it.

    Fisher’s venture to Armenia, under the guidance of NSE associate professor Areg Danagoulian, is not the only time she has taught overseas with MISTI’s Global Teaching Labs, though it is the first time she has taught nuclear science, not to mention thermodynamics and materials science. During IAP 2020 she was a student teacher at a German high school, teaching life sciences, mathematics, and even English to grades five through 12. And after her first year she explored the transportation industry with a mechanical engineering internship in Tuscany, Italy.

    By the time she was ready to declare her NSE major she had sampled the alternatives both overseas and at home, taking advantage of MIT’s Undergraduate Research Opportunities Program (UROP). Drawn to fusion’s potential as an endless source of carbon-free energy on earth, she decided to try research at the PSFC, to see if the study was a good fit. 

    Much fusion research at MIT has favored heating hydrogen fuel inside a donut-shaped device called a tokamak, creating plasma that is hot and dense enough for fusion to occur. Because plasma will follow magnetic field lines, these devices are wrapped with magnets to keep the hot fuel from damaging the chamber walls.

    Fisher was assigned to SPARC, the PSFC’s new tokamak collaboration with MIT startup Commonwealth Fusion Systems (CFS), which uses a game-changing high-temperature superconducting (HTS) tape to create fusion magnets that minimize tokamak size and maximize performance. Working on a database reference book for SPARC materials, she was finding purpose even in the most repetitive tasks. “Which is how I knew I wanted to stay in fusion,” she laughs.

    Fisher’s latest UROP assignment takes her — literally — deeper into SPARC research. She works in a basement laboratory in building NW13 nicknamed “The Vault,” on a proton accelerator whose name conjures an underworld: DANTE. Supervised by PSFC Director Dennis Whyte and postdoc David Fischer, she is exploring the effects of radiation damage on the thin HTS tape that is key to SPARC’s design, and ultimately to the success of ARC, a prototype working fusion power plant.

    Because repeated bombardment with neutrons produced during the fusion process can diminish the superconducting properties of the HTS tape, it is crucial to test the tape repeatedly. Fisher assists in assembling and testing the experimental setups for irradiating the HTS samples. She recalls that her first project was installing a “shutter” that would allow researchers to control exactly how much radiation reached the tape without having to turn off the entire experiment.

    “You could just push the button — block the radiation — then unblock it. It sounds super simple, but it took many trials. Because first I needed the right size solenoid, and then I couldn’t find a piece of metal that was small enough, and then we needed cryogenic glue…. To this day the actual final piece is made partially of paper towels.”

    She shrugs and laughs. “It worked, and it was the cheapest option.”

    Fisher is always ready to find the fun in fusion. Referring to DANTE as “a really cool dude,” she admits, “He’s perhaps a bit fickle. I may or may not have broken him once.” During a recent IAP seminar, she joined other PSFC UROP students to discuss her research, and expanded on how a mishap can become a gateway to understanding.

    “The grad student I work with and I got to repair almost the entire internal circuit when we blew the fuse — which originally was a really bad thing. But it ended up being great because we figured out exactly how it works.”

    Fisher’s upbeat spirit makes her ideal not only for the challenges of fusion research, but for serving the MIT community. As a student representative for NSE’s Diversity, Equity and Inclusion Committee, she meets monthly with the goal of growing and supporting diversity within the department.

    “This opportunity is impactful because I get my voice, and the voices of my peers, taken seriously,” she says. “Currently, we are spending most of our efforts trying to identify and eliminate hurdles based on race, ethnicity, gender, and income that prevent people from pursuing — and applying to — NSE.”

    To break from the lab and committees, she explores the Charles River as part of MIT’s varsity sailing team, refusing to miss a sunset. She also volunteers as an FPOP mentor, seeking to provide incoming first-years with the kind of experience that will make them want to return to the topic, as she did.

    She looks forward to continuing her studies on the HTS tapes she has been irradiating, proposing to send a current pulse above the critical current through the tape, to possibly anneal any defects from radiation, which would make repairs on future fusion power plants much easier.

    Fisher credits her current path to her UROP mentors and their infectious enthusiasm for the carbon-free potential of fusion energy.

    “UROPing around the PSFC showed me what I wanted to do with my life,” she says. “Who doesn’t want to save the world?” More