More stories

  • Pursuing progress at the nanoscale

    Last fall, a team of five senior undergraduate nuclear engineering students met once a week for dinners where they took turns cooking and debated how to tackle a particularly daunting challenge set forth in their program’s capstone course, 22.033 (Nuclear Systems Design Project).

    In past semesters, students had free rein to identify any real-world problem that interested them and to solve it through team-driven prototyping and design. This past fall worked a little differently. The problem was no less daunting, but the team was assigned a particular design challenge on MIT’s campus. Rising to the challenge, the team spent the semester seeking a feasible way to introduce a highly coveted technology at MIT.

    Housed inside a big blue dome is the MIT Nuclear Reactor Laboratory (NRL). The reactor is used to conduct a wide range of science experiments, but in recent years, there have been multiple attempts to implement an instrument at the reactor that could probe the structure of materials, molecules, and devices. With this technology, researchers could model the structure of a wide range of materials and complex liquids made of polymers or containing nanoscale inhomogeneities that differ from the larger mass. On campus, researchers for the first time could conduct experiments to better understand the properties and functions of anything placed in front of a neutron beam emanating from the reactor core.

    The impact of this would be immense. If the reactor could be adapted to conduct this advanced technique, known as small-angle neutron scattering (SANS), it would open up a whole new world of research at MIT.

    “It’s essentially using the nuclear reactor as an incredibly high-performance camera that researchers from all over MIT would be very interested in using, including nuclear science and engineering, chemical engineering, biological engineering, and materials science, who currently use this tool at other institutions,” says Zachary Hartwig, Nuclear Systems Design Project professor and the MIT Robert N. Noyce Career Development Professor.

    SANS instruments have been installed at fewer than 20 facilities worldwide, and MIT researchers have previously considered implementing the capability at the reactor to help MIT expand community-wide access to SANS. Last fall, this mission went from long-time campus dream to potential reality as it became the design challenge that Hartwig’s students confronted. Despite having no experience with SANS, the team embraced the challenge, taking the first steps to figure out how to bring this technology to campus.

    “I really loved the idea that what we were doing could have a very real impact,” says Zoe Fisher, Nuclear Systems Design Project team member and now graduate nuclear engineering student.

    Each fall, Hartwig uses the course to introduce students to real-world challenges with strict constraints on solutions, and last fall’s project came with plenty of thorny design questions for students to tackle. First was the size limitation posed by the space available at MIT’s reactor. In SANS facilities around the world, the average length of the instrument is 30 meters, but at NRL, the space available is approximately 7.5 meters. Second, these instruments can cost up to $30 million, which is far outside NRL’s proposed budget of $3 million. That meant not only did students need to design an instrument that would work in a smaller space, but also one that could be built for a tenth of the typical cost.
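    To make the length constraint concrete, here is a rough sketch (with hypothetical numbers, not the team's actual design parameters) of why flight path matters in pinhole SANS: the smallest measurable scattering vector, q_min, which sets the largest structures an instrument can resolve, grows as the instrument gets shorter.

```python
import math

def q_min(beam_radius_m, sample_detector_m, wavelength_angstrom):
    """Smallest resolvable scattering vector (in inverse angstroms) for a
    pinhole SANS instrument. The minimum usable scattering angle is set by
    the direct-beam spot on the detector, theta_min ~ beam_radius / distance,
    and q = (4*pi/lambda)*sin(theta/2) ~ 2*pi*theta/lambda at small angles."""
    theta_min = beam_radius_m / sample_detector_m  # radians, small-angle limit
    return 2 * math.pi * theta_min / wavelength_angstrom

# Hypothetical numbers: 5 mm beam spot, 6-angstrom neutrons.
print(q_min(0.005, 15.0, 6.0))  # ~15 m flight path
print(q_min(0.005, 3.5, 6.0))   # ~3.5 m flight path: q_min grows as length shrinks
```

    Under these assumptions, shrinking the flight path from 15 m to 3.5 m raises q_min more than fourfold, one reason a compact instrument needs real design innovation to stay useful.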

    “The challenge was not just implementing one of these instruments,” Hartwig says. “It was whether the students could significantly innovate beyond the ‘traditional’ approach to doing SANS to meet the daunting constraints that we have at the MIT Reactor.”

    Because NRL actually wants to pursue this project, the students had to get creative, and their creative potential was precisely why the idea arose to get them involved, says Jacopo Buongiorno, the director of science and technology at NRL and Tokyo Electric Power Company Professor in Nuclear Engineering. “Involvement in real-world projects that answer questions about feasibility and cost of new technology and capabilities is a key element of a successful undergraduate education at MIT,” Buongiorno says.

    Students say it would have been impossible to tackle the problem without the help of co-instructor Boris Khaykovich, a research scientist at NRL who specializes in neutron instrumentation.

    Over the past two decades, Khaykovich has watched SANS become the most popular technique for analyzing material structure. As competition for beam time at the few existing facilities intensified, access declined; today only the experiments that pass the most stringent review get beam time. Khaykovich hopes to improve access at MIT by designing an instrument suited to the majority of run-of-the-mill experiments, even if it is not as powerful as the state-of-the-art national SANS facilities. Such an instrument could still serve a wide range of researchers who currently have few opportunities to pursue SANS experiments.

    “In the U.S., we don’t have a simple, small, day-to-day SANS instrument,” Khaykovich says.

    With Khaykovich’s help, nuclear engineering undergraduate student Liam Hines says his team was able to go much further with their assessment than they would have starting from scratch with no background in SANS. The project was unlike anything they’d ever been asked to do as MIT students, and for students like Hines, who contributed to NRL research throughout his time on campus, it hit close to home. “We were imagining this thing that might be designed at MIT,” Hines says.

    Fisher and Hines were joined by undergraduate nuclear engineering student team members Francisco Arellano, Jovier Jimenez, and Brendan Vaughan. Together, they devised a design that surprised both Khaykovich and Hartwig, identifying creative solutions that overcame all limitations and significantly reduced cost.

    Their team’s final project featured an adaptation of a conical design that was recently tested experimentally in Japan but is not yet in general use. The conical geometry allowed them to maximize precision while working within the other constraints, resulting in an instrument design that exceeded Hartwig’s expectations. The students also showed the feasibility of calibrating the scattering data with an alternative, low-cost, glass-based neutron detector. Because it avoids a traditional detector based on helium-3, which is increasingly scarce and exorbitantly expensive, such a detector would dramatically reduce cost and increase availability. Their final presentation indicated the day-to-day SANS instrument could be built at only 4.5 meters long, at an estimated cost of less than $1 million.

    Khaykovich credited the students for their enthusiasm, bouncing ideas off each other and exploring as much terrain as possible by interviewing experts who implemented SANS at other facilities. “They showed quite a perseverance and an ability to go deep into a very unfamiliar territory for them,” Khaykovich says.

    Hines says that Hartwig emphasized the importance of fielding expert opinions to more quickly discover optimal solutions. Fisher says that based on their research, if their design is funded, it would make SANS “more accessible to research for the sake of knowledge,” rather than dominated by industry research.

    Hartwig and Khaykovich agreed the students’ final project results showed a baseline of how MIT could pursue SANS technology cheaply, and when NRL proceeds with its own design process, Hartwig says, “The students’ work might actually change the cost of the feasibility of this at MIT in a way that if we hadn’t run the class, we would never have thought about doing.”

    Buongiorno says as they move forward with the project, NRL staff will consult students’ findings.

    “Indeed, the students developed original technical approaches, which are now being further explored by the NRL staff and may ultimately lead to the deployment of this new important capability on the MIT campus,” Buongiorno says.

    Hartwig says it’s a goal of the Nuclear Systems Design Project course to empower students to learn how to lead teams and embrace challenges, so they can be effective leaders advancing novel solutions in research and industry. “I think it helps teach people to be agile, to be flexible, to have confidence that they can actually go off and learn what they don’t know and solve problems they may think are bigger than themselves,” he says.

    It’s common for past classes of Nuclear Systems Design Project students to continue working on ideas beyond the course, and some students have even launched companies from their project research. What’s less common is for Hartwig’s students to actively serve as engineers pointed to a particular campus problem that’s expected to be resolved in the next few years.

    “In this case, they’re actually working on something real,” Hartwig says. “Their ideas are going to very much influence what we hope will be a facility that gets built at the reactor.”

    For students, it was exciting to inform a major instrument proposal that will soon be submitted to federal funding agencies, and for Hines, it became a chance to make his mark at NRL.

    “This is a lab I’ve been contributing to my entire time at MIT, and then through this project, I finished my time at MIT contributing in a much larger sense,” Hines says.

  • Evan Leppink: Seeking a way to better stabilize the fusion environment

    “Fusion energy was always one of those kind-of sci-fi technologies that you read about,” says nuclear science and engineering PhD candidate Evan Leppink. He’s recalling the time before fusion became a part of his daily hands-on experience at MIT’s Plasma Science and Fusion Center, where he is studying a unique way to drive current in a tokamak plasma using radiofrequency (RF) waves. 

    Now, an award from the U.S. Department of Energy’s (DOE) Office of Science Graduate Student Research (SCGSR) Program will support his work with a 12-month residency at the DIII-D National Fusion Facility in San Diego, California.

    Like all tokamaks, DIII-D generates hot plasma inside a doughnut-shaped vacuum chamber wrapped with magnets. Because plasma will follow magnetic field lines, tokamaks are able to contain the turbulent plasma fuel as it gets hotter and denser, keeping it away from the edges of the chamber where it could damage the wall materials. A key part of the tokamak concept is that part of the magnetic field is created by electrical currents in the plasma itself, which helps to confine and stabilize the configuration. Researchers often launch high-power RF waves into tokamaks to drive that current.

    Leppink will be contributing to research, led by his MIT advisor Steve Wukitch, that pursues launching RF waves in DIII-D using a unique compact antenna placed on the tokamak center column. Typically, antennas are placed inside the tokamak on the outer edge of the doughnut, farthest from the central hole (or column), primarily because access and installation are easier there. This is known as the “low-field side,” because the magnetic field is lower there than at the central column, the “high-field side.” This MIT-led experiment, for the first time, will mount an antenna on the high-field side. There is some theoretical evidence that placing the wave launcher there could improve power penetration and current drive efficiency. And because the plasma environment is less harsh on this side, the antenna will survive longer, a factor important for any future power-producing tokamak.

    Leppink’s work on DIII-D focuses specifically on measuring the density of plasmas generated in the tokamak, for which he developed a “reflectometer.” This small antenna launches microwaves into the plasma, which reflect back to the antenna to be measured. The time that it takes for these microwaves to traverse the plasma provides information about the plasma density, allowing researchers to build up detailed density profiles, data critical for injecting RF power into the plasma.

    “Research shows that when we try to inject these waves into the plasma to drive the current, they can lose power as they travel through the edge region of the tokamak, and can even have problems entering the core of the plasma, where we would most like to direct them,” says Leppink. “My diagnostic will measure that edge region on the high-field side near the launcher in great detail, which provides us a way to directly verify calculations or compare actual results with simulation results.”
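    As a rough illustration of the physics behind such a reflectometer (a generic textbook relation, not the diagnostic's actual analysis code): an ordinary-mode microwave of frequency f reflects at the cutoff layer where the local plasma frequency equals f, so each probing frequency maps to a specific electron density.

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
M_E = 9.1093837015e-31   # electron mass, kg
Q_E = 1.602176634e-19    # elementary charge, C

def cutoff_density(freq_hz):
    """Electron density (per cubic meter) at which an O-mode wave of the
    given frequency reflects: the cutoff where the plasma frequency equals
    the wave frequency, n_c = eps0 * m_e * (2*pi*f)**2 / e**2."""
    return EPS0 * M_E * (2 * math.pi * freq_hz) ** 2 / Q_E ** 2

# For example, a 60 GHz probing wave reflects near n_e ~ 4.5e19 m^-3.
print(f"{cutoff_density(60e9):.2e}")
```

    Sweeping the probing frequency and timing the reflections therefore builds up the density profile layer by layer.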

    Although focused on his own research, Leppink has excelled at priming other students for success in their studies and research. In 2021 he received the NSE Outstanding Teaching Assistant and Mentorship Award.

    “The highlights of TA’ing for me were the times when I could watch students go from struggling with a difficult topic to fully understanding it, often with just a nudge in the right direction and then allowing them to follow their own intuition the rest of the way,” he says.

    The right direction for Leppink points toward San Diego and RF current drive experiments on DIII-D. He is grateful for the support from the SCGSR, a program created to prepare graduate students like him for science, technology, engineering, or mathematics careers important to the DOE Office of Science mission. It provides graduate thesis research opportunities through extended residency at DOE national laboratories. He has already made several trips to DIII-D, in part to install his reflectometer, and has been impressed with the size of the operation.

    “It takes a little while to kind of compartmentalize everything and say, ‘OK, well, here’s my part of the machine. This is what I’m doing.’ It can definitely be overwhelming at times. But I’m blessed to be able to work on what has been the workhorse tokamak of the United States for the past few decades.”

  • How the universe got its magnetic field

    When we look out into space, all of the astrophysical objects that we see are embedded in magnetic fields. This is true not only in the neighborhood of stars and planets, but also in the deep space between galaxies and galactic clusters. These fields are weak — typically much weaker than those of a refrigerator magnet — but they are dynamically significant, with profound effects on the evolution of the universe. Despite decades of intense interest and research, the origin of these cosmic magnetic fields remains one of the most profound mysteries in cosmology.

    In previous research, scientists came to understand how turbulence, the churning motion common to fluids of all types, could amplify preexisting magnetic fields through the so-called dynamo process. But this remarkable discovery just pushed the mystery one step deeper. If a turbulent dynamo could only amplify an existing field, where did the “seed” magnetic field come from in the first place?

    We wouldn’t have a complete and self-consistent answer to the origin of astrophysical magnetic fields until we understood how the seed fields arose. New work carried out by MIT graduate student Muni Zhou, her advisor Nuno Loureiro, a professor of nuclear science and engineering at MIT, and colleagues at Princeton University and the University of Colorado at Boulder provides an answer that shows the basic processes that generate a field from a completely unmagnetized state to the point where it is strong enough for the dynamo mechanism to take over and amplify the field to the magnitudes that we observe.

    Magnetic fields are everywhere

    Naturally occurring magnetic fields are seen everywhere in the universe. They were first observed on Earth thousands of years ago, through their interaction with magnetized minerals like lodestone, and used for navigation long before people had any understanding of their nature or origin. Magnetism on the sun was discovered at the beginning of the 20th century through its effects on the spectrum of light that the sun emits. Since then, more powerful telescopes looking deep into space have found that such fields are ubiquitous.

    And while scientists had long learned how to make and use permanent magnets and electromagnets, which had all sorts of practical applications, the natural origins of magnetic fields in the universe remained a mystery. Recent work has provided part of the answer, but many aspects of this question are still under debate.

    Amplifying magnetic fields — the dynamo effect

    Scientists started thinking about this problem by considering the way that electric and magnetic fields were produced in the laboratory. When conductors, like copper wire, move in magnetic fields, electric fields are created. These fields, or voltages, can then drive electrical currents. This is how the electricity that we use every day is produced. Through this process of induction, large generators or “dynamos” convert mechanical energy into the electromagnetic energy that powers our homes and offices. A key feature of dynamos is that they need magnetic fields in order to work.
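    The induction at the heart of a dynamo can be stated in one line. As a minimal textbook sketch (not a model of any astrophysical system): a straight conductor of length L moving at speed v perpendicular to a uniform field B develops a motional EMF of B·L·v.

```python
def motional_emf(b_tesla, length_m, speed_m_s):
    """Voltage induced across a straight conductor of the given length moving
    perpendicular to a uniform magnetic field: EMF = B * L * v."""
    return b_tesla * length_m * speed_m_s

# A 1 m rod moving at 10 m/s through a 0.5 T field develops 5 V.
print(motional_emf(0.5, 1.0, 10.0))  # 5.0
```

    The key point carried over to astrophysics is the same as in this toy formula: no field B, no induced voltage, which is why a dynamo cannot start from nothing.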

    But out in the universe, there are no obvious wires or big steel structures, so how do the fields arise? Progress on this problem began about a century ago as scientists pondered the source of the Earth’s magnetic field. By then, studies of the propagation of seismic waves showed that much of the Earth, below the cooler surface layers of the mantle, was liquid, and that there was a core composed of molten nickel and iron. Researchers theorized that the convective motion of this hot, electrically conductive liquid and the rotation of the Earth combined in some way to generate the Earth’s field.

    Eventually, models emerged that showed how the convective motion could amplify an existing field. This is an example of “self-organization” — a feature often seen in complex dynamical systems — where large-scale structures grow spontaneously from small-scale dynamics. But just like in a power station, you needed a magnetic field to make a magnetic field.

    A similar process is at work all over the universe. However, in stars and galaxies and in the space between them, the electrically conducting fluid is not molten metal, but plasma — a state of matter that exists at extremely high temperatures where the electrons are ripped away from their atoms. On Earth, plasmas can be seen in lightning or neon lights. In such a medium, the dynamo effect can amplify an existing magnetic field, provided it starts at some minimal level.

    Making the first magnetic fields

    Where does this seed field come from? That’s where the recent work of Zhou and her colleagues, published May 5 in PNAS, comes in. Zhou developed the underlying theory and performed numerical simulations on powerful supercomputers that show how the seed field can be produced and what fundamental processes are at work. An important aspect of the plasma that exists between stars and galaxies is that it is extraordinarily diffuse — typically about one particle per cubic meter. That is a very different situation from the interior of stars, where the particle density is about 30 orders of magnitude higher. The low densities mean that the particles in cosmological plasmas never collide, which has important effects on their behavior that had to be included in the model that these researchers were developing.   

    Calculations performed by the MIT researchers followed the dynamics in these plasmas, which developed from well-ordered waves but became turbulent as the amplitude grew and the interactions became strongly nonlinear. By including detailed effects of the plasma dynamics at small scales on macroscopic astrophysical processes, they demonstrated that the first magnetic fields can be spontaneously produced through generic large-scale motions as simple as sheared flows. Just like the terrestrial examples, mechanical energy was converted into magnetic energy.

    An important output of their computation was the amplitude of the expected spontaneously generated magnetic field. What this showed was that the field amplitude could rise from zero to a level where the plasma is “magnetized” — that is, where the plasma dynamics are strongly affected by the presence of the field. At this point, the traditional dynamo mechanism can take over and raise the fields to the levels that are observed. Thus, their work represents a self-consistent model for the generation of magnetic fields at cosmological scale.

    Professor Ellen Zweibel of the University of Wisconsin at Madison notes that “despite decades of remarkable progress in cosmology, the origin of magnetic fields in the universe remains unknown. It is wonderful to see state-of-the-art plasma physics theory and numerical simulation brought to bear on this fundamental problem.”

    Zhou and co-workers will continue to refine their model and study the handoff from the generation of the seed field to the amplification phase of the dynamo. An important part of their future research will be to determine if the process can work on a time scale consistent with astronomical observations. To quote the researchers, “This work provides the first step in the building of a new paradigm for understanding magnetogenesis in the universe.”

    This work was funded by the National Science Foundation CAREER Award and the Future Investigators of NASA Earth and Space Science Technology (FINESST) grant.

  • MIT Climate and Sustainability Consortium announces recipients of inaugural MCSC Seed Awards

    The MIT Climate and Sustainability Consortium (MCSC) has awarded 20 projects a total of $5 million over two years in its first-ever 2022 MCSC Seed Awards program. The winning projects are led by principal investigators across all five of MIT’s schools.

    The goal of the MCSC Seed Awards is to engage MIT researchers and link the economy-wide work of the consortium to ongoing and emerging climate and sustainability efforts across campus. The program offers further opportunity to build networks among the awarded projects to deepen the impact of each and ensure the total is greater than the sum of its parts.

    For example, to drive progress under the awards category Circularity and Materials, the MCSC can facilitate connections between the technologists at MIT who are developing recovery approaches for metals, plastics, and fiber; the urban planners who are uncovering barriers to reuse; and the engineers who will look for efficiency opportunities in reverse supply chains.

    “The MCSC Seed Awards are designed to complement actions previously outlined in Fast Forward: MIT’s Climate Action Plan for the Decade and, more specifically, the Climate Grand Challenges,” says Anantha P. Chandrakasan, dean of the MIT School of Engineering, Vannevar Bush Professor of Electrical Engineering and Computer Science, and chair of the MIT Climate and Sustainability Consortium. “In collaboration with seed award recipients and MCSC industry members, we are eager to engage in interdisciplinary exploration and propel urgent advancements in climate and sustainability.” 

    By supporting MIT researchers with expertise in economics, infrastructure, community risk assessment, mobility, and alternative fuels, the MCSC will accelerate implementation of cross-disciplinary solutions in the awards category Decarbonized and Resilient Value Chains. Enhancing Natural Carbon Sinks and building connections to local communities will require associations across experts in ecosystem change, biodiversity, improved agricultural practice and engagement with farmers, all of which the consortium can begin to foster through the seed awards.

    “Funding opportunities across campus has been a top priority since launching the MCSC,” says Jeremy Gregory, MCSC executive director. “It is our honor to support innovative teams of MIT researchers through the inaugural 2022 MCSC Seed Awards program.”

    The winning projects are tightly aligned with the MCSC’s areas of focus, which were derived from a year of highly engaged collaborations with MCSC member companies. The projects apply across the members’ climate and sustainability goals.

    The MCSC’s 16 member companies span many industries, and since early 2021, have met with members of the MIT community to define focused problem statements for industry-specific challenges, identify meaningful partnerships and collaborations, and develop clear and scalable priorities. Outcomes from these collaborations laid the foundation for the focus areas, which have shaped the work of the MCSC. Specifically, the MCSC Industry Advisory Board engaged with MIT on key strategic directions, and played a critical role in the MCSC’s series of interactive events. These included virtual workshops hosted last summer, each on a specific topic that allowed companies to work with MIT and each other to align key assumptions, identify blind spots in corporate goal-setting, and leverage synergies between members, across industries. The work continued in follow-up sessions and an annual symposium.

    “We are excited to see how the seed award efforts will help our member companies reach or even exceed their ambitious climate targets, find new cross-sector links among each other, seek opportunities to lead, and ripple key lessons within their industry, while also deepening the Institute’s strong foundation in climate and sustainability research,” says Elsa Olivetti, the Esther and Harold E. Edgerton Associate Professor in Materials Science and Engineering and MCSC co-director.

    As the seed projects take shape, the MCSC will provide ongoing opportunities for awardees to engage with the Industry Advisory Board and technical teams from the MCSC member companies to learn more about the potential for linking efforts to support and accelerate their climate and sustainability goals. Awardees will also have the chance to engage with other members of the MCSC community, including its interdisciplinary Faculty Steering Committee.

    “One of our mantras in the MCSC is to ‘amplify and extend’ existing efforts across campus; we’re always looking for ways to connect the collaborative industry relationships we’re building and the work we’re doing with other efforts on campus,” notes Jeffrey Grossman, the Morton and Claire Goulder and Family Professor in Environmental Systems, head of the Department of Materials Science and Engineering, and MCSC co-director. “We feel the urgency as well as the potential, and we don’t want to miss opportunities to do more and go faster.”

    The MCSC Seed Awards complement the Climate Grand Challenges, a new initiative to mobilize the entire MIT research community around developing the bold, interdisciplinary solutions needed to address difficult, unsolved climate problems. The 27 finalist teams addressed four broad research themes, which align with the MCSC’s focus areas. From these finalist teams, five flagship projects were announced in April 2022.

    The parallels between MCSC’s focus areas and the Climate Grand Challenges themes underscore an important connection between the shared long-term research interests of industry and academia. The challenges that some of the world’s largest and most influential companies have identified are complementary to MIT’s ongoing research and innovation — highlighting the tremendous opportunity to develop breakthroughs and scalable solutions quickly and effectively. Special Presidential Envoy for Climate John Kerry underscored the importance of developing these scalable solutions, including critical new technology, during a conversation with MIT President L. Rafael Reif at MIT’s first Climate Grand Challenges showcase event last month.

    Both the MCSC Seed Awards and the Climate Grand Challenges are part of MIT’s larger commitment and initiative to combat climate change; this was underscored in “Fast Forward: MIT’s Climate Action Plan for the Decade,” which the Institute published in May 2021.

    The project titles and research leads for each of the 20 awardees listed below are categorized by MCSC focus area.

    Decarbonized and resilient value chains

    “Collaborative community mapping toolkit for resilience planning,” led by Miho Mazereeuw, associate professor of architecture and urbanism in the Department of Architecture and director of the Urban Risk Lab (a research lead on Climate Grand Challenges flagship project) and Nicholas de Monchaux, professor and department head in the Department of Architecture
    “CP4All: Fast and local climate projections with scientific machine learning — towards accessibility for all of humanity,” led by Chris Hill, principal research scientist in the Department of Earth, Atmospheric and Planetary Sciences and Dava Newman, director of the MIT Media Lab and the Apollo Program Professor in the Department of Aeronautics and Astronautics
    “Emissions reductions and productivity in U.S. manufacturing,” led by Mert Demirer, assistant professor of applied economics at the MIT Sloan School of Management and Jing Li, assistant professor and William Barton Rogers Career Development Chair of Energy Economics in the MIT Sloan School of Management
    “Logistics electrification through scalable and inter-operable charging infrastructure: operations, planning, and policy,” led by Alex Jacquillat, the 1942 Career Development Professor and assistant professor of operations research and statistics in the MIT Sloan School of Management
    “Powertrain and system design for LOHC-powered long-haul trucking,” led by William Green, the Hoyt Hottel Professor in Chemical Engineering in the Department of Chemical Engineering and postdoctoral officer, and Wai K. Cheng, professor in the Department of Mechanical Engineering and director of the Sloan Automotive Laboratory
    “Sustainable Separation and Purification of Biochemicals and Biofuels using Membranes,” led by John Lienhard, the Abdul Latif Jameel Professor of Water in the Department of Mechanical Engineering, director of the Abdul Latif Jameel Water and Food Systems Lab, and director of the Rohsenow Kendall Heat Transfer Laboratory; and Nicolas Hadjiconstantinou, professor in the Department of Mechanical Engineering, co-director of the Center for Computational Science and Engineering, associate director of the Center for Exascale Simulation of Materials in Extreme Environments, and graduate officer
    “Toolkit for assessing the vulnerability of industry infrastructure siting to climate change,” led by Michael Howland, assistant professor in the Department of Civil and Environmental Engineering

    Circularity and Materials

    “Colorimetric Sulfidation for Aluminum Recycling,” led by Antoine Allanore, associate professor of metallurgy in the Department of Materials Science and Engineering
    “Double Loop Circularity in Materials Design Demonstrated on Polyurethanes,” led by Brad Olsen, the Alexander and I. Michael Kasser (1960) Professor and graduate admissions co-chair in the Department of Chemical Engineering, and Kristala Prather, the Arthur Dehon Little Professor and department executive officer in the Department of Chemical Engineering
    “Engineering of a microbial consortium to degrade and valorize plastic waste,” led by Otto Cordero, associate professor in the Department of Civil and Environmental Engineering, and Desiree Plata, the Gilbert W. Winslow (1937) Career Development Professor in Civil Engineering and associate professor in the Department of Civil and Environmental Engineering
    “Fruit-peel-inspired, biodegradable packaging platform with multifunctional barrier properties,” led by Kripa Varanasi, professor in the Department of Mechanical Engineering
    “High Throughput Screening of Sustainable Polyesters for Fibers,” led by Gregory Rutledge, the Lammot du Pont Professor in the Department of Chemical Engineering, and Brad Olsen, the Alexander and I. Michael Kasser (1960) Professor and graduate admissions co-chair in the Department of Chemical Engineering
    “Short-term and long-term efficiency gains in reverse supply chains,” led by Yossi Sheffi, the Elisha Gray II Professor of Engineering Systems, professor in the Department of Civil and Environmental Engineering, and director of the Center for Transportation and Logistics
    “The costs and benefits of circularity in building construction,” led by Siqi Zheng, the STL Champion Professor of Urban and Real Estate Sustainability at the MIT Center for Real Estate and Department of Urban Studies and Planning, faculty director of the MIT Center for Real Estate, and faculty director for the MIT Sustainable Urbanization Lab; and Randolph Kirchain, principal research scientist and co-director of the MIT Concrete Sustainability Hub

    Natural carbon sinks

    “Carbon sequestration through sustainable practices by smallholder farmers,” led by Joann de Zegher, the Maurice F. Strong Career Development Professor and assistant professor of operations management in the MIT Sloan School of Management, and Karen Zheng, the George M. Bunker Professor and associate professor of operations management in the MIT Sloan School of Management
    “Coatings to protect and enhance diverse microbes for improved soil health and crop yields,” led by Ariel Furst, the Raymond A. (1921) And Helen E. St. Laurent Career Development Professor of Chemical Engineering in the Department of Chemical Engineering, and Mary Gehring, associate professor of biology in the Department of Biology, core member of the Whitehead Institute for Biomedical Research, and graduate officer
    “ECO-LENS: Mainstreaming biodiversity data through AI,” led by John Fernández, professor of building technology in the Department of Architecture and director of MIT Environmental Solutions Initiative
    “Growing season length, productivity, and carbon balance of global ecosystems under climate change,” led by Charles Harvey, professor in the Department of Civil and Environmental Engineering, and César Terrer, assistant professor in the Department of Civil and Environmental Engineering

    Social dimensions and adaptation

    “Anthro-engineering decarbonization at the million-person scale,” led by Manduhai Buyandelger, professor in the Anthropology Section, and Michael Short, the Class of ’42 Associate Professor of Nuclear Science and Engineering in the Department of Nuclear Science and Engineering
    “Sustainable solutions for climate change adaptation: weaving traditional ecological knowledge and STEAM,” led by Janelle Knox-Hayes, the Lister Brothers Associate Professor of Economic Geography and Planning and head of the Environmental Policy and Planning Group in the Department of Urban Studies and Planning, and Miho Mazereeuw, associate professor of architecture and urbanism in the Department of Architecture and director of the Urban Risk Lab (a research lead on a Climate Grand Challenges flagship project)

  • in

    MIT expands research collaboration with Commonwealth Fusion Systems to build net energy fusion machine, SPARC

    MIT’s Plasma Science and Fusion Center (PSFC) will substantially expand its fusion energy research and education activities under a new five-year agreement with Institute spinout Commonwealth Fusion Systems (CFS).

    “This expanded relationship puts MIT and PSFC in a prime position to be an even stronger academic leader that can help deliver the research and education needs of the burgeoning fusion energy industry, in part by utilizing the world’s first burning plasma and net energy fusion machine, SPARC,” says PSFC director Dennis Whyte. “CFS will build SPARC and develop a commercial fusion product, while MIT PSFC will focus on its core mission of cutting-edge research and education.”

    Commercial fusion energy has the potential to play a significant role in combating climate change, and there is a concurrent increase in interest from the energy sector, governments, and foundations. The new agreement, administered by the MIT Energy Initiative (MITEI), where CFS is a startup member, will help PSFC expand its fusion technology efforts with a wider variety of sponsors. The collaboration enables rapid execution at scale and technology transfer into the commercial sector as soon as possible.

    This new agreement doubles CFS’ financial commitment to PSFC, enabling greater recruitment and support of students, staff, and faculty. “We’ll significantly increase the number of graduate students and postdocs, and just as important they will be working on a more diverse set of fusion science and technology topics,” notes Whyte. It extends the collaboration between PSFC and CFS that resulted in numerous advances toward fusion power plants, including last fall’s demonstration of a high-temperature superconducting (HTS) fusion electromagnet with record-setting field strength of 20 tesla.

    The combined magnetic fusion efforts at PSFC will surpass those in place during the operations of the pioneering Alcator C-Mod tokamak device that operated from 1993 to 2016. This increase in activity reflects a moment when multiple fusion energy technologies are seeing rapidly accelerating development worldwide, and the emergence of a new fusion energy industry that would require thousands of trained people.

    MITEI director Robert Armstrong adds, “Our goal from the beginning was to create a membership model that would allow startups who have specific research challenges to leverage the MITEI ecosystem, including MIT faculty, students, and other MITEI members. The teams at the PSFC and MITEI have worked seamlessly to support CFS, and we are excited for this next phase of the relationship.”

    PSFC is supporting CFS’ efforts toward realizing the SPARC fusion platform, which facilitates rapid development and refinement of elements (including HTS magnets) needed to build ARC, a compact, modular, high-field fusion power plant that would set the stage for commercial fusion energy production. The concepts originated in Whyte’s nuclear science and engineering class 22.63 (Principles of Fusion Engineering) and have been carried forward by students and PSFC staff, many of whom helped found CFS; the new activity will expand research into advanced technologies for the envisioned pilot plant.

    “This has been an incredibly effective collaboration that has resulted in a major breakthrough for commercial fusion with the successful demonstration of revolutionary fusion magnet technology that will enable the world’s first commercially relevant net energy fusion device, SPARC, currently under construction,” says Bob Mumgaard SM ’15, PhD ’15, CEO of Commonwealth Fusion Systems. “We look forward to this next phase in the collaboration with MIT as we tackle the critical research challenges ahead for the next steps toward fusion power plant development.”

    In the push for commercial fusion energy, the next five years are critical, requiring intensive work on materials longevity, heat transfer, fuel recycling, maintenance, and other crucial aspects of power plant development. It will need innovation from almost every engineering discipline. “Having great teams working now, it will cut the time needed to move from SPARC to ARC, and really unleash the creativity. And the thing MIT does so well is cut across disciplines,” says Whyte.

    “To address the climate crisis, the world needs to deploy existing clean energy solutions as widely and as quickly as possible, while at the same time developing new technologies — and our goal is that those new technologies will include fusion power,” says Maria T. Zuber, MIT’s vice president for research. “To make new climate solutions a reality, we need focused, sustained collaborations like the one between MIT and Commonwealth Fusion Systems. Delivering fusion power onto the grid is a monumental challenge, and the combined capabilities of these two organizations are what the challenge demands.”

    On a strategic level, climate change and the imperative need for widely implementable carbon-free energy have helped orient the PSFC team toward scalability. “Building one or 10 fusion plants doesn’t make a difference — we have to build thousands,” says Whyte. “The design decisions we make will impact the ability to do that down the road. The real enemy here is time, and we want to remove as many impediments as possible and commit to funding a new generation of scientific leaders. Those are critically important in a field with as much interdisciplinary integration as fusion.”

  • in

    How can we reduce the carbon footprint of global computing?

    The voracious appetite for energy from the world’s computers and communications technology presents a clear threat to the globe’s warming climate. That was the blunt assessment from presenters at the intensive two-day Climate Implications of Computing and Communications workshop held on March 3 and 4, hosted by MIT’s Climate and Sustainability Consortium (MCSC), MIT-IBM Watson AI Lab, and the Schwarzman College of Computing.

    The virtual event featured rich discussions and highlighted opportunities for collaboration among an interdisciplinary group of MIT faculty and researchers and industry leaders across multiple sectors — underscoring the power of academia and industry coming together.

    “If we continue with the existing trajectory of compute energy, by 2040, we are supposed to hit the world’s energy production capacity. The increase in compute energy and demand has been increasing at a much faster rate than the world energy production capacity increase,” said Bilge Yildiz, the Breene M. Kerr Professor in the MIT departments of Nuclear Science and Engineering and Materials Science and Engineering, one of the workshop’s 18 presenters. This computing energy projection draws from the Semiconductor Research Corporation’s decadal report.

    To cite just one example: information and communications technology already accounts for more than 2 percent of global energy demand, on a par with the aviation industry’s emissions from fuel.

    “We are at the very beginning of this data-driven world. We really need to start thinking about this and act now,” said presenter Evgeni Gousev, senior director at Qualcomm.

    Innovative energy-efficiency options

    To that end, the workshop presentations explored a host of energy-efficiency options, including specialized chip design, data center architecture, better algorithms, hardware modifications, and changes in consumer behavior. Industry leaders from AMD, Ericsson, Google, IBM, iRobot, NVIDIA, Qualcomm, Tertill, Texas Instruments, and Verizon outlined their companies’ energy-saving programs, while experts from across MIT provided insight into current research that could yield more efficient computing.

    Panel topics ranged from “Custom hardware for efficient computing” to “Hardware for new architectures” to “Algorithms for efficient computing,” among others.
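    Yildiz’s warning is, at heart, a compound-growth argument: if compute demand grows faster than energy production capacity, the two curves must eventually cross. The following is a minimal sketch of that arithmetic; all of the growth rates and the starting share are hypothetical placeholders, not figures from the Semiconductor Research Corporation report.

```python
# An illustrative compound-growth sketch, not the Semiconductor Research
# Corporation's actual model: all rates below are hypothetical placeholders
# chosen only to show how a faster-growing demand curve overtakes capacity.

def years_until_crossover(compute_share, compute_growth, capacity_growth):
    """Years until compute demand reaches total energy production capacity.

    compute_share   -- compute's current fraction of capacity (e.g. 0.03)
    compute_growth  -- assumed annual growth rate of compute energy demand
    capacity_growth -- assumed annual growth rate of production capacity
    """
    years = 0
    while compute_share < 1.0 and years < 200:
        compute_share *= (1 + compute_growth) / (1 + capacity_growth)
        years += 1
    return years

# If compute uses a few percent of capacity today and its demand doubles
# roughly every three years against ~2 percent annual capacity growth:
print(years_until_crossover(0.03, 0.26, 0.02), "years to crossover")
```

    Even from a small starting share, the exponential gap closes within a couple of decades under these assumed rates, which is the shape of the projection Yildiz described.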

    Visual representation of the conversation during the workshop session entitled “Energy Efficient Systems.”

    Image: Haley McDevitt


    The goal, said Yildiz, is to improve energy efficiency associated with computing by more than a million-fold.

    “I think part of the answer of how we make computing much more sustainable has to do with specialized architectures that have very high level of utilization,” said Darío Gil, IBM senior vice president and director of research, who stressed that solutions should be as “elegant” as possible.

    For example, Gil illustrated an innovative chip design that uses vertical stacking to reduce the distance data has to travel, and thus reduces energy consumption. Surprisingly, more effective use of tape — a traditional medium for primary data storage — combined with specialized hard disk drives (HDD), can yield dramatic savings in carbon dioxide emissions.

    Gil and presenters Bill Dally, chief scientist and senior vice president of research of NVIDIA; Ahmad Bahai, CTO of Texas Instruments; and others zeroed in on storage. Gil compared data to a floating iceberg: we can have fast access to the “hot data” of the smaller visible part, while the “cold data,” the large underwater mass, represents data that tolerates higher latency. Think about digital photo storage, Gil said. “Honestly, are you really retrieving all of those photographs on a continuous basis?” Storage systems should provide an optimized mix of HDD for hot data and tape for cold data, based on data access patterns.

    Bahai stressed the significant energy savings gained from segmenting standby and full processing. “We need to learn how to do nothing better,” he said. Dally spoke of mimicking the way our brain wakes up from a deep sleep: “We can wake [computers] up much faster, so we don’t need to keep them running at full speed.”

    Several workshop presenters spoke of a focus on “sparsity,” a matrix in which most of the elements are zero, as a way to improve efficiency in neural networks. Or as Dally said, “Never put off till tomorrow what you could put off forever,” explaining that efficiency is not “getting the most information with the fewest bits. It’s doing the most with the least energy.”

    Holistic and multidisciplinary approaches

    “We need both efficient algorithms and efficient hardware, and sometimes we need to co-design both the algorithm and the hardware for efficient computing,” said Song Han, a panel moderator and assistant professor in the Department of Electrical Engineering and Computer Science (EECS) at MIT.

    Some presenters were optimistic about innovations already underway. According to Ericsson’s research, as much as 15 percent of carbon emissions globally can be reduced through the use of existing solutions, noted Mats Pellbäck Scharp, head of sustainability at Ericsson. For example, GPUs are more efficient than CPUs for AI, and the progression from 3G to 5G networks boosts energy savings.

    “5G is the most energy-efficient standard ever,” said Scharp. “We can build 5G without increasing energy consumption.”

    Companies such as Google are optimizing energy use at their data centers through improved design, technology, and renewable energy. “Five of our data centers around the globe are operating near or above 90 percent carbon-free energy,” said Jeff Dean, Google’s senior fellow and senior vice president of Google Research.

    Yet, pointing to a possible slowdown in the doubling of transistors in an integrated circuit (Moore’s Law), “We need new approaches to meet this compute demand,” said Sam Naffziger, AMD senior vice president, corporate fellow, and product technology architect. Naffziger spoke of addressing performance “overkill.” For example, “we’re finding in the gaming and machine learning space we can make use of lower-precision math to deliver an image that looks just as good with 16-bit computations as with 32-bit computations, and instead of legacy 32b math to train AI networks, we can use lower-energy 8b or 16b computations.”
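    The appeal of sparsity and lower-precision math comes down to moving fewer, smaller numbers. As a rough numerical illustration, the sketch below compares the storage footprint of a dense 32-bit weight matrix against a sparse 16-bit layout; the 90 percent sparsity level and the simplified coordinate format are assumptions for illustration, not any specific accelerator’s numbers.

```python
# A rough storage-size comparison, using bytes moved as a crude proxy for
# energy: data movement dominates energy cost in modern accelerators. The
# 90 percent sparsity level and the simplified coordinate-format layout are
# illustrative assumptions, not any specific chip's numbers.

import random

random.seed(0)
n = 1024
# Hypothetical weight matrix where roughly 90 percent of entries are zero.
weights = [[random.gauss(0.0, 1.0) if random.random() < 0.1 else 0.0
            for _ in range(n)] for _ in range(n)]

nnz = sum(1 for row in weights for w in row if w != 0.0)

dense_fp32 = n * n * 4          # every entry stored, 4 bytes each
# One fp16 value plus two 2-byte indices per nonzero entry (ignores the
# overheads of real sparse formats such as CSR).
sparse_fp16 = nnz * (2 + 2 + 2)

print(f"dense fp32:  {dense_fp32:,} bytes")
print(f"sparse fp16: {sparse_fp16:,} bytes")
print(f"reduction:   {dense_fp32 / sparse_fp16:.1f}x")
```

    Under these assumptions the sparse half-precision layout shrinks the footprint several-fold even after paying for the indices, which is the kind of win the panelists were pointing to.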

    Visual representation of the conversation during the workshop session entitled “Wireless, networked, and distributed systems.”

    Image: Haley McDevitt


    Other presenters singled out compute at the edge as a prime energy hog.

    “We also have to change the devices that are put in our customers’ hands,” said Heidi Hemmer, senior vice president of engineering at Verizon. As we think about how we use energy, it is common to jump to data centers — but it really starts at the device itself and the energy that the devices use. Then we can think about home web routers, distributed networks, the data centers, and the hubs. “The devices are actually the least energy-efficient out of that,” concluded Hemmer.

    Some presenters had different perspectives. Several called for developing dedicated silicon chipsets for efficiency. However, panel moderator Muriel Medard, the Cecil H. Green Professor in EECS, described research at MIT, Boston University, and Maynooth University on the GRAND (Guessing Random Additive Noise Decoding) chip: “Rather than having obsolescence of chips as the new codes come in and in different standards, you can use one chip for all codes.”

    Whatever the chip or new algorithm, Helen Greiner, CEO of Tertill (a weeding robot) and co-founder of iRobot, emphasized that to get products to market, “we have to learn to go away from wanting to get the absolute latest and greatest, the most advanced processor that usually is more expensive.” She added, “I like to say robot demos are a dime a dozen, but robot products are very infrequent.”

    Greiner emphasized that consumers can play a role in pushing for more energy-efficient products, just as drivers began to demand electric cars.

    Dean also sees an environmental role for the end user. “We have enabled our cloud customers to select which cloud region they want to run their computation in, and they can decide how important it is that they have a low carbon footprint,” he said, also citing other interfaces that might allow consumers to decide which air flights are more efficient or what impact installing a solar panel on their home would have.

    However, Scharp said, “Prolonging the life of your smartphone or tablet is really the best climate action you can do if you want to reduce your digital carbon footprint.”

    Facing increasing demands

    Despite their optimism, the presenters acknowledged the world faces increasing compute demand from machine learning, AI, gaming, and, especially, blockchain. Panel moderator Vivienne Sze, associate professor in EECS, noted the conundrum.

    “We can do a great job in making computing and communication really efficient. But there is this tendency that once things are very efficient, people use more of it, and this might result in an overall increase in the usage of these technologies, which will then increase our overall carbon footprint,” Sze said.

    Presenters saw great potential in academic/industry partnerships, particularly from research efforts on the academic side. “By combining these two forces together, you can really amplify the impact,” concluded Gousev.

    Presenters at the Climate Implications of Computing and Communications workshop also included: Joel Emer, professor of the practice in EECS at MIT; David Perreault, the Joseph F. and Nancy P. Keithley Professor of EECS at MIT; Jesús del Alamo, MIT Donner Professor and professor of electrical engineering in EECS at MIT; Heike Riel, IBM Fellow and head of science and technology at IBM; and Takashi Ando, principal research staff member at IBM Research. The recorded workshop sessions are available on YouTube.

  • in

    Machine learning, harnessed to extreme computing, aids fusion energy development

    MIT research scientists Pablo Rodriguez-Fernandez and Nathan Howard have just completed one of the most demanding calculations in fusion science — predicting the temperature and density profiles of a magnetically confined plasma via first-principles simulation of plasma turbulence. Solving this problem by brute force is beyond the capabilities of even the most advanced supercomputers. Instead, the researchers used an optimization methodology developed for machine learning to dramatically reduce the CPU time required while maintaining the accuracy of the solution.

    Fusion energy

    Fusion offers the promise of unlimited, carbon-free energy through the same physical process that powers the sun and the stars. It requires heating the fuel to temperatures above 100 million degrees, well above the point where the electrons are stripped from their atoms, creating a form of matter called plasma. On Earth, researchers use strong magnetic fields to isolate and insulate the hot plasma from ordinary matter. The stronger the magnetic field, the better the quality of the insulation that it provides.

    Rodriguez-Fernandez and Howard have focused on predicting the performance expected in the SPARC device, a compact, high-magnetic-field fusion experiment, currently under construction by the MIT spin-out company Commonwealth Fusion Systems (CFS) and researchers from MIT’s Plasma Science and Fusion Center. While the calculation required an extraordinary amount of computer time, over 8 million CPU-hours, what was remarkable was not how much time was used, but how little, given the daunting computational challenge.

    The computational challenge of fusion energy

    Turbulence, which is the mechanism for most of the heat loss in a confined plasma, is one of the science’s grand challenges and the greatest problem remaining in classical physics. The equations that govern fusion plasmas are well known, but analytic solutions are not possible in the regimes of interest, where nonlinearities are important and solutions encompass an enormous range of spatial and temporal scales. Scientists resort to solving the equations by numerical simulation on computers. It is no accident that fusion researchers have been pioneers in computational physics for the last 50 years.

    One of the fundamental problems for researchers is reliably predicting plasma temperature and density given only the magnetic field configuration and the externally applied input power. In confinement devices like SPARC, the external power and the heat input from the fusion process are lost through turbulence in the plasma. The turbulence itself is driven by the difference in the extremely high temperature of the plasma core and the relatively cool temperatures of the plasma edge (merely a few million degrees). Predicting the performance of a self-heated fusion plasma therefore requires a calculation of the power balance between the fusion power input and the losses due to turbulence.

    These calculations generally start by assuming plasma temperature and density profiles at a particular location, then computing the heat transported locally by turbulence. However, a useful prediction requires a self-consistent calculation of the profiles across the entire plasma, which includes both the heat input and turbulent losses. Directly solving this problem is beyond the capabilities of any existing computer, so researchers have developed an approach that stitches the profiles together from a series of demanding but tractable local calculations. This method works, but since the heat and particle fluxes depend on multiple parameters, the calculations can be very slow to converge.

    However, techniques emerging from the field of machine learning are well suited to optimizing just such a calculation. Starting with a set of computationally intensive local calculations run with the full-physics, first-principles CGYRO code (provided by a team from General Atomics led by Jeff Candy), Rodriguez-Fernandez and Howard fit a surrogate mathematical model, which was then used to explore and optimize the search within the parameter space. The results of the optimization were compared to the exact calculations at each optimum point, and the system was iterated to a desired level of accuracy. The researchers estimate that the technique reduced the number of runs of the CGYRO code by a factor of four.
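    In one dimension, that fit-optimize-verify loop can be sketched with a quadratic surrogate. This is a toy version, not the researchers’ actual workflow: their surrogate is multidimensional and fit to CGYRO turbulence runs, whereas here an arbitrary quadratic stands in for the expensive simulation.

```python
# A one-dimensional toy version of the surrogate strategy described above.
# Each loop iteration refits a cheap parabola to the three best samples,
# jumps to its minimum, and re-evaluates the "expensive" function there
# until the iterates stop moving.

def parabola_vertex(p0, p1, p2):
    """Minimizer of the quadratic through three non-collinear (x, f) samples."""
    (x0, f0), (x1, f1), (x2, f2) = p0, p1, p2
    num = (x1 - x0) ** 2 * (f1 - f2) - (x1 - x2) ** 2 * (f1 - f0)
    den = (x1 - x0) * (f1 - f2) - (x1 - x2) * (f1 - f0)
    return x1 - 0.5 * num / den

def surrogate_minimize(expensive_f, x_samples, tol=1e-6, max_calls=25):
    """Minimize expensive_f, refitting a quadratic surrogate after each call."""
    n_calls = 0

    def evaluate(x):
        nonlocal n_calls
        n_calls += 1
        return expensive_f(x)

    points = [(x, evaluate(x)) for x in x_samples]
    while n_calls < max_calls:
        points.sort(key=lambda p: p[1])        # three best samples first
        x_new = parabola_vertex(*points[:3])
        if abs(x_new - points[0][0]) < tol:    # surrogate minimum stopped moving
            return x_new, n_calls
        points.append((x_new, evaluate(x_new)))
    return min(points, key=lambda p: p[1])[0], n_calls

# Stand-in for a turbulence run; true minimum at x = 1.7.
x_best, n_runs = surrogate_minimize(lambda x: (x - 1.7) ** 2 + 0.5,
                                    [0.0, 1.0, 3.0])
print(f"minimum near x = {x_best:.4f} after {n_runs} expensive calls")
```

    The toy problem converges after only a handful of “expensive” evaluations; the same fit-jump-verify pattern, carried out in many dimensions against real turbulence runs, is what the article credits with cutting the number of CGYRO calls.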

    New approach increases confidence in predictions

    This work, described in a recent publication in the journal Nuclear Fusion, is the highest-fidelity calculation ever made of the core of a fusion plasma. It refines and confirms predictions made with less demanding models. Professor Jonathan Citrin, of the Eindhoven University of Technology and leader of the fusion modeling group for DIFFER, the Dutch Institute for Fundamental Energy Research, commented: “The work significantly accelerates our capabilities in more routinely performing ultra-high-fidelity tokamak scenario prediction. This algorithm can help provide the ultimate validation test of machine design or scenario optimization carried out with faster, more reduced modeling, greatly increasing our confidence in the outcomes.”

    In addition to increasing confidence in the fusion performance of the SPARC experiment, this technique provides a roadmap to check and calibrate reduced physics models, which run with a small fraction of the computational power. Such models, cross-checked against the results generated from turbulence simulations, will provide a reliable prediction before each SPARC discharge, helping to guide experimental campaigns and improving the scientific exploitation of the device. It can also be used to tweak and improve even simple data-driven models, which run extremely quickly, allowing researchers to sift through enormous parameter ranges to narrow down possible experiments or possible future machines.

    The research was funded by CFS, with computational support from the National Energy Research Scientific Computing Center, a U.S. Department of Energy Office of Science User Facility.

  • in

    Using excess heat to improve electrolyzers and fuel cells

    Reducing the use of fossil fuels will have unintended consequences for the power-generation industry and beyond. For example, many industrial chemical processes use fossil-fuel byproducts as precursors to things like asphalt, glycerine, and other important chemicals. One solution to reduce the impact of the loss of fossil fuels on industrial chemical processes is to store and use the heat that nuclear fission produces. New MIT research has dramatically improved a way to put that heat toward generating chemicals through a process called electrolysis. 

    Electrolyzers are devices that use electricity to split water (H2O) and generate molecules of hydrogen (H2) and oxygen (O2). Hydrogen is used in fuel cells to generate electricity and drive electric cars or drones or in industrial operations like the production of steel, ammonia, and polymers. Electrolyzers can also take in water and carbon dioxide (CO2) and produce oxygen and ethylene (C2H4), a chemical used in polymers and elsewhere.

    There are three main types of electrolyzers. One type works at room temperature but has downsides: it is inefficient and requires rare metals, such as platinum. A second type is more efficient but runs at high temperatures, above 700 degrees Celsius. But metals corrode at that temperature, and the devices need expensive sealing and insulation. The third type would be a Goldilocks solution for nuclear heat if it were perfected, running at 300 to 600 C and requiring mostly cheap materials like stainless steel. These cells have never been operated as efficiently as theory says they should be. The new work, published this month in Nature, both illuminates the problem and offers a solution.

    A sandwich mystery

    The intermediate-temperature devices use what are called protonic ceramic electrochemical cells. Each cell is a sandwich, with a dense electrolyte layered between two porous electrodes. Water vapor is pumped into the top electrode. A wire on the side connects the two electrodes, and externally generated electricity runs from the top to the bottom. The voltage pulls electrons out of the water, which splits the molecule, releasing oxygen. A hydrogen atom without an electron is just a proton. The protons get pulled through the electrolyte to rejoin with the electrons at the bottom electrode and form H2 molecules, which are then collected.
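    The stoichiometry described above fixes the relationship between current and hydrogen output: every H2 molecule collected at the bottom electrode accounts for two electrons driven through the external wire, so Faraday’s law converts current directly into a production rate. The sketch below applies that law; the 10 A example current is an arbitrary illustration, not a figure from the paper.

```python
# A back-of-the-envelope check on the cell chemistry described above: two
# protons and two electrons recombine per H2 molecule, so Faraday's law
# turns electrolysis current into a hydrogen production rate. The 10 A
# example is an arbitrary illustrative value, not a figure from the paper.

FARADAY = 96485.0        # coulombs per mole of electrons
ELECTRONS_PER_H2 = 2     # 2 H+ + 2 e- -> H2
MOLAR_MASS_H2 = 2.016    # grams per mole

def h2_rate_g_per_hour(current_a, faradaic_efficiency=1.0):
    """Hydrogen mass (grams) produced per hour by an electrolysis current."""
    moles_per_second = (faradaic_efficiency * current_a
                        / (ELECTRONS_PER_H2 * FARADAY))
    return moles_per_second * MOLAR_MASS_H2 * 3600.0

# e.g. a cell passing 10 A at perfect faradaic efficiency:
print(f"{h2_rate_g_per_hour(10.0):.3f} g of H2 per hour")
```

    The same bookkeeping run in reverse describes the fuel-cell mode mentioned later in the article: consuming hydrogen at this rate would deliver the corresponding current.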

    On its own, the electrolyte in the middle, made mainly of barium, cerium, and zirconium, conducts protons very well. “But when we put the same material into this three-layer device, the proton conductivity of the full cell is pretty bad,” says Yanhao Dong, a postdoc in MIT’s Department of Nuclear Science and Engineering and a paper co-author. “Its conductivity is only about 50 percent of the bulk form’s. We wondered why there’s an inconsistency here.”

    A couple of clues pointed them in the right direction. First, if they don’t prepare the cell very carefully, the top layer, only about 20 microns (0.02 millimeters) thick, doesn’t stay attached. “Sometimes if you use just Scotch tape, it will peel off,” Dong says. Second, when they looked at a cross section of a device using a scanning electron microscope, they saw that the top surface of the electrolyte layer was flat, whereas the bottom surface of the porous electrode sitting on it was bumpy, and the two came into contact in only a few places. They didn’t bond well. That precarious interface leads to both structural delamination and poor proton passage from the electrode to the electrolyte.

    Acidic solution

    The solution turned out to be simple: researchers roughed up the top of the electrolyte. Specifically, they applied acid for 10 minutes, which etched grooves into the surface. Ju Li, the Battelle Energy Alliance Professor in Nuclear Engineering and professor of materials science and engineering at MIT, and a paper co-author, likens it to sandblasting a surface before applying paint to increase adhesion. Their acid-treated cells produced about 200 percent more hydrogen per area at 1.5 volts at 600 C than did any previous cell of its type, and worked well down to 350 C with very little performance decay over extended operation. 

    “The authors reported a surprisingly simple yet highly effective surface treatment to dramatically improve the interface,” says Liangbing Hu, the director of the Center for Materials Innovation at the Maryland Energy Innovation Institute, who was not involved in the work. He calls the cell performance “exceptional.”

    “We are excited and surprised” by the results, Dong says. “The engineering solution seems quite simple. And that’s actually good, because it makes it very applicable to real applications.” In a practical product, many such cells would be stacked together to form a module. MIT’s partner in the project, Idaho National Laboratory, is very strong in engineering and prototyping, so Li expects to see electrolyzers based on this technology at scale before too long. “At the materials level, this is a breakthrough that shows that at a real-device scale you can work at this sweet spot of temperature of 350 to 600 degrees Celsius for nuclear fission and fusion reactors,” he says.

    “Reduced operating temperature enables cheaper materials for the large-scale assembly, including the stack,” says Idaho National Laboratory researcher and paper co-author Dong Ding. “The technology operates within the same temperature range as several important, current industrial processes, including ammonia production and CO2 reduction. Matching these temperatures will expedite the technology’s adoption within the existing industry.”

    “This is very significant for both Idaho National Lab and us,” Li adds, “because it bridges nuclear energy and renewable electricity.” He notes that the technology could also help fuel cells, which are basically electrolyzers run in reverse, using green hydrogen or hydrocarbons to generate electricity. According to Wei Wu, a materials scientist at Idaho National Laboratory and a paper co-author, “this technique is quite universal and compatible with other solid electrochemical devices.”

    Dong says it’s rare for a paper to advance both science and engineering to such a degree. “We are happy to combine those together and get both very good scientific understanding and also very good real-world performance.”

    This work, done in collaboration with Idaho National Laboratory, New Mexico State University, and the University of Nebraska–Lincoln, was funded, in part, by the U.S. Department of Energy.