More stories

  • High-energy and hungry for the hardest problems

    A high school track star and valedictorian, Anne White has always relished moving fast and clearing high hurdles. Since joining the Department of Nuclear Science and Engineering (NSE) in 2009 she has produced path-breaking fusion research, helped attract a more diverse cohort of students and scholars into the discipline, and, during a worldwide pandemic, assumed the role of department head as well as co-lead of an Institute-wide initiative to address climate change. For her exceptional leadership, innovation, and accomplishments in education and research, White was named the School of Engineering Distinguished Professor of Engineering in July 2020.

    But White professes little interest in recognition or promotions. “I don’t care about all that stuff,” she says. She’s in the race for much bigger stakes. “I want to find ways to save the world with nuclear,” she says.

    Tackling turbulence

    It was this goal that drew White to MIT. Her research, honed during graduate studies at the University of California at Los Angeles, involved developing a detailed understanding of conditions inside fusion devices, and resolving issues critical to realizing the vision of fusion energy — a carbon-free, nearly limitless source of power generated by 150-million-degree plasma.

    Harnessing this superheated, gaseous form of matter requires a special donut-shaped device called a tokamak, which contains the plasma within magnetic fields. When White entered fusion around the turn of the millennium, models of plasma behavior in tokamaks didn’t reliably match observed experimental conditions. She was determined to change that picture, working with MIT’s state-of-the-art research tokamak, Alcator C-Mod.

    [Video: Alcator C-Mod Tokamak Tour]

    White believed solving the fusion puzzle meant getting a handle on plasma turbulence — the process by which charged atomic particles, breaking out of magnetic confinement, transport heat from the core to the cool edges of the tokamak. Although researchers knew that fusion energy depends on containing and controlling the heat of plasma reactions, White recalls that when she began grad school, “it was not widely accepted that turbulence was important, and that it was central to heat transport.” She “felt it was critical to compare experimental measurements to first principles physics models, so we could demonstrate the significance of turbulence and give tokamak models better predictive ability.”

    In a series of groundbreaking studies, White’s team created the tools for measuring turbulence in different conditions, and developed computational models that could account for variations in turbulence, all validated by experiments. She was one of the first fusion scientists both to perform experiments and conduct simulations. “We lived in the domain between these two worlds,” she says.

    White’s turbulence models opened up approaches for managing turbulence and maximizing tokamak performance, paving the way for net-energy fusion devices, including ITER, the world’s largest fusion experiment, and SPARC, a compact, high-magnetic-field tokamak being developed by MIT’s Plasma Science and Fusion Center in collaboration with Commonwealth Fusion Systems.

    Laser-focused on turbulence

    Growing up in the desert city of Yuma, Arizona, White spent her free time outdoors, hiking and camping. “I was always in the space of protecting the environment,” she says. The daughter of two lawyers who taught her “to argue quickly and efficiently,” she excelled in math and physics in high school. Awarded a full ride at the University of Arizona, she was intent on a path in science, one where she could tackle problems like global warming, as it was known then. Physics seemed like the natural concentration for her.

    But there was unexpected pushback. The physics advisor believed her physics grades were lackluster. “I said, ‘Who cares what this guy thinks; I’ll take physics classes anyway,’” recalls White. Being tenacious and “thick skinned,” says White, turned out to be life-altering. “I took nuclear physics, which opened my eyes to fission, which then set me off on a path of understanding nuclear power and advanced nuclear systems,” she says. Math classes introduced her to chaotic systems, and she decided she wanted to study turbulence. Then, at a Society of Physics Students meeting White says she attended for the free food, she learned about fusion.

    “I realized this was what I wanted to do,” says White. “I became totally laser focused on turbulence and tokamaks.”

    At UCLA, she began to develop instruments and methods for measuring and modeling plasma turbulence, working on three different fusion research reactors, and earning fellowships from the Department of Energy (DOE) during her graduate and post-graduate years in fusion energy science. At MIT, she received a DOE Early Career Award that enabled her to build a research team that she now considers her “legacy.”

    As she expanded her research portfolio, White was also intent on incorporating fusion into the NSE curriculum at the undergraduate and graduate levels, and more broadly, on making NSE a destination for students concerned about climate change. In recognition of her efforts, she received the 2014 Junior Bose Teaching Award. She also helped design the edX course Nuclear Engineering: Science, Systems and Society, introducing thousands of online learners to the potential of the field. “I have to be in the classroom,” she says. “I have to be with students, interacting, and sharing knowledge and lines of inquiry with them.”

    But even as she deepened her engagement with teaching and with her fusion research, which was helping spur development of new fusion energy technologies, White could not resist leaping into a consequential new undertaking: chairing the department. “It sounds cheesy, but I did it for my kid,” she says. “I can be helpful working on fusion, but I thought, what if I can help more by enabling other people across all areas of nuclear? This department gave me so much, I wanted to give back.”

    Although the pandemic struck just months after she stepped into the role in 2019, White propelled the department toward a new strategic plan. “It captures all the urgency and passion of the faculty, and is attractive to new students, with more undergraduates enrolling and more graduate students applying,” she says. White sees the department advancing the broader goals of the field, “articulating why nuclear is fundamentally important across many dimensions for carbon-free electricity generation.” This means getting students involved in advanced fission technologies such as nuclear batteries and small modular reactors, as well as giving them an education in fusion that will help catalyze a nascent energy industry.

    Restless for a challenge

    White feels she’s still growing into the leadership role. “I’m really enthusiastic and sometimes too intense for people, so I have to dial it back during challenging conversations,” she says. She recently completed a Harvard Business School course on leadership.

    As the recently named co-chair of MIT’s Climate Nucleus (along with Professor Noelle Selin), charged with overseeing MIT’s campus initiatives around climate change, White says she draws on a repertoire of skills that come naturally to her: listening carefully, building consensus, and seeing value in the diversity of opinion. She is optimistic about mobilizing the Institute around goals to lower MIT’s carbon footprint, “using the entire campus as a research lab,” she says.

    In the midst of this push, White continues to advance projects of concern to her, such as making nuclear physics education more accessible. She developed an in-class module involving a simple particle detector for measuring background radiation. “Any high school or university student could build this experiment in 10 minutes and see alpha particle clusters and muons,” she says.

    White is also planning to host “Rising Stars,” an international conference intended to help underrepresented groups break barriers to entry in the field of nuclear science and engineering. “Grand intellectual challenges like saving the world appeal to all genders and backgrounds,” she says.

    These projects, her departmental and institutional duties, and most recently a new job chairing DOE’s Fusion Energy Sciences Advisory Committee leave her precious little time for a life outside work. But she makes time for walks and backpacking with her husband and toddler son, and reading the latest books by female faculty colleagues, such as “The New Breed,” by Media Lab robotics researcher Kate Darling, and “When People Want Punishment,” by Lily Tsai, Ford Professor of Political Science. “There are so many things I don’t know and want to understand,” says White.

    Yet even at leisure, White doesn’t slow down. “It’s restlessness: I love to learn, and anytime someone says a problem is hard, or impossible, I want to tackle it,” she says. There’s no time off, she believes, when the goal is “solving climate change and amplifying the work of other people trying to solve it.”

  • Solving a longstanding conundrum in heat transfer

    It is a problem that has confounded scientists for a century. But, buoyed by a $625,000 Distinguished Early Career Award from the U.S. Department of Energy (DOE), Matteo Bucci, an associate professor in the Department of Nuclear Science and Engineering (NSE), hopes to be close to an answer.

    Tackling the boiling crisis

    Whether you’re heating a pot of water for pasta or designing a nuclear reactor, one phenomenon — boiling — is vital to the efficient execution of both processes.

    “Boiling is a very effective heat transfer mechanism; it’s the way to remove large amounts of heat from the surface, which is why it is used in many high-power-density applications,” Bucci says. Nuclear reactors are a prime example.

    To the layperson, boiling appears simple — bubbles form and burst, removing heat. But what if so many bubbles form and coalesce that they create a blanket of vapor that blocks further heat transfer? This well-known failure mode is called the boiling crisis. It leads to runaway heating and, in a nuclear reactor, to the failure of fuel rods. So “understanding and determining under which conditions the boiling crisis is likely to happen is critical to designing more efficient and cost-competitive nuclear reactors,” Bucci says.

    Early work on the boiling crisis dates back nearly a century, to 1926. And while much work has been done, “it is clear that we haven’t found an answer,” Bucci says. The boiling crisis remains a challenge because, while models abound, measuring the related phenomena precisely enough to prove or disprove those models has been difficult. “[Boiling] is a process that happens on a very, very small length scale and over very, very short times,” Bucci says. “We are not able to observe it at the level of detail necessary to understand what really happens and validate hypotheses.”

    But over the past few years, Bucci and his team have been developing diagnostics that can measure boiling-related phenomena and thereby provide much-needed answers to a classic problem. The diagnostics are anchored in infrared thermometry and a technique using visible light. “By combining these two techniques I think we’re going to be ready to answer standing questions related to heat transfer; we can make our way out of the rabbit hole,” Bucci says. The grant from the U.S. DOE for Nuclear Energy Projects will aid this and Bucci’s other research efforts.

    An idyllic Italian childhood

    Tackling difficult problems is not new territory for Bucci, who grew up in the small town of Città di Castello near Florence, Italy. Bucci’s mother was an elementary school teacher, and his father owned a machine shop, which helped develop Bucci’s scientific bent. “I liked LEGOs a lot when I was a kid. It was a passion,” he adds.

    Although Italy was pulling back sharply from nuclear engineering during his formative years, the subject fascinated Bucci. Job opportunities in the field were uncertain, but Bucci decided to dig in. “If I have to do something for the rest of my life, it might as well be something I like,” he jokes. Bucci attended the University of Pisa for undergraduate and graduate studies in nuclear engineering.

    His interest in heat transfer mechanisms took root during his doctoral studies, which he pursued in Paris at the French Alternative Energies and Atomic Energy Commission (CEA). It was there that a colleague suggested he work on the boiling crisis. This time Bucci set his sights on NSE at MIT and reached out to Professor Jacopo Buongiorno to inquire about research at the institution. Bucci had to raise funds at CEA to conduct research at MIT. He arrived in 2013, just a couple of days before the Boston Marathon bombing, with a round-trip ticket. But Bucci has stayed ever since, moving on to become a research scientist and then associate professor at NSE.

    Bucci admits he struggled to adapt to the environment when he first arrived at MIT, but work and friendships with colleagues — he counts NSE’s Guanyu Su and Reza Azizian as among his best friends — helped conquer early worries.

    The integration of artificial intelligence

    In addition to diagnostics for boiling, Bucci and his team are working on ways of integrating artificial intelligence and experimental research. He is convinced that “the integration of advanced diagnostics, machine learning, and advanced modeling tools will blossom in a decade.”

    Bucci’s team is developing an autonomous laboratory for boiling heat transfer experiments. Running on machine learning, the setup decides which experiments to run based on a learning objective the team assigns. “We formulate a question and the machine will answer by optimizing the kinds of experiments that are necessary to answer those questions,” Bucci says. “I honestly think this is the next frontier for boiling.”

    “It’s when you climb a tree and you reach the top, that you realize that the horizon is much more vast and also more beautiful,” Bucci says of his zeal to pursue more research in the field.

    Even as he seeks new heights, Bucci has not forgotten his origins. Commemorating Italy’s hosting of the World Cup in 1990, a series of posters showcasing a soccer field fitted into the Roman Colosseum occupies pride of place in his home and office. Created by Alberto Burri, the posters are of sentimental value: The (now deceased) Italian artist also hailed from Bucci’s hometown — Città di Castello.

  • A better way to quantify radiation damage in materials

    It was just a piece of junk sitting in the back of a lab at the MIT Nuclear Reactor facility, ready to be disposed of. But it became the key to demonstrating a more comprehensive way of detecting atomic-level structural damage in materials — an approach that will aid the development of new materials, and could potentially support the ongoing operation of carbon-emission-free nuclear power plants, which would help alleviate global climate change.

    A tiny titanium nut that had been removed from inside the reactor was just the kind of material needed to prove that this new technique, developed at MIT and at other institutions, provides a way to probe defects created inside materials, including those that have been exposed to radiation, with five times greater sensitivity than existing methods.

    The new approach revealed that much of the damage that takes place inside reactors is at the atomic scale, and as a result is difficult to detect using existing methods. The technique provides a way to directly measure this damage through the way it changes with temperature. And it could be used to measure samples from the currently operating fleet of nuclear reactors, potentially enabling the continued safe operation of plants far beyond their presently licensed lifetimes.

    The findings are reported today in the journal Science Advances in a paper by MIT research specialist and recent graduate Charles Hirst PhD ’22; MIT professors Michael Short, Scott Kemp, and Ju Li; and five others at the University of Helsinki, the Idaho National Laboratory, and the University of California at Irvine.

    Rather than directly observing the physical structure of a material in question, the new approach looks at the amount of energy stored within that structure. Any disruption to the orderly structure of atoms within the material, such as that caused by radiation exposure or by mechanical stresses, actually imparts excess energy to the material. By observing and quantifying that energy difference, it’s possible to calculate the total amount of damage within the material — even if that damage is in the form of atomic-scale defects that are too small to be imaged with microscopes or other detection methods.

    The principle behind this method had been worked out in detail through calculations and simulations. But it was the actual tests on that one titanium nut from the MIT nuclear reactor that provided the proof — and thus opened the door to a new way of measuring damage in materials.

    The method they used is called differential scanning calorimetry. As Hirst explains, this is similar in principle to the calorimetry experiments many students carry out in high school chemistry classes, where they measure how much energy it takes to raise the temperature of a gram of water by one degree. The system the researchers used was “fundamentally the exact same thing, measuring energetic changes. … I like to call it just a fancy furnace with a thermocouple inside.”

    The scanning part has to do with gradually raising the temperature a bit at a time and seeing how the sample responds, and the differential part refers to the fact that two identical chambers are measured at once, one empty, and one containing the sample being studied. The difference between the two reveals details of the energy of the sample, Hirst explains.

    “We raise the temperature from room temperature up to 600 degrees Celsius, at a constant rate of 50 degrees per minute,” he says. Compared to the empty vessel, “your material will naturally lag behind because you need energy to heat your material. But if there are changes in the energy inside the material, that will change the temperature. In our case, there was an energy release when the defects recombine, and then it will get a little bit of a head start on the furnace … and that’s how we are measuring the energy in our sample.”
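    The arithmetic behind this measurement reduces to a simple integration. Here is a hedged sketch in Python: the ramp rate is the one quoted above, but the peak shape and magnitude are invented for illustration, not taken from the paper. Integrating the sample-minus-reference heat-flow difference over elapsed time gives the stored energy released as defects recombine.

```python
# Toy model of differential scanning calorimetry (DSC) analysis.
# Assumptions: constant 50 C/min ramp (as quoted in the article) and one
# synthetic Gaussian release peak; all magnitudes are illustrative.

import numpy as np

RAMP_RATE = 50.0 / 60.0  # 50 degrees C per minute, in degrees C per second

def stored_energy(temps, heat_flow_diff):
    """Integrate the sample-minus-reference heat-flow difference (W/g)
    over elapsed time to estimate the stored energy released (J/g)."""
    time = (temps - temps[0]) / RAMP_RATE  # seconds into the scan
    dt = np.diff(time)
    # Trapezoidal integration of the excess heat flow over time.
    return float(np.sum(0.5 * (heat_flow_diff[1:] + heat_flow_diff[:-1]) * dt))

# Synthetic scan from 25 C to 600 C with one exotherm centered at 400 C,
# standing in for heat released as radiation-induced defects recombine.
temps = np.linspace(25.0, 600.0, 2000)
release = 0.05 * np.exp(-((temps - 400.0) / 20.0) ** 2)  # W/g

energy = stored_energy(temps, release)
print(f"{energy:.2f} J/g of stored energy released")
```

    Integrating over elapsed time rather than temperature is what converts the known ramp rate into an energy per gram; a second, separate peak in the heat-flow difference would simply add its own area to the total.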

    Hirst, who carried out the work over a five-year span as his doctoral thesis project, found that contrary to what had been believed, the irradiated material showed that there were two different mechanisms involved in the relaxation of defects in titanium at the studied temperatures, revealed by two separate peaks in calorimetry. “Instead of one process occurring, we clearly saw two, and each of them corresponds to a different reaction that’s happening in the material,” he says.

    They also found that textbook explanations of how radiation damage behaves with temperature weren’t accurate, because previous tests had mostly been carried out at extremely low temperatures and then extrapolated to the higher temperatures of real-life reactor operations. “People weren’t necessarily aware that they were extrapolating, even though they were, completely,” Hirst says.

    “The fact is that our common-knowledge basis for how radiation damage evolves is based on extremely low-temperature electron radiation,” adds Short. “It just became the accepted model, and that’s what’s taught in all the books. It took us a while to realize that our general understanding was based on a very specific condition, designed to elucidate science, but generally not applicable to conditions in which we actually want to use these materials.”

    Now, the new method can be applied “to materials plucked from existing reactors, to learn more about how they are degrading with operation,” Hirst says.

    “The single biggest thing the world can do in order to get cheap, carbon-free power is to keep current reactors on the grid. They’re already paid for, they’re working,” Short adds. But to make that possible, “the only way we can keep them on the grid is to have more certainty that they will continue to work well.” And that’s where this new way of assessing damage comes into play.

    While most nuclear power plants have been licensed for 40 to 60 years of operation, “we’re now talking about running those same assets out to 100 years, and that depends almost fully on the materials being able to withstand the most severe accidents,” Short says. Using this new method, “we can inspect them and take them out before something unexpected happens.”

    In practice, plant operators could remove a tiny sample of material from critical areas of the reactor, and analyze it to get a more complete picture of the condition of the overall reactor. Keeping existing reactors running is “the single biggest thing we can do to keep the share of carbon-free power high,” Short stresses. “This is one way we think we can do that.”

    Sergei Dudarev, a fellow at the United Kingdom Atomic Energy Authority who was not associated with this work, says this “is likely going to be impactful, as it confirms, in a nice systematic manner, supported both by experiment and simulations, the unexpectedly significant part played by the small invisible defects in microstructural evolution of materials exposed to irradiation.”

    The process is not just limited to the study of metals, nor is it limited to damage caused by radiation, the researchers say. In principle, the method could be used to measure other kinds of defects in materials, such as those caused by stresses or shockwaves, and it could be applied to materials such as ceramics or semiconductors as well.

    In fact, Short says, metals are the most difficult materials to measure with this method, and early on other researchers kept asking why this team was focused on damage to metals. That was partly because reactor components tend to be made of metal, and also because “it’s the hardest, so, if we crack this problem, we have a tool to crack them all!”

    Measuring defects in other kinds of materials can be up to 10,000 times easier than in metals, he says. “If we can do this with metals, we can make this extremely, ubiquitously applicable.” And all of it enabled by a small piece of junk that was sitting at the back of a lab.

    The research team included Fredric Granberg and Kai Nordlund at the University of Helsinki in Finland; Boopathy Kombaiah and Scott Middlemas at Idaho National Laboratory; and Penghui Cao at the University of California at Irvine. The work was supported by the U.S. National Science Foundation, an Idaho National Laboratory research grant, and a Euratom Research and Training program grant.

  • New hardware offers faster computation for artificial intelligence, with much less energy

    As scientists push the boundaries of machine learning, the amount of time, energy, and money required to train increasingly complex neural network models is skyrocketing. A new area of artificial intelligence called analog deep learning promises faster computation with a fraction of the energy usage.

    Programmable resistors are the key building blocks in analog deep learning, just like transistors are the core elements for digital processors. By repeating arrays of programmable resistors in complex layers, researchers can create a network of analog artificial “neurons” and “synapses” that execute computations just like a digital neural network. This network can then be trained to achieve complex AI tasks like image recognition and natural language processing.
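    The physics that lets a resistor array act like a neural-network layer is just Ohm’s and Kirchhoff’s laws: each conductance serves as a weight, input voltages drive the columns, and the currents summed along each row form the output. A minimal sketch with made-up conductance and voltage values — none of these numbers describe the MIT device:

```python
# Sketch of how a resistor crossbar performs a matrix-vector multiply in
# one analog step. Each resistor's conductance G[i][j] plays the role of
# a network weight; input voltages drive the columns; Kirchhoff's current
# law sums the per-resistor currents on each row. Values are illustrative.

import numpy as np

rng = np.random.default_rng(0)

# 4 x 3 array of programmable conductances (siemens) = the weight matrix.
G = rng.uniform(1e-6, 1e-5, size=(4, 3))

# Input activations encoded as column voltages (volts).
V = np.array([0.2, -0.1, 0.3])

# Ohm's law per device (I = G * V) and Kirchhoff summation per row.
# In hardware every current flows simultaneously, so the whole multiply
# takes one step regardless of matrix size.
I = G @ V

print(I.shape)  # one summed current per output row
```

    Because every resistor conducts at once, the full product appears in a single step no matter how large the array grows, which is the parallelism the article describes.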

    A multidisciplinary team of MIT researchers set out to push the speed limits of a type of human-made analog synapse that they had previously developed. They utilized a practical inorganic material in the fabrication process that enables their devices to run 1 million times faster than previous versions, which is also about 1 million times faster than the synapses in the human brain.

    Moreover, this inorganic material also makes the resistor extremely energy-efficient. Unlike materials used in the earlier version of their device, the new material is compatible with silicon fabrication techniques. This change has enabled fabricating devices at the nanometer scale and could pave the way for integration into commercial computing hardware for deep-learning applications.

    “With that key insight, and the very powerful nanofabrication techniques we have at MIT.nano, we have been able to put these pieces together and demonstrate that these devices are intrinsically very fast and operate with reasonable voltages,” says senior author Jesús A. del Alamo, the Donner Professor in MIT’s Department of Electrical Engineering and Computer Science (EECS). “This work has really put these devices at a point where they now look really promising for future applications.”

    “The working mechanism of the device is electrochemical insertion of the smallest ion, the proton, into an insulating oxide to modulate its electronic conductivity. Because we are working with very thin devices, we could accelerate the motion of this ion by using a strong electric field, and push these ionic devices to the nanosecond operation regime,” explains senior author Bilge Yildiz, the Breene M. Kerr Professor in the departments of Nuclear Science and Engineering and Materials Science and Engineering.

    “The action potential in biological cells rises and falls with a timescale of milliseconds, since the voltage difference of about 0.1 volt is constrained by the stability of water,” says senior author Ju Li, the Battelle Energy Alliance Professor of Nuclear Science and Engineering and professor of materials science and engineering. “Here we apply up to 10 volts across a special solid glass film of nanoscale thickness that conducts protons, without permanently damaging it. And the stronger the field, the faster the ionic devices.”

    These programmable resistors vastly increase the speed at which a neural network is trained, while drastically reducing the cost and energy to perform that training. This could help scientists develop deep learning models much more quickly, which could then be applied in uses like self-driving cars, fraud detection, or medical image analysis.

    “Once you have an analog processor, you will no longer be training networks everyone else is working on. You will be training networks with unprecedented complexities that no one else can afford to, and therefore vastly outperform them all. In other words, this is not a faster car, this is a spacecraft,” adds lead author and MIT postdoc Murat Onen.

    Co-authors include Frances M. Ross, the Ellen Swallow Richards Professor in the Department of Materials Science and Engineering; postdocs Nicolas Emond and Baoming Wang; and Difei Zhang, an EECS graduate student. The research is published today in Science.

    Accelerating deep learning

    Analog deep learning is faster and more energy-efficient than its digital counterpart for two main reasons. First, computation is performed in memory, so enormous loads of data are not transferred back and forth between memory and a processor. Second, analog processors conduct operations in parallel: if the matrix size expands, an analog processor doesn’t need more time to complete new operations, because all computation occurs simultaneously.

    The key element of MIT’s new analog processor technology is known as a protonic programmable resistor. These resistors, which are measured in nanometers (one nanometer is one billionth of a meter), are arranged in an array, like a chess board.

    In the human brain, learning happens due to the strengthening and weakening of connections between neurons, called synapses. Deep neural networks have long adopted this strategy, where the network weights are programmed through training algorithms. In the case of this new processor, increasing and decreasing the electrical conductance of protonic resistors enables analog machine learning.

    The conductance is controlled by the movement of protons. To increase the conductance, more protons are pushed into a channel in the resistor, while to decrease conductance protons are taken out. This is accomplished using an electrolyte (similar to that of a battery) that conducts protons but blocks electrons.
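    That push-pull programming cycle can be caricatured in a few lines. In this toy model, the step size and conductance bounds are purely hypothetical, chosen only to illustrate potentiation (protons in) and depression (protons out) within a device’s physical range:

```python
# Toy model of programming a protonic resistor: positive pulses push
# protons into the channel (conductance up), negative pulses pull them
# out (conductance down). Step size and bounds are hypothetical.

G_MIN, G_MAX = 1.0, 10.0   # device conductance range (arbitrary units)
STEP = 0.5                 # conductance change per programming pulse

def apply_pulse(g, polarity):
    """Return the new conductance after one +1 (inject) or -1 (remove)
    pulse, clipped to the physical range of the device."""
    return min(G_MAX, max(G_MIN, g + polarity * STEP))

g = 5.0
for _ in range(4):          # four potentiation pulses
    g = apply_pulse(g, +1)
print(g)  # 7.0
for _ in range(12):         # enough depression pulses to hit the floor
    g = apply_pulse(g, -1)
print(g)  # 1.0
```

    Clipping at the bounds stands in for the finite number of proton sites in a real channel; a training algorithm would issue such pulses to nudge each weight up or down.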

    To develop a super-fast and highly energy efficient programmable protonic resistor, the researchers looked to different materials for the electrolyte. While other devices used organic compounds, Onen focused on inorganic phosphosilicate glass (PSG).

    PSG is basically silicon dioxide, the powdery desiccant material found in tiny bags that come in the box with new furniture to remove moisture. It is studied as a proton conductor under humidified conditions for fuel cells, and it is also the most well-known oxide used in silicon processing. To make PSG, a tiny bit of phosphorus is added to the silicon dioxide, giving it special characteristics for proton conduction.

    Onen hypothesized that an optimized PSG could have a high proton conductivity at room temperature without the need for water, which would make it an ideal solid electrolyte for this application. He was right.

    Surprising speed

    PSG enables ultrafast proton movement because it contains a multitude of nanometer-sized pores whose surfaces provide paths for proton diffusion. It can also withstand very strong, pulsed electric fields. This is critical, Onen explains, because applying more voltage to the device enables protons to move at blinding speeds.

    “The speed certainly was surprising. Normally, we would not apply such extreme fields across devices, in order to not turn them into ash. But instead, protons ended up shuttling at immense speeds across the device stack, specifically a million times faster compared to what we had before. And this movement doesn’t damage anything, thanks to the small size and low mass of protons. It is almost like teleporting,” he says.

    “The nanosecond timescale means we are close to the ballistic or even quantum tunneling regime for the proton, under such an extreme field,” adds Li.

    Because the protons don’t damage the material, the resistor can run for millions of cycles without breaking down. This new electrolyte enabled a programmable protonic resistor that is a million times faster than their previous device and can operate effectively at room temperature, which is important for incorporating it into computing hardware.

    Thanks to the insulating properties of PSG, almost no electric current passes through the material as protons move. This makes the device extremely energy efficient, Onen adds.

    Now that they have demonstrated the effectiveness of these programmable resistors, the researchers plan to reengineer them for high-volume manufacturing, says del Alamo. Then they can study the properties of resistor arrays and scale them up so they can be embedded into systems.

    At the same time, they plan to study the materials to remove bottlenecks that limit the voltage that is required to efficiently transfer the protons to, through, and from the electrolyte.

    “Another exciting direction that these ionic devices can enable is energy-efficient hardware to emulate the neural circuits and synaptic plasticity rules that are deduced in neuroscience, beyond analog deep neural networks. We have already started such a collaboration with neuroscience, supported by the MIT Quest for Intelligence,” adds Yildiz.

    “The collaboration that we have is going to be essential to innovate in the future. The path forward is still going to be very challenging, but at the same time it is very exciting,” del Alamo says.

    “Intercalation reactions such as those found in lithium-ion batteries have been explored extensively for memory devices. This work demonstrates that proton-based memory devices deliver impressive and surprising switching speed and endurance,” says William Chueh, associate professor of materials science and engineering at Stanford University, who was not involved with this research. “It lays the foundation for a new class of memory devices for powering deep learning algorithms.”

    “This work demonstrates a significant breakthrough in biologically inspired resistive-memory devices. These all-solid-state protonic devices are based on exquisite atomic-scale control of protons, similar to biological synapses but at orders of magnitude faster rates,” says Elizabeth Dickey, the Teddy & Wilton Hawkins Distinguished Professor and head of the Department of Materials Science and Engineering at Carnegie Mellon University, who was not involved with this work. “I commend the interdisciplinary MIT team for this exciting development, which will enable future-generation computational devices.”

    This research is funded, in part, by the MIT-IBM Watson AI Lab.


    Fusion’s newest ambassador

    When high school senior Tuba Balta emailed MIT Plasma Science and Fusion Center (PSFC) Director Dennis Whyte in February, she was not certain she would get a response. As part of her final semester at BASIS Charter School, in Washington, she had been searching unsuccessfully for someone to sponsor an internship in fusion energy, a topic that had recently begun to fascinate her because “it’s not figured out yet.” Time was running out if she was to include the internship as part of her senior project.

    “I never say ‘no’ to a student,” says Whyte, who felt she could provide a youthful perspective on communicating the science of fusion to the general public.

    Posters explaining the basics of fusion science were being considered for the walls of a PSFC lounge area, a space used to welcome visitors who might not know much about the center’s focus: What is fusion? What is plasma? What is magnetic confinement fusion? What is a tokamak?

    Why couldn’t Balta be tasked with coming up with text for these posters, written specifically to be understandable, even intriguing, to her peers?

    Meeting the team

    Although most of the internship would be virtual, Balta visited MIT to meet Whyte and others who would guide her progress. A tour of the center showed her the past and future of the PSFC, one lab area revealing on her left the remains of the decades-long Alcator C-Mod tokamak and on her right the testing area for new superconducting magnets crucial to SPARC, designed in collaboration with MIT spinoff Commonwealth Fusion Systems.

    With Whyte, graduate student Rachel Bielajew, and Outreach Coordinator Paul Rivenberg guiding her content and style, Balta focused on one of eight posters each week. Her school also required her to keep a weekly blog of her progress, detailing what she was learning in the process of creating the posters.

    Finding her voice

    Balta admits that she was not looking forward to this part of the school assignment. But she decided to have fun with it, adopting an enthusiastic and conversational tone, as if she were sitting with friends around a lunch table. Each week, she was able to work out what she was composing for her posters and her final project by trying it out on her friends in the blog.

    Her posts won praise from her schoolmates for their clarity, as when in Week 3 she explained the concept of turbulence as it relates to fusion research, sending her readers to their kitchen faucets to experiment with the pressure and velocity of running tap water.

    The voice she found through her blog served her well during her final presentation about fusion at a school expo for classmates, parents, and the general public.

    “Most people are intimidated by the topic, which they shouldn’t be,” says Balta. “And it just made me happy to help other people understand it.”

    Her favorite part of the internship? “Getting to talk to people whose papers I was reading and ask them questions. Because when it comes to fusion, you can’t just look it up on Google.”

    Awaiting her first year at the University of Chicago, Balta reflects on the team spirit she experienced in communicating with researchers at the PSFC.

    “I think that was one of my big takeaways,” she says, “that you have to work together. And you should, because you’re always going to be missing some piece of information; but there’s always going to be somebody else who has that piece, and we can all help each other out.”


    Pursuing progress at the nanoscale

    Last fall, a team of five senior undergraduate nuclear engineering students met once a week for dinners where they took turns cooking and debated how to tackle a particularly daunting challenge set forth in their program’s capstone course, 22.033 (Nuclear Systems Design Project).

    In past semesters, students had free rein to identify any real-world problem that interested them and to solve it through team-driven prototyping and design. This past fall worked a little differently: the team still took on a daunting problem, but this time it was assigned, a particular design challenge on MIT’s own campus. Rising to the challenge, the team spent the semester seeking a feasible way to bring a highly coveted technology to MIT.

    Housed inside a big blue dome is the MIT Nuclear Reactor Laboratory (NRL). The reactor is used to conduct a wide range of science experiments, but in recent years there have been multiple attempts to add an instrument that could probe the structure of materials, molecules, and devices. With this technology, researchers could model the structure of a wide range of materials and complex liquids, from polymers to samples containing nanoscale inhomogeneities that differ from the bulk. For the first time, researchers on campus could conduct experiments to better understand the properties and functions of anything placed in front of a neutron beam emanating from the reactor core.

    The impact of this would be immense. If the reactor could be adapted to conduct this advanced technique, known as small-angle neutron scattering (SANS), it would open up a whole new world of research at MIT.

    “It’s essentially using the nuclear reactor as an incredibly high-performance camera that researchers from all over MIT would be very interested in using, including nuclear science and engineering, chemical engineering, biological engineering, and materials science, who currently use this tool at other institutions,” says Zachary Hartwig, Nuclear Systems Design Project professor and the MIT Robert N. Noyce Career Development Professor.

    SANS instruments have been installed at fewer than 20 facilities worldwide, and MIT researchers have previously considered implementing the capability at the reactor to help MIT expand community-wide access to SANS. Last fall, this mission went from long-time campus dream to potential reality as it became the design challenge that Hartwig’s students confronted. Despite having no experience with SANS, the team embraced the challenge, taking the first steps to figure out how to bring this technology to campus.

    “I really loved the idea that what we were doing could have a very real impact,” says Zoe Fisher, Nuclear Systems Design Project team member and now graduate nuclear engineering student.

    Each fall, Hartwig uses the course to introduce students to real-world challenges with strict constraints on solutions, and last fall’s project came with plenty of thorny design questions for students to tackle. First was the size limitation posed by the space available at MIT’s reactor. In SANS facilities around the world, the average length of the instrument is 30 meters, but at NRL, the space available is approximately 7.5 meters. Second, these instruments can cost up to $30 million, which is far outside NRL’s proposed budget of $3 million. That meant not only did students need to design an instrument that would work in a smaller space, but also one that could be built for a tenth of the typical cost.
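A rough way to see why instrument length is such a hard constraint: in a pinhole SANS geometry, the smallest measurable scattering angle is roughly the beamstop radius divided by the sample-to-detector distance, which sets the minimum momentum transfer q_min and hence the largest structure size the instrument can resolve (d_max ≈ 2π/q_min). The sketch below uses entirely hypothetical dimensions and is not the students’ design:

```python
import math

# Rough, illustrative estimate (all dimensions hypothetical): the smallest
# scattering angle a pinhole SANS instrument resolves is set by the
# beamstop radius over the sample-to-detector distance. A shorter
# instrument therefore reaches a larger minimum momentum transfer q_min,
# and so a smaller maximum resolvable structure size d_max ~ 2*pi/q_min.

def q_min(wavelength_nm, detector_distance_m, beamstop_radius_m):
    theta_min = beamstop_radius_m / detector_distance_m   # small-angle approx, rad
    return (4 * math.pi / wavelength_nm) * math.sin(theta_min / 2)  # nm^-1

for L in (15.0, 4.5):  # hypothetical sample-to-detector distances, meters
    q = q_min(wavelength_nm=0.6, detector_distance_m=L, beamstop_radius_m=0.02)
    print(f"L = {L:4.1f} m  ->  q_min = {q:.2e} nm^-1, d_max ~ {2 * math.pi / q:.0f} nm")
```

Since q_min scales inversely with flight path, shrinking the instrument raises q_min proportionally and gives up access to the largest structures; part of the design problem is deciding which experiments still fit in the reduced q-range.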

    “The challenge was not just implementing one of these instruments,” Hartwig says. “It was whether the students could significantly innovate beyond the ‘traditional’ approach to doing SANS to meet the daunting constraints that we have at the MIT Reactor.”

    Because NRL actually wants to pursue this project, the students had to get creative, and their creative potential was precisely why the idea arose to get them involved, says Jacopo Buongiorno, the director of science and technology at NRL and Tokyo Electric Power Company Professor in Nuclear Engineering. “Involvement in real-world projects that answer questions about feasibility and cost of new technology and capabilities is a key element of a successful undergraduate education at MIT,” Buongiorno says.

    Students say it would have been impossible to tackle the problem without the help of co-instructor Boris Khaykovich, a research scientist at NRL who specializes in neutron instrumentation.

    Over the past two decades, Khaykovich has watched SANS become the most popular technique for analyzing material structure. As competition for beam time at the few existing facilities intensified, access declined; today only the experiments passing the most stringent review get beam time. What Khaykovich hopes to bring to MIT is improved access to SANS: an instrument suitable for the majority of run-of-the-mill experiments, even if it is not as powerful as the state-of-the-art national SANS facilities. Such an instrument could still serve a wide range of researchers who currently have few opportunities to pursue SANS experiments.

    “In the U.S., we don’t have a simple, small, day-to-day SANS instrument,” Khaykovich says.

    With Khaykovich’s help, nuclear engineering undergraduate student Liam Hines says, his team was able to go much further with their assessment than they would have starting from scratch with no background in SANS. The project was unlike anything they had been asked to do as MIT students, and for students like Hines, who contributed to NRL research throughout his time on campus, it hit close to home. “We were imagining this thing that might be designed at MIT,” Hines says.

    Fisher and Hines were joined by undergraduate nuclear engineering student team members Francisco Arellano, Jovier Jimenez, and Brendan Vaughan. Together, they devised a design that surprised both Khaykovich and Hartwig, identifying creative solutions that overcame all limitations and significantly reduced cost.

    Their team’s final project featured an adaptation of a conical design that had recently been tested experimentally in Japan but is not in general use. The conical geometry allowed them to maximize precision while working within the other constraints, resulting in an instrument design that exceeded Hartwig’s expectations. The students also showed the feasibility of calibrating the scattering data with an alternative type of low-cost, glass-based neutron detector. By avoiding the need for a traditional detector based on helium-3, which is increasingly scarce and exorbitantly expensive, such a detector would dramatically reduce cost and increase availability. Their final presentation indicated the day-to-day SANS instrument could be built at only 4.5 meters long, with an estimated cost of less than $1 million.

    Khaykovich credited the students for their enthusiasm, bouncing ideas off each other and exploring as much terrain as possible by interviewing experts who implemented SANS at other facilities. “They showed quite a perseverance and an ability to go deep into a very unfamiliar territory for them,” Khaykovich says.

    Hines says that Hartwig emphasized the importance of fielding expert opinions to more quickly discover optimal solutions. Fisher says that based on their research, if their design is funded, it would make SANS “more accessible to research for the sake of knowledge,” rather than dominated by industry research.

    Hartwig and Khaykovich agreed the students’ final project results showed a baseline of how MIT could pursue SANS technology cheaply, and when NRL proceeds with its own design process, Hartwig says, “The students’ work might actually change the cost of the feasibility of this at MIT in a way that if we hadn’t run the class, we would never have thought about doing.”

    Buongiorno says as they move forward with the project, NRL staff will consult students’ findings.

    “Indeed, the students developed original technical approaches, which are now being further explored by the NRL staff and may ultimately lead to the deployment of this new important capability on the MIT campus,” Buongiorno says.

    Hartwig says it’s a goal of the Nuclear Systems Design Project course to empower students to learn how to lead teams and embrace challenges, so they can be effective leaders advancing novel solutions in research and industry. “I think it helps teach people to be agile, to be flexible, to have confidence that they can actually go off and learn what they don’t know and solve problems they may think are bigger than themselves,” he says.

    It’s common for past classes of Nuclear Systems Design Project students to continue working on ideas beyond the course, and some students have even launched companies from their project research. What’s less common is for Hartwig’s students to actively serve as engineers pointed to a particular campus problem that’s expected to be resolved in the next few years.

    “In this case, they’re actually working on something real,” Hartwig says. “Their ideas are going to very much influence what we hope will be a facility that gets built at the reactor.”

    For students, it was exciting to inform a major instrument proposal that will soon be submitted to federal funding agencies, and for Hines, it became a chance to make his mark at NRL.

    “This is a lab I’ve been contributing to my entire time at MIT, and then through this project, I finished my time at MIT contributing in a much larger sense,” Hines says.


    Evan Leppink: Seeking a way to better stabilize the fusion environment

    “Fusion energy was always one of those kind-of sci-fi technologies that you read about,” says nuclear science and engineering PhD candidate Evan Leppink. He’s recalling the time before fusion became a part of his daily hands-on experience at MIT’s Plasma Science and Fusion Center, where he is studying a unique way to drive current in a tokamak plasma using radiofrequency (RF) waves. 

    Now, an award from the U.S. Department of Energy’s (DOE) Office of Science Graduate Student Research (SCGSR) Program will support his work with a 12-month residency at the DIII-D National Fusion Facility in San Diego, California.

    Like all tokamaks, DIII-D generates hot plasma inside a doughnut-shaped vacuum chamber wrapped with magnets. Because plasma will follow magnetic field lines, tokamaks are able to contain the turbulent plasma fuel as it gets hotter and denser, keeping it away from the edges of the chamber where it could damage the wall materials. A key part of the tokamak concept is that part of the magnetic field is created by electrical currents in the plasma itself, which helps to confine and stabilize the configuration. Researchers often launch high-power RF waves into tokamaks to drive that current.

    Leppink will be contributing to research, led by his MIT advisor Steve Wukitch, that pursues launching RF waves in DIII-D using a unique compact antenna placed on the tokamak center column. Typically, antennas are placed inside the tokamak on the outer edge of the doughnut, farthest from the central hole (or column), primarily because access and installation are easier there. This is known as the “low-field side,” because the magnetic field is lower there than at the central column, the “high-field side.” This MIT-led experiment, for the first time, will mount an antenna on the high-field side. There is some theoretical evidence that placing the wave launcher there could improve power penetration and current drive efficiency. And because the plasma environment is less harsh on this side, the antenna will survive longer, a factor important for any future power-producing tokamak.

    Leppink’s work on DIII-D focuses specifically on measuring the density of plasmas generated in the tokamak, for which he developed a “reflectometer.” This small antenna launches microwaves into the plasma, which reflect back to the antenna to be measured. The time that it takes for these microwaves to traverse the plasma provides information about the plasma density, allowing researchers to build up detailed density profiles, data critical for injecting RF power into the plasma.

    “Research shows that when we try to inject these waves into the plasma to drive the current, they can lose power as they travel through the edge region of the tokamak, and can even have problems entering the core of the plasma, where we would most like to direct them,” says Leppink. “My diagnostic will measure that edge region on the high-field side near the launcher in great detail, which provides us a way to directly verify calculations or compare actual results with simulation results.”
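The measurement principle is the standard one for O-mode reflectometry: a launched wave of frequency f is reflected at the layer where the local plasma frequency equals f, that is, at the cutoff density n_c = ε0·m_e·(2πf)²/e². Sweeping the frequency therefore maps reflection delays onto a density profile. A minimal sketch, with hypothetical probing frequencies rather than the actual DIII-D diagnostic parameters:

```python
import math

# Standard O-mode reflectometry relation (probing frequencies below are
# hypothetical, not the actual diagnostic's): a launched microwave of
# frequency f reflects at the layer where the local plasma frequency
# equals f, i.e. at the cutoff density
#     n_c = eps0 * m_e * (2*pi*f)**2 / e**2.

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
M_E = 9.1093837015e-31    # electron mass, kg
E = 1.602176634e-19       # elementary charge, C

def cutoff_density(f_hz):
    """Electron density (m^-3) at which an O-mode wave of frequency f reflects."""
    return EPS0 * M_E * (2 * math.pi * f_hz) ** 2 / E ** 2

for f_ghz in (30, 50, 75):  # hypothetical probing frequencies, GHz
    print(f"{f_ghz} GHz reflects at n_e ~ {cutoff_density(f_ghz * 1e9):.2e} m^-3")
```

A 30 GHz wave, for instance, turns back near n_e ≈ 1.1 × 10¹⁹ m⁻³, roughly the density scale of a tokamak edge plasma.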

    Although focused on his own research, Leppink has excelled at priming other students for success in their studies and research. In 2021 he received the NSE Outstanding Teaching Assistant and Mentorship Award.

    “The highlights of TA’ing for me were the times when I could watch students go from struggling with a difficult topic to fully understanding it, often with just a nudge in the right direction and then allowing them to follow their own intuition the rest of the way,” he says.

    The right direction for Leppink points toward San Diego and RF current drive experiments on DIII-D. He is grateful for the support from the SCGSR, a program created to prepare graduate students like him for science, technology, engineering, or mathematics careers important to the DOE Office of Science mission. It provides graduate thesis research opportunities through extended residency at DOE national laboratories. He has already made several trips to DIII-D, in part to install his reflectometer, and has been impressed with the size of the operation.

    “It takes a little while to kind of compartmentalize everything and say, ‘OK, well, here’s my part of the machine. This is what I’m doing.’ It can definitely be overwhelming at times. But I’m blessed to be able to work on what has been the workhorse tokamak of the United States for the past few decades.”


    How the universe got its magnetic field

    When we look out into space, all of the astrophysical objects that we see are embedded in magnetic fields. This is true not only in the neighborhood of stars and planets, but also in the deep space between galaxies and galactic clusters. These fields are weak, typically far weaker than that of a refrigerator magnet, but they are dynamically significant: they have profound effects on the motions of the gas and plasma that pervade the universe. Despite decades of intense interest and research, the origin of these cosmic magnetic fields remains one of the most profound mysteries in cosmology.

    In previous research, scientists came to understand how turbulence, the churning motion common to fluids of all types, could amplify preexisting magnetic fields through the so-called dynamo process. But this remarkable discovery just pushed the mystery one step deeper. If a turbulent dynamo could only amplify an existing field, where did the “seed” magnetic field come from in the first place?

    We wouldn’t have a complete and self-consistent answer to the origin of astrophysical magnetic fields until we understood how the seed fields arose. New work carried out by MIT graduate student Muni Zhou, her advisor Nuno Loureiro, a professor of nuclear science and engineering at MIT, and colleagues at Princeton University and the University of Colorado at Boulder provides an answer that shows the basic processes that generate a field from a completely unmagnetized state to the point where it is strong enough for the dynamo mechanism to take over and amplify the field to the magnitudes that we observe.

    Magnetic fields are everywhere

    Naturally occurring magnetic fields are seen everywhere in the universe. They were first observed on Earth thousands of years ago, through their interaction with magnetized minerals like lodestone, and used for navigation long before people had any understanding of their nature or origin. Magnetism on the sun was discovered at the beginning of the 20th century through its effects on the spectrum of the light the sun emits. Since then, ever more powerful telescopes looking deep into space have found that magnetic fields are ubiquitous.

    And while scientists had long learned how to make and use permanent magnets and electromagnets, which had all sorts of practical applications, the natural origins of magnetic fields in the universe remained a mystery. Recent work has provided part of the answer, but many aspects of this question are still under debate.

    Amplifying magnetic fields — the dynamo effect

    Scientists started thinking about this problem by considering the way that electric and magnetic fields were produced in the laboratory. When conductors, like copper wire, move in magnetic fields, electric fields are created. These fields, or voltages, can then drive electrical currents. This is how the electricity that we use every day is produced. Through this process of induction, large generators or “dynamos” convert mechanical energy into the electromagnetic energy that powers our homes and offices. A key feature of dynamos is that they need magnetic fields in order to work.

    But out in the universe, there are no obvious wires or big steel structures, so how do the fields arise? Progress on this problem began about a century ago as scientists pondered the source of the Earth’s magnetic field. By then, studies of the propagation of seismic waves showed that much of the Earth, below the cooler surface layers of the mantle, was liquid, and that there was a core composed of molten nickel and iron. Researchers theorized that the convective motion of this hot, electrically conductive liquid and the rotation of the Earth combined in some way to generate the Earth’s field.

    Eventually, models emerged that showed how the convective motion could amplify an existing field. This is an example of “self-organization” — a feature often seen in complex dynamical systems — where large-scale structures grow spontaneously from small-scale dynamics. But just like in a power station, you needed a magnetic field to make a magnetic field.
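The chicken-and-egg nature of the dynamo is easy to see in its simplest kinematic caricature, where the field grows exponentially, dB/dt = γB: any nonzero seed is amplified rapidly, but a field that starts at exactly zero stays zero. The numbers below are made up purely to show the behavior:

```python
import math

# Toy illustration of the kinematic dynamo's key property (growth rate
# and times are made-up numbers): turbulence amplifies an existing field
# exponentially, dB/dt = gamma * B, so B(t) = B0 * exp(gamma * t).
# A tiny seed grows enormously, but a strictly zero seed stays zero.

def amplified_field(b_seed, gamma, t):
    return b_seed * math.exp(gamma * t)

gamma = 2.0   # hypothetical dynamo growth rate, inverse time units
t = 10.0      # elapsed time, same units

print(amplified_field(1e-20, gamma, t))  # tiny seed -> amplified by exp(20)
print(amplified_field(0.0, gamma, t))    # zero seed -> zero field, forever
```

This is why a separate mechanism to generate the seed field is needed before the dynamo can close the loop.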

    A similar process is at work all over the universe. However, in stars and galaxies and in the space between them, the electrically conducting fluid is not molten metal, but plasma — a state of matter that exists at extremely high temperatures where the electrons are ripped away from their atoms. On Earth, plasmas can be seen in lightning or neon lights. In such a medium, the dynamo effect can amplify an existing magnetic field, provided it starts at some minimal level.

    Making the first magnetic fields

    Where does this seed field come from? That’s where the recent work of Zhou and her colleagues, published May 5 in PNAS, comes in. Zhou developed the underlying theory and performed numerical simulations on powerful supercomputers that show how the seed field can be produced and what fundamental processes are at work. An important aspect of the plasma that exists between stars and galaxies is that it is extraordinarily diffuse — typically about one particle per cubic meter. That is a very different situation from the interior of stars, where the particle density is about 30 orders of magnitude higher. The low densities mean that the particles in cosmological plasmas never collide, which has important effects on their behavior that had to be included in the model that these researchers were developing.   

    Calculations performed by the MIT researchers followed the dynamics in these plasmas, which developed from well-ordered waves but became turbulent as the amplitude grew and the interactions became strongly nonlinear. By including detailed effects of the plasma dynamics at small scales on macroscopic astrophysical processes, they demonstrated that the first magnetic fields can be spontaneously produced through generic large-scale motions as simple as sheared flows. Just like the terrestrial examples, mechanical energy was converted into magnetic energy.

    An important output of their computation was the amplitude of the expected spontaneously generated magnetic field. What this showed was that the field amplitude could rise from zero to a level where the plasma is “magnetized” — that is, where the plasma dynamics are strongly affected by the presence of the field. At this point, the traditional dynamo mechanism can take over and raise the fields to the levels that are observed. Thus, their work represents a self-consistent model for the generation of magnetic fields at cosmological scale.

    Professor Ellen Zweibel of the University of Wisconsin at Madison notes that “despite decades of remarkable progress in cosmology, the origin of magnetic fields in the universe remains unknown. It is wonderful to see state-of-the-art plasma physics theory and numerical simulation brought to bear on this fundamental problem.”

    Zhou and co-workers will continue to refine their model and study the handoff from the generation of the seed field to the amplification phase of the dynamo. An important part of their future research will be to determine if the process can work on a time scale consistent with astronomical observations. To quote the researchers, “This work provides the first step in the building of a new paradigm for understanding magnetogenesis in the universe.”

    This work was funded by the National Science Foundation CAREER Award and the Future Investigators of NASA Earth and Space Science Technology (FINESST) grant.