More stories

  • Finding the questions that guide MIT fusion research

    “One of the things I learned was, doing good science isn’t so much about finding the answers as figuring out what the important questions are.”

    As Martin Greenwald retires from his roles as senior scientist and deputy director of the MIT Plasma Science and Fusion Center (PSFC), he reflects on nearly 50 years in science, 43 of them as a researcher at MIT, pursuing the question of how to make the carbon-free energy of fusion a reality.

    Greenwald’s most important questions about fusion took shape after he graduated from MIT with a BS in both physics and chemistry. Beginning graduate work at the University of California at Berkeley, he felt compelled to learn more about fusion as an energy source that could have “a real societal impact.” At the time, researchers were exploring new ideas for devices that could create and confine fusion plasmas. Greenwald worked on Berkeley’s “alternate concept” TORMAC, a Toroidal Magnetic Cusp. “It didn’t work out very well,” he laughs. “The first thing I was known for was making the measurements that shut down the program.”

    Believing the temperature of the plasma generated by the device would not be as high as his group leader expected, Greenwald developed hardware that could measure the low temperatures predicted by his own “back of the envelope calculations.” As he anticipated, his measurements showed that “this was not a fusion plasma; this was hardly a confined plasma at all.”

    With a PhD from Berkeley, Greenwald returned to MIT for a research position at the PSFC, attracted by the center’s “esprit de corps.”

    He arrived in time to participate in the final experiments on Alcator A, the first in a series of tokamaks built at MIT, all characterized by compact size and featuring high-field magnets. The tokamak design was then becoming favored as the most effective route to fusion: its doughnut-shaped vacuum chamber, surrounded by electromagnets, could confine the turbulent plasma long enough, while increasing its heat and density, to make fusion occur.

    Alcator A showed that energy confinement time improves with increasing plasma density. MIT’s succeeding device, Alcator C, was designed to use higher magnetic fields, boosting expectations that it would reach higher densities and better confinement. To attain these goals, however, Greenwald had to pursue a new technique that increased density by injecting pellets of frozen fuel into the plasma, a method he likens to throwing “snowballs in hell.” This work was notable for the creation of a new regime of enhanced plasma confinement on Alcator C. In those experiments, a confined plasma surpassed for the first time one of the two Lawson criteria — the minimum required value for the product of the plasma density and confinement time — for making net power from fusion. The criteria had been a benchmark for fusion research since John Lawson published them in 1957.
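    The benchmark in question is the product of plasma density and energy confinement time. A minimal sketch of the comparison, where the deuterium-tritium threshold is a rough order-of-magnitude figure chosen for illustration, not the precise value the Alcator C team compared against:

```python
# Lawson product check: n * tau_E, density times energy confinement time.
# The D-T threshold below is a representative order-of-magnitude value
# (an assumption for illustration), not the exact experimental figure.
DT_LAWSON_THRESHOLD = 1.0e20  # s / m^3

def lawson_product(density_m3: float, confinement_s: float) -> float:
    """Product of plasma density (m^-3) and confinement time (s)."""
    return density_m3 * confinement_s

def surpasses_lawson(density_m3: float, confinement_s: float) -> bool:
    """True if the plasma exceeds the (approximate) Lawson product."""
    return lawson_product(density_m3, confinement_s) >= DT_LAWSON_THRESHOLD

# Hypothetical shots (numbers invented for illustration):
print(surpasses_lawson(1.5e21, 0.1))   # dense, well-confined -> True
print(surpasses_lawson(1.0e19, 1e-3))  # weakly confined -> False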

    Greenwald continued to make a name for himself as part of a larger study into the physics of the Compact Ignition Tokamak — a high-field burning plasma experiment that the U.S. program was proposing to build in the late 1980s. The result, unexpectedly, was a new scaling law, later known as the “Greenwald Density Limit,” and a new theory for the mechanism of the limit. It has been used to accurately predict performance on much larger machines built since.
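    The Greenwald density limit has a simple closed form: n_G = I_p / (πa²), where I_p is the plasma current in megaamperes, a is the tokamak’s minor radius in meters, and n_G comes out in units of 10²⁰ particles per cubic meter. A short sketch, with the sample current and radius chosen purely for illustration rather than taken from any published machine:

```python
import math

def greenwald_limit(plasma_current_MA: float, minor_radius_m: float) -> float:
    """Greenwald density limit n_G = I_p / (pi * a^2),
    in units of 1e20 particles per cubic meter."""
    return plasma_current_MA / (math.pi * minor_radius_m ** 2)

# Illustrative inputs only, not published machine parameters:
n_g = greenwald_limit(8.7, 0.57)
print(f"n_G = {n_g:.2f} x 1e20 m^-3")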

    The center’s next tokamak, Alcator C-Mod, started operation in 1993 and ran for more than 20 years, with Greenwald as the chair of its Experimental Program Committee. Larger than Alcator C, the new device supported a highly shaped plasma, strong radiofrequency heating, and an all-metal plasma-facing first wall. All of these would eventually be required in a fusion power system.

    C-Mod proved to be MIT’s most enduring fusion experiment to date, producing important results for 20 years. During that time Greenwald contributed not only to the experiments, but to mentoring the next generation. Research scientist Ryan Sweeney notes that “Martin quickly gained my trust as a mentor, in part due to his often casual dress and slightly untamed hair, which are embodiments of his transparency and his focus on what matters. He can quiet a room of PhDs and demand attention not by intimidation, but rather by his calmness and his ability to bring clarity to complicated problems, be they scientific or human in nature.”

    Greenwald worked closely with the group of students who, in PSFC Director Dennis Whyte’s class, came up with the tokamak concept that evolved into SPARC. MIT is now pursuing this compact, high-field tokamak with Commonwealth Fusion Systems, a startup that grew out of the collective enthusiasm for this concept, and the growing realization it could work. Greenwald now heads the Physics Group for the SPARC project at MIT. He has helped confirm the device’s physics basis in order to predict performance and guide engineering decisions.

    “Martin’s multifaceted talents are thoroughly embodied by, and imprinted on, SPARC,” says Whyte. “First, his leadership in its plasma confinement physics validation and publication place SPARC on a firm scientific footing. Secondly, the impact of the density limit he discovered, which shows that fuel density increases with magnetic field and decreasing the size of the tokamak, is critical in obtaining high fusion power density not just in SPARC, but in future power plants. Third, and perhaps most impressive, is Martin’s mentorship of the SPARC generation of leadership.”

    Greenwald’s expertise and easygoing personality have made him an asset as head of the PSFC Office for Computer Services and group leader for data acquisition and computing, and a sought-after member of many professional committees. He has been an APS Fellow since 2000, and was an APS Distinguished Lecturer in Plasma Physics (2001-02). In 2014 he received a Leadership Award from Fusion Power Associates. He is currently an associate editor for Physics of Plasmas and a member of the Lawrence Livermore National Laboratory Physical Sciences Directorate External Review Committee.

    Although he is stepping back from his full-time responsibilities, Greenwald will remain at MIT as a visiting scientist, a role he says will allow him to “stick my nose into everything without being responsible for anything.”

    “At some point in the race you have to hand off the baton,” he says. “And it doesn’t mean you’re not interested in the outcome; and it doesn’t mean you’re just going to walk away into the stands. I want to be there at the end when we succeed.”

  • Leveraging science and technology against the world’s top problems

    Looking back on nearly a half-century at MIT, Richard K. Lester, associate provost and Japan Steel Industry Professor, sees a “somewhat eccentric professional trajectory.”

    But while his path has been irregular, there has been a clearly defined through line, Lester says: the emergence of new science and new technologies, the potential of these developments to shake up the status quo and address some of society’s most consequential problems, and what the outcomes might mean for America’s place in the world.

    Perhaps no assignment in Lester’s portfolio better captures this theme than the new MIT Climate Grand Challenges competition. Spearheaded by Lester and Maria Zuber, MIT vice president for research, and launched at the height of the pandemic in summer 2020, this initiative is designed to mobilize the entire MIT research community around tackling “the really hard, challenging problems currently standing in the way of an effective global response to the climate emergency,” says Lester. “The focus is on those problems where progress requires developing and applying frontier knowledge in the natural and social sciences and cutting-edge technologies. This is the MIT community swinging for the fences in areas where we have a comparative advantage.”

    This is a passion project for him, not least because it has engaged colleagues from nearly all of MIT’s departments. After nearly 100 initial ideas were submitted by more than 300 faculty, 27 teams were named finalists and received funding to develop comprehensive research and innovation plans in such areas as decarbonizing complex industries; risk forecasting and adaptation; advancing climate equity; and carbon removal, management, and storage. In April, a small subset of this group will become multiyear flagship projects, augmenting the work of existing MIT units that are pursuing climate research.

    Lester is sunny in the face of these extraordinarily complex problems. “This is a bottom-up effort with exciting proposals, and where the Institute is collectively committed — it’s MIT at its best.”

    Nuclear to the core

    This initiative carries a particular resonance for Lester, who remains deeply engaged in nuclear engineering. “The role of nuclear energy is central and will need to become even more central if we’re to succeed in addressing the climate challenge,” he says. He also acknowledges that for nuclear energy technologies — both fission and fusion — to play a vital role in decarbonizing the economy, they must not just win “in the court of public opinion, but in the marketplace,” he says. “Over the years, my research has sought to elucidate what needs to be done to overcome these obstacles.”

    In fact, Lester has been campaigning for much of his career for a U.S. nuclear innovation agenda, a commitment that takes on increased urgency as the contours of the climate crisis sharpen. He argues for the rapid development and testing of nuclear technologies that can complement the renewable but intermittent energy sources of sun and wind. Whether powerful, large-scale, molten-salt-cooled reactors or small, modular, light water reactors, nuclear batteries or promising new fusion projects, U.S. energy policy must embrace nuclear innovation, says Lester, or risk losing the high-stakes race for a sustainable future.

    Chancing into a discipline

    Lester’s introduction to nuclear science was pure happenstance.

    Born in the English industrial city of Leeds, he grew up in a musical family and played piano, violin, and then viola. “It was a big part of my life,” he says, and for a time, music beckoned as a career. He tumbled into a chemical engineering concentration at Imperial College, London, after taking a job in a chemical factory following high school. “There’s a certain randomness to life, and in my case, it’s reflected in my choice of major, which had a very large impact on my ultimate career.”

    In his second year, Lester talked his way into running a small experiment in the university’s research reactor, on radiation effects in materials. “I got hooked, and began thinking of studying nuclear engineering.” But there were few graduate programs in British universities at the time. Then serendipity struck again. The instructor of Lester’s single humanities course at Imperial had previously taught at MIT, and suggested Lester take a look at the nuclear program there. “I will always be grateful to him (and, indirectly, to MIT’s Humanities program) for opening my eyes to the existence of this institution where I’ve spent my whole adult life,” says Lester.

    He arrived at MIT with the notion of mitigating the harms of nuclear weapons. It was a time when the nuclear arms race “was an existential threat in everyone’s life,” he recalls. He targeted his graduate studies on nuclear proliferation. But he also encountered an electrifying study by MIT meteorologist Jule Charney. “Professor Charney produced one of the first scientific assessments of the effects on climate of increasing CO2 concentrations in the atmosphere, with quantitative estimates that have not fundamentally changed in 40 years.”

    Lester shifted directions. “I came to MIT to work on nuclear security, but stayed in the nuclear field because of the contributions that it can and must make in addressing climate change,” he says.

    Research and policy

    His path forward, Lester believed, would involve applying his science and technology expertise to critical policy problems, grounded in immediate, real-world concerns, and aiming for broad policy impacts. Even as a member of the Department of Nuclear Science and Engineering (NSE), he joined with colleagues from many MIT departments to study American industrial practices and what was required to make them globally competitive, and then founded MIT’s Industrial Performance Center (IPC). His work at the IPC with interdisciplinary teams of faculty and students on the sources of productivity and innovation took him to many countries at different stages of industrialization, including China, Taiwan, Japan, and Brazil.

    Lester’s wide-ranging work yielded books (including the MIT Press bestseller “Made in America”), advisory positions with governments, corporations, and foundations, and unexpected collaborations. “My interests were always fairly broad, and being at MIT made it possible to team up with world-leading scholars and extraordinary students not just in nuclear engineering, but in many other fields such as political science, economics, and management,” he says.

    Forging cross-disciplinary ties and bringing creative people together around a common goal proved a valuable skill as Lester stepped into positions of ever-greater responsibility at the Institute. He didn’t exactly relish the prospect of a desk job, though. “I religiously avoided administrative roles until I felt I couldn’t keep avoiding them,” he says.

    Today, as associate provost, he tends to MIT’s international activities — a daunting task given increasing scrutiny of research universities’ globe-spanning research partnerships and education of foreign students. But even in the midst of these consuming chores, Lester remains devoted to his home department. “Being a nuclear engineer is a central part of my identity,” he says.

    To students entering the nuclear field nearly 50 years after he did, who are understandably “eager to fix everything that seems wrong immediately,” he has a message: “Be patient. The hard things, the ones that are really worth doing, will take a long time to do.” Putting the climate crisis behind us will take two generations, Lester believes. Current students will start the job, but it will also take the efforts of their children’s generation before it is done. “So we need you to be energetic and creative, of course, but whatever you do we also need you to be patient and to have ‘stick-to-itiveness’ — and maybe also a moral compass that our generation has lacked.”

  • MIT Energy Conference focuses on climate’s toughest challenges

    This year’s MIT Energy Conference, the largest student-led event of its kind, included keynote talks and panels that tackled some of the thorniest remaining challenges in the global effort to cut back on climate-altering emissions. These include the production of construction materials such as steel and cement, and the role of transportation including aviation and shipping. While the challenges are formidable, approaches incorporating methods such as fusion, heat pumps, energy efficiency, and the use of hydrogen hold promise, participants said.

    The two-day conference, held on March 31 and April 1 for more than 900 participants, included keynote lectures, 14 panel discussions, a fireside chat, networking events, and more. The event this year included the final round of the annual MIT Climate and Energy Prize, whose winning team receives $100,000 and other support. The prize, awarded since 2007, has led to the creation of more than 220 companies and $1.1 billion in investments.

    This year’s winner is a project that hopes to provide an innovative, efficient waterless washing machine aimed at the vast majority of the world’s people, who still do laundry by hand.

    “A truly consequential moment in history”

    In his opening keynote address Fatih Birol, executive director of the International Energy Agency, noted that this year’s conference was taking place during the unprovoked invasion of Ukraine by Russia, a leading gas and oil exporter. As a result, “global oil markets are going through a major turmoil,” he said.

    He said that Russian oil exports are expected to drop by 3 million barrels a day, and that international efforts to release reserves and promote increased production elsewhere will help, but will not suffice. “We have to look to other measures” to make up the shortfall, he said, noting that his agency has produced a 10-point plan of measures to help reduce global demand for oil.

    Europe gets 45 percent of its natural gas from Russia, and the agency also has developed a 10-point plan to help alleviate expected shortages there, including measures to improve energy efficiency in homes and industries, promote renewable heating sources, and postpone retirement of some nuclear plants. But he emphasized that “our goals to reach our climate targets should not be yet another victim of Mr. Putin and his allies.”  Unfortunately, Birol said, “I see that addressing climate change is sliding down in the policy agenda of many governments.”

    But he sees reasons for optimism as well, in terms of the feasibility of achieving the global emissions reduction target, agreed to by countries representing 80 percent of the global economy, of reaching net zero carbon dioxide emissions by 2050. The IEA has developed a roadmap for the entire energy sector to get there, which is now used by many governments as a benchmark, according to Birol.

    In addition, the trend is already clear, he said. “More than 90 percent of all power plants installed in the world [last year] were renewable energy,” mainly solar and wind. And 10 percent of cars sold worldwide last year, and 20 percent in Europe, were electric cars. “Please remember that in 2019 it was only 2 percent!” he said. He also predicted that “nuclear is going to make a comeback in many countries,” both in terms of large plants and newer small modular reactors.

    Birol said that “I hope that the current crisis gives governments the impetus to address the energy security concerns, to reach our climate goals, and … [to] choose the right direction at this very important turning point.”

    The conference’s second day began with keynote talks by Gina McCarthy, national climate advisor at the White House Office of Domestic Climate Policy, and Maria Zuber, MIT’s vice president for research. In her address, Zuber said, “This conference comes at a truly consequential moment in history — a moment that puts into stark relief the enormous risks created by our current fossil-fuel based energy system — risks we cannot continue to accept.”

    She added that “time is not on our side.” To meet global commitments for limiting climate impacts, the world needs to reduce emissions by about half by 2030, and get to net zero by 2050. “In other words, we need to transform our entire global energy system in a few decades,” she said. She cited MIT’s “Fast Forward” climate action plan, issued last year, as presenting the two tracks that the world needs to pursue simultaneously: going as far as possible, as fast as possible, with the tools that exist now, while also innovating and investing in new ideas, technologies, practices, and institutions that may be needed to reach the net-zero goal.

    On the first track, she said, citing an IEA report, “from here until 2040, we can get most of the emissions reductions we need with technologies that are currently available or on the verge of becoming commercially available.” These include electrifying and boosting efficiency in buildings, industry, and transportation; increasing the portion of electricity coming from emissions-free sources; and investing in new infrastructure such as electric vehicle charging stations.

    But more than that is needed, she pointed out. For example, the amount of methane that leaks away into the atmosphere from fossil fuel operations is equivalent to all the natural gas used in Europe’s power sector, Zuber said. Recovering and selling that methane can dramatically reduce global methane emissions, often at little or no cost.

    For the longer run, “we need track-two solutions to decarbonize tough industries like aviation, shipping, chemicals, concrete, and steel,” and to remove carbon dioxide from the atmosphere. She described some of the promising technologies that are in the pipeline. Fusion, for example, has moved from being a scientific challenge to an engineering problem whose solution seems well underway, she said.

    Another important area is food-related systems, which currently account for a third of all global emissions. For example, fertilizer production uses a very energy-intensive process, but work on plants engineered to fix nitrogen directly could make a significant dent.

    These and several other advanced research areas may not all pan out, but some undoubtedly will, and will help curb climate change as well as create new jobs and reduce pollution.

    Though the problems we face are complex, they are not insurmountable, Zuber said. “We don’t need a miracle. What we need is to move along the two tracks I’ve outlined with determination, ingenuity, and fierce urgency.”

    The promise and challenges of hydrogen

    Other conference speakers took on some of the less-discussed but crucial areas that also need to be addressed in order to limit global warming to 1.5 degrees. Heavy transportation, and aviation in particular, have been considered especially challenging. In his keynote address, Glenn Llewellyn, vice president for zero-emission aircraft at Airbus, outlined several approaches his company is working on to develop competitive midrange alternative airliners by 2035 that use either batteries or fuel cells powered by hydrogen. The early-stage designs demonstrate that, contrary to some projections, there is a realistic pathway to weaning that industry from its present reliance on fossil fuel, chiefly kerosene.

    Hydrogen has real potential as an aviation fuel, he said: used directly, either in fuel cells to generate power or burned in jet engines for propulsion, or indirectly as a feedstock for synthetic fuels. Both approaches are being studied by the company, including a hybrid model that uses both hydrogen fuel cells and hydrogen-fueled jet engines. The company projects a range of 2,000 nautical miles for a jet carrying 200 to 300 passengers, he said — all with no direct emissions and no contrails.

    But this vision will not be practical, Llewellyn said, unless economies of scale help to significantly lower the cost of hydrogen production. “Hydrogen is at the hub of aviation decarbonization,” he said. But that kind of price reduction seems quite feasible, he said, given that other major industries are also seriously looking at the use of hydrogen for their own decarbonization plans, including the production of steel and cement.

    Such uses were the subject of a panel discussion entitled “Deploying the Hydrogen Economy.” Hydrogen production technology exists, but not nearly at the scale that’s needed, which is about 500 million tons a year, pointed out moderator Dharik Mallapragada of the MIT Energy Initiative.

    Yet in some applications, the use of hydrogen both reduces emissions and is economically competitive. Preeti Pande of Plug Power said that her company, which produces hydrogen fuel cells, has found a significant market in an unexpected place: forklifts, used in warehouses and factories worldwide. It turns out that replacing current battery-operated versions with fuel cell versions is a win-win for the companies that use them, saving money while helping to meet decarbonization goals.

    Lindsay Ashby of Avangrid Renewables said that the company has installed fuel-cell buses in Barcelona that run entirely on hydrogen generated by solar panels. The company is also building a 100-megawatt solar facility to produce hydrogen for the production of fertilizer, another major industry in need of decarbonization because of its large emissions footprint. And Brett Perleman of the Center for Houston’s Future said of his city that “we’re already a hydrogen hub today, just not green hydrogen” since the gas is currently mostly produced as a byproduct of fossil fuels. But that is changing rapidly, he said, and Houston, along with several other cities, aims to be a center of activity for hydrogen produced from renewable, non-carbon-emitting sources. They aim to be producing 1,000 tons a day by 2028, “and I think we’ll end up exceeding that,” he said.

    For industries that can switch to renewably generated electricity, that is typically the best choice, Perleman said. “But for those that can’t, hydrogen is a great option,” and that includes aviation, shipping, and rail. “The big oil companies all have plans in place” to develop clean hydrogen production, he said. “It’s not just a dream, but a reality.”

    For shipping, which tends to rely on bunker fuel, a particularly high-emissions fossil fuel, another potential option could be a new generation of small nuclear plants, said Jeff Navin of Terrapower, a company currently developing such units. “Finding replacements for coal, oil, or natural gas for industrial purposes is very hard,” he said, but often what these processes require is consistent high heat, which nuclear can deliver, as long as costs and regulatory issues can be resolved.  

    MIT professor of nuclear engineering Jacopo Buongiorno pointed out that the primary reasons for delays and cost overruns in nuclear plants have had to do with issues at the construction site, many of which could be alleviated by having smaller, factory-built modular plants, or by building multiple units at a time of a standardized design. If the government would take on the nuclear waste disposal, as some other countries have done, then nuclear power could play an important part in the decarbonization of many industries, he said.

    Student-led startups

    The two-day conference concluded with the final round of the annual MIT Climate and Energy Prize, consisting of the five finalist teams presenting brief pitches for their startup company ideas, followed by questions from the panel of judges. This year’s finalists included a team called Muket, dedicated to finding ways of reducing methane emissions from cattle and dairy farms. Feed additives or other measures could cut the emissions by 50 percent, the team estimates.

    A team called Ivu Biologics described a system for incorporating nitrogen-fixing microbes into the coatings of seeds, thereby reducing the need for added fertilizers, whose production is a major greenhouse gas source. The company is making use of seed-coating technology developed at MIT over the last few years. Another team, called Mesophase, also based on MIT-developed technology, aims to replace the condensers used in power plants and other industrial systems with much more efficient versions, thus increasing the energy output from a given amount of fuel or other heat source.

    A team called TerraTrade aims to facilitate the adoption of power purchase agreements by companies, institutions, and governments, acting as a kind of broker that creates and administers such agreements. That would make it easier for even smaller entities to take part in these plans, which help enable the rapid development of renewable, carbon-free energy production.

    The grand prize of $100,000 was awarded to a team called Ultropia, which is developing a combined clothes washer and dryer that uses ultrasound instead of water for its cleaning. The system does use a small amount of water, but that water can be recycled, making the machines usable even in areas where water availability is limited. The devices could have a great impact on the estimated 6 billion people in the world today who are still limited to washing clothes by hand, the team says, and because the machines would be so efficient, they would require far less energy to run than conventional washers and dryers.

  • Chemical reactions for the energy transition

    One challenge in decarbonizing the energy system is knowing how to deal with new types of fuels. Traditional fuels such as natural gas and oil can be combined with other materials and then heated to high temperatures so they chemically react to produce other useful fuels or substances, or even energy to do work. But new materials such as biofuels can’t take as much heat without breaking down.

    A key ingredient in such chemical reactions is a specially designed solid catalyst that is added to encourage the reaction to happen but isn’t itself consumed in the process. With traditional materials, the solid catalyst typically interacts with a gas; but with fuels derived from biomass, for example, the catalyst must work with a liquid — a special challenge for those who design catalysts.

    For nearly a decade, Yogesh Surendranath, an associate professor of chemistry at MIT, has been focusing on chemical reactions between solid catalysts and liquids, but in a different situation: rather than using heat to drive reactions, he and his team input electricity from a battery or a renewable source such as wind or solar to give chemically inactive molecules more energy so they react. And key to their research is designing and fabricating solid catalysts that work well for reactions involving liquids.

    Recognizing the need to use biomass to develop sustainable liquid fuels, Surendranath wondered whether he and his team could take the principles they have learned about designing catalysts to drive liquid-solid reactions with electricity and apply them to reactions that occur at liquid-solid interfaces without any input of electricity.

    To their surprise, they found that their knowledge is directly relevant. Why? “What we found — amazingly — is that even when you don’t hook up wires to your catalyst, there are tiny internal ‘wires’ that do the reaction,” says Surendranath. “So, reactions that people generally think operate without any flow of current actually do involve electrons shuttling from one place to another.” And that means that Surendranath and his team can bring the powerful techniques of electrochemistry to bear on the problem of designing catalysts for sustainable fuels.

    A novel hypothesis

    Their work has focused on a class of chemical reactions important in the energy transition that involve adding oxygen to small organic (carbon-containing) molecules such as ethanol, methanol, and formic acid. The conventional assumption is that the reactant and oxygen chemically react to form the product plus water. And a solid catalyst — often a combination of metals — is present to provide sites on which the reactant and oxygen can interact.

    But Surendranath proposed a different view of what’s going on. In the usual setup, two catalysts, each one composed of many nanoparticles, are mounted on a conductive carbon substrate and submerged in water. In that arrangement, negatively charged electrons can flow easily through the carbon, while positively charged protons can flow easily through water.

    Surendranath’s hypothesis was that the conversion of reactant to product progresses by means of two separate “half-reactions” on the two catalysts. On one catalyst, the reactant turns into a product, in the process sending electrons into the carbon substrate and protons into the water. Those electrons and protons are picked up by the other catalyst, where they drive the oxygen-to-water conversion. So, instead of a single reaction, two separate but coordinated half-reactions together achieve the net conversion of reactant to product.

    As a result, the overall reaction doesn’t actually involve any net electron production or consumption. It is a standard “thermal” reaction resulting from the energy in the molecules and maybe some added heat. The conventional approach to designing a catalyst for such a reaction would focus on increasing the rate of that reactant-to-product conversion. And the best catalyst for that kind of reaction could turn out to be, say, gold or palladium or some other expensive precious metal.

    However, if that reaction actually involves two half-reactions, as Surendranath proposed, there is a flow of electrical charge (the electrons and protons) between them. So Surendranath and others in the field could instead use techniques of electrochemistry to design not a single catalyst for the overall reaction but rather two separate catalysts — one to speed up one half-reaction and one to speed up the other half-reaction. “That means we don’t have to design one catalyst to do all the heavy lifting of speeding up the entire reaction,” says Surendranath. “We might be able to pair up two low-cost, earth-abundant catalysts, each of which does half of the reaction well, and together they carry out the overall transformation quickly and efficiently.”

    But there’s one more consideration: Electrons can flow through the entire catalyst composite, which encompasses the catalyst particle(s) and the carbon substrate. For the chemical conversion to happen as quickly as possible, the rate at which electrons are put into the catalyst composite must exactly match the rate at which they are taken out. Focusing on just the electrons, if the reactant-to-product conversion on the first catalyst sends the same number of electrons per second into the “bath of electrons” in the catalyst composite as the oxygen-to-water conversion on the second catalyst takes out, the two half-reactions will be balanced, and the electron flow — and the rate of the combined reaction — will be fast. The trick is to find good catalysts for each of the half-reactions that are perfectly matched in terms of electrons in and electrons out.

    “A good catalyst or pair of catalysts can maintain an electrical potential — essentially a voltage — at which both half-reactions are fast and are balanced,” says Jaeyune Ryu PhD ’21, a former member of the Surendranath lab and lead author of the study; Ryu is now a postdoc at Harvard University. “The rates of the reactions are equal, and the voltage in the catalyst composite won’t change during the overall thermal reaction.”
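    The balance Ryu describes is, in essence, the mixed-potential picture long used in corrosion science. The toy model below sketches it numerically; the Tafel-style rate expressions and every kinetic parameter are hypothetical, chosen only to illustrate how a catalyst composite settles at the voltage where electrons are produced and consumed at equal rates:

    ```python
    import math

    # Illustrative Tafel-style kinetics; all parameters are hypothetical.

    def anodic_current(v, e_eq=-0.2, i0=1e-3, slope=0.06):
        """Electron-producing half-reaction (reactant -> product), A/cm^2."""
        return i0 * math.exp((v - e_eq) / slope)

    def cathodic_current(v, e_eq=0.8, i0=1e-4, slope=0.06):
        """Electron-consuming half-reaction (oxygen -> water), A/cm^2."""
        return i0 * math.exp(-(v - e_eq) / slope)

    def mixed_potential(lo=-0.2, hi=0.8, tol=1e-9):
        """Bisect for the voltage where electrons in = electrons out."""
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if anodic_current(mid) > cathodic_current(mid):
                hi = mid
            else:
                lo = mid
        return 0.5 * (lo + hi)

    v_star = mixed_potential()
    print(f"mixed potential ≈ {v_star:.3f} V; "
          f"balanced rate ≈ {anodic_current(v_star):.2e} A/cm^2")
    ```

    At the computed voltage the two rate curves cross, so the composite’s potential sits still during net catalysis, just as Ryu describes; a better catalyst pair shifts that crossing to a higher balanced current.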

    Drawing on electrochemistry

    Based on their new understanding, Surendranath, Ryu, and their colleagues turned to electrochemistry techniques to identify a good catalyst for each half-reaction that would also pair up to work well together. Their analytical framework for guiding catalyst development for systems that combine two half-reactions is based on a theory that has been used to understand corrosion for almost 100 years, but has rarely been applied to understand or design catalysts for reactions involving small molecules important for the energy transition.

    Key to their work is a potentiostat, a type of voltmeter that can either passively measure the voltage of a system or actively change the voltage to cause a reaction to occur. In their experiments, Surendranath and his team use the potentiostat to measure the voltage of the catalyst in real time, monitoring how it changes millisecond to millisecond. They then correlate those voltage measurements with simultaneous but separate measurements of the overall rate of catalysis to understand the reaction pathway.

    For their study of the conversion of small, energy-related molecules, they first tested a series of catalysts to find good ones for each half-reaction — one to convert the reactant to product, producing electrons and protons, and another to convert the oxygen to water, consuming electrons and protons. In each case, a promising candidate would yield a rapid reaction — that is, a fast flow of electrons and protons out or in.

    To help identify an effective catalyst for performing the first half-reaction, the researchers used their potentiostat to input carefully controlled voltages and measured the resulting current that flowed through the catalyst. A good catalyst will generate lots of current for little applied voltage; a poor catalyst will require high applied voltage to get the same amount of current. The team then followed the same procedure to identify a good catalyst for the second half-reaction.

    To expedite the overall reaction, the researchers needed to find two catalysts that matched well — where the amount of current at a given applied voltage was high for each of them, ensuring that as one produced a rapid flow of electrons and protons, the other one consumed them at the same rate.

    To test promising pairs, the researchers used the potentiostat to measure the voltage of the catalyst composite during net catalysis — not changing the voltage as before, but now just measuring it from tiny samples. In each test, the voltage will naturally settle at a certain level, and the goal is for that to happen when the rate of both reactions is high.

    Validating their hypothesis and looking ahead

    By testing the two half-reactions, the researchers could measure how the reaction rate for each one varied with changes in the applied voltage. From those measurements, they could predict the voltage at which the full reaction would proceed fastest. Measurements of the full reaction matched their predictions, supporting their hypothesis.

    The team’s novel approach of using electrochemistry techniques to examine reactions thought to be strictly thermal in nature provides new insights into the detailed steps by which those reactions occur and therefore into how to design catalysts to speed them up. “We can now use a divide-and-conquer strategy,” says Ryu. “We know that the net thermal reaction in our study happens through two ‘hidden’ but coupled half-reactions, so we can aim to optimize one half-reaction at a time” — possibly using low-cost catalyst materials for one or both.

    Adds Surendranath, “One of the things that we’re excited about in this study is that the result is not final in and of itself. It has really seeded a brand-new thrust area in our research program, including new ways to design catalysts for the production and transformation of renewable fuels and chemicals.”

    This research was supported primarily by the Air Force Office of Scientific Research. Jaeyune Ryu PhD ’21 was supported by a Samsung Scholarship. Additional support was provided by a National Science Foundation Graduate Research Fellowship.

    This article appears in the Autumn 2021 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • in

    New program bolsters innovation in next-generation artificial intelligence hardware

    The MIT AI Hardware Program is a new academia-industry collaboration aimed at defining and developing translational technologies in hardware and software for the AI and quantum age. A collaboration between the MIT School of Engineering and the MIT Schwarzman College of Computing, involving the Microsystems Technology Laboratories and programs and units in the college, the cross-disciplinary effort aims to innovate technologies that will deliver enhanced energy-efficiency systems for cloud and edge computing.

    “A sharp focus on AI hardware manufacturing, research, and design is critical to meet the demands of the world’s evolving devices, architectures, and systems,” says Anantha Chandrakasan, dean of the MIT School of Engineering and Vannevar Bush Professor of Electrical Engineering and Computer Science. “Knowledge-sharing between industry and academia is imperative to the future of high-performance computing.”

    Based on use-inspired research involving materials, devices, circuits, algorithms, and software, the MIT AI Hardware Program convenes researchers from MIT and industry to facilitate the transition of fundamental knowledge to real-world technological solutions. The program spans materials and devices, as well as architecture and algorithms enabling energy-efficient and sustainable high-performance computing.

    “As AI systems become more sophisticated, new solutions are sorely needed to enable more advanced applications and deliver greater performance,” says Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing and Henry Ellis Warren Professor of Electrical Engineering and Computer Science. “Our aim is to devise real-world technological solutions and lead the development of technologies for AI in hardware and software.”

    The inaugural members of the program are companies from a wide range of industries including chip-making, semiconductor manufacturing equipment, AI and computing services, and information systems R&D organizations. The companies represent a diverse ecosystem, both nationally and internationally, and will work with MIT faculty and students to help shape a vibrant future for our planet through cutting-edge AI hardware research.

    The five inaugural members of the MIT AI Hardware Program are:  

    Amazon, a global technology company whose hardware inventions include the Kindle, Amazon Echo, Fire TV, and Astro; 
    Analog Devices, a global leader in the design and manufacturing of analog, mixed signal, and DSP integrated circuits; 
    ASML, an innovation leader in the semiconductor industry, providing chipmakers with hardware, software, and services to mass produce patterns on silicon through lithography; 
    NTT Research, a subsidiary of NTT that conducts fundamental research to upgrade reality in game-changing ways that improve lives and brighten our global future; and 
    TSMC, the world’s leading dedicated semiconductor foundry.

    The MIT AI Hardware Program will create a roadmap of transformative AI hardware technologies. Leveraging MIT.nano, the most advanced university nanofabrication facility anywhere, the program will foster a unique environment for AI hardware research.  

    “We are all in awe at the seemingly superhuman capabilities of today’s AI systems. But this comes at a rapidly increasing and unsustainable energy cost,” says Jesús del Alamo, the Donner Professor in MIT’s Department of Electrical Engineering and Computer Science. “Continued progress in AI will require new and vastly more energy-efficient systems. This, in turn, will demand innovations across the entire abstraction stack, from materials and devices to systems and software. The program is in a unique position to contribute to this quest.”

    The program will prioritize the following topics:

    analog neural networks;
    new roadmap CMOS designs;
    heterogeneous integration for AI systems;
    monolithic-3D AI systems;
    analog nonvolatile memory devices;
    software-hardware co-design;
    intelligence at the edge;
    intelligent sensors;
    energy-efficient AI;
    intelligent internet of things (IIoT);
    neuromorphic computing;
    AI edge security;
    quantum AI;
    wireless technologies;
    hybrid-cloud computing; and
    high-performance computation.

    “We live in an era where paradigm-shifting discoveries in hardware, systems, communications, and computing have become mandatory to find sustainable solutions — solutions that we are proud to give to the world and generations to come,” says Aude Oliva, senior research scientist in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and director of strategic industry engagement in the MIT Schwarzman College of Computing.

    The new program is co-led by Jesús del Alamo and Aude Oliva, and Anantha Chandrakasan serves as chair.

  • in

    Q&A: Climate Grand Challenges finalists on new pathways to decarbonizing industry

    Note: This is the third article in a four-part interview series highlighting the work of the 27 MIT Climate Grand Challenges finalist teams, which received a total of $2.7 million in startup funding to advance their projects. In April, the Institute will name a subset of the finalists as multiyear flagship projects.

    The industrial sector is the backbone of today’s global economy, yet its activities are among the most energy-intensive and the toughest to decarbonize. Efforts to reach net-zero targets and avert runaway climate change will not succeed without new solutions for replacing sources of carbon emissions with low-carbon alternatives and developing scalable nonemitting applications of hydrocarbons.

    In conversations prepared for MIT News, faculty from three of the teams with projects in the competition’s “Decarbonizing complex industries and processes” category discuss strategies for achieving impact in hard-to-abate sectors, from long-distance transportation and building construction to textile manufacturing and chemical refining. The other Climate Grand Challenges research themes include using data and science to forecast climate-related risk, building equity and fairness into climate solutions, and removing, managing, and storing greenhouse gases. The following responses have been edited for length and clarity.

    Moving toward an all-carbon material approach to building

    Faced with the prospect of building stock doubling globally by 2050, there is a great need for sustainable alternatives to conventional mineral- and metal-based construction materials. Mark Goulthorpe, associate professor in the Department of Architecture, explains the methods behind Carbon >Building, an initiative to develop energy-efficient building materials by reorienting hydrocarbons from current use as fuels to environmentally benign products, creating an entirely new genre of lightweight, all-carbon buildings that could actually drive decarbonization.

    Q: What are all-carbon buildings and how can they help mitigate climate change?

    A: Instead of burning hydrocarbons as fuel, which releases carbon dioxide and other greenhouse gases that contribute to atmospheric pollution, we seek to pioneer a process that uses carbon materially to build at macro scale. New forms of carbon — carbon nanotube, carbon foam, etc. — offer salient properties for building that might effectively displace the current material paradigm. Only hydrocarbons offer sufficient scale to beat out the billion-ton mineral and metal markets, and their perilous impact. Carbon nanotube from methane pyrolysis is of special interest, as it offers hydrogen as a byproduct.

    Q: How will society benefit from the widespread use of all-carbon buildings?

    A: We anticipate reducing costs and timelines in carbon composite buildings, while increasing quality, longevity, and performance, and diminishing environmental impact. Affordability of buildings is a growing problem in all global markets as the cost of labor and logistics in multimaterial assemblies creates a burden that is very detrimental to economic growth and results in overcrowding and urban blight.

    Alleviating these challenges would have huge societal benefits, especially for those in lower income brackets who cannot afford housing, but the biggest benefit would be in drastically reducing the environmental footprint of typical buildings, which account for nearly 40 percent of global energy consumption.

    An all-carbon building sector will not only reduce hydrocarbon extraction, but can produce higher value materials for building. We are looking to rethink the building industry by greatly streamlining global production and learning from the low-labor methods pioneered by composite manufacturing such as wind turbine blades, which are quick and cheap to produce. This technology can improve the sustainability and affordability of buildings — and holds the promise of faster, cheaper, greener, and more resilient modes of dwelling.

    Emissions reduction through innovation in the textile industry

    Collectively, the textile industry is responsible for over 4 billion metric tons of carbon dioxide equivalent per year, or 5 to 10 percent of global greenhouse gas emissions — more than aviation and maritime shipping combined. And the problem is only getting worse with the industry’s rapid growth. Under the current trajectory, consumption is projected to increase 30 percent by 2030, reaching 102 million tons. A diverse group of faculty and researchers led by Gregory Rutledge, the Lammot du Pont Professor in the Department of Chemical Engineering, and Yuly Fuentes-Medel, project manager for fiber technologies and research advisor to the MIT Innovation Initiative, is developing groundbreaking innovations to reshape how textiles are selected, sourced, designed, manufactured, and used, and to create the structural changes required for sustained reductions in emissions by this industry.

    Q: Why has the textile industry been difficult to decarbonize?

    A: The industry currently operates under a linear model that relies heavily on virgin feedstock, at roughly 97 percent, yet recycles or downcycles less than 15 percent. Furthermore, recent trends in “fast fashion” have led to massive underutilization of apparel, such that products are discarded on average after only seven to 10 uses. In an industry with high volume and low margins, replacement technologies must achieve emissions reduction at scale while maintaining performance and economic efficiency.

    There are also technical barriers to adopting circular business models, from the challenge of dealing with products comprising fiber blends and chemical additives to the low maturity of recycling technologies. The environmental impacts of textiles and apparel have been estimated using life cycle analysis, and industry-standard indexes are under development to assess sustainability throughout the life cycle of a product, but information and tools are needed to model how new solutions will alter those impacts and include the consumer as an active player to keep our planet safe. This project seeks to deliver both the new solutions and the tools to evaluate their potential for impact.

    Q: Describe the five components of your program. What is the anticipated timeline for implementing these solutions?

    A: Our plan comprises five programmatic sections, which include (1) enabling a paradigm shift to sustainable materials using nontraditional, carbon-negative polymers derived from biomass and additives that facilitate recycling; (2) rethinking manufacturing with processes to structure fibers and fabrics for performance, waste reduction, and increased material efficiency; (3) designing textiles for value by developing products that are customized, adaptable, and multifunctional, and that interact with their environment to reduce energy consumption; (4) exploring consumer behavior change through human interventions that reduce emissions by encouraging the adoption of new technologies, increased utilization of products, and circularity; and (5) establishing carbon transparency with systems-level analyses that measure the impact of these strategies and guide decision making.

    We have proposed a five-year timeline with annual targets for each project. Conservatively, we estimate our program could reduce greenhouse gas emissions in the industry by 25 percent by 2030, with further significant reductions to follow.

    Tough-to-decarbonize transportation

    Airplanes, transoceanic ships, and freight trucks are critical to transporting people and delivering goods, and they are the cornerstone of global commerce, manufacturing, and tourism. But these vehicles also emit 3.7 billion tons of carbon dioxide annually and, left unchecked, they could take up a quarter of the remaining carbon budget by 2050. William Green, the Hoyt C. Hottel Professor in the Department of Chemical Engineering, co-leads a multidisciplinary team with Steven Barrett, professor of aeronautics and astronautics and director of the MIT Laboratory for Aviation and the Environment, that is working to identify and advance economically viable technologies and policies for decarbonizing heavy-duty trucking, shipping, and aviation. The Tough to Decarbonize Transportation research program aims to design and optimize fuel chemistry and production, vehicles, operations, and policies to chart the course to net-zero emissions by midcentury.

    Q: What are the highest priority focus areas of your research program?

    A: Hydrocarbon fuels made from biomass are the least expensive option, but it seems impractical, and probably damaging to the environment, to harvest the huge amount of biomass that would be needed to meet the massive and growing energy demands from these sectors using today’s biomass-to-fuel technology. We are exploring strategies to increase the amount of useful fuel made per ton of biomass harvested, other methods to make low-climate-impact hydrocarbon fuels, such as from carbon dioxide, and ways to make fuels that do not contain carbon at all, such as with hydrogen, ammonia, and other hydrogen carriers.

    These latter zero-carbon options free us from the need for biomass or to capture gigatons of carbon dioxide, so they could be a very good long-term solution, but they would require changing the vehicles significantly, and the construction of new refueling infrastructure, with high capital costs.

    Q: What are the scientific, technological, and regulatory barriers to scaling and implementing potential solutions?

    A: Reimagining an aviation, trucking, and shipping sector that connects the world and increases equity without creating more environmental damage is challenging because these vehicles must operate disconnected from the electrical grid and have energy requirements that cannot be met by batteries alone. Some of the concepts do not even exist in prototype yet, and none of the appealing options have been implemented at anywhere near the scale required.

    In most cases, we do not know the best way to make the fuel, and for new fuels the vehicles and refueling systems all need to be developed. Also, new fuels, or large-scale use of biomass, will introduce new environmental problems that need to be carefully considered, to ensure that decarbonization solutions do not introduce big new problems.

    Perhaps most difficult are the policy, economic, and equity issues. A new long-haul transportation system will be expensive, and everyone will be affected by the increased cost of shipping freight. To have the desired climate impact, the transport system must change in almost every country. During the transition period, we will need the existing vehicle and fuel systems to keep running smoothly, even as a new low-greenhouse-gas system is introduced. We will also examine what policies could make that work and how we can get countries around the world to agree to implement them.

  • in

    A better way to separate gases

    Industrial processes for chemical separations, including natural gas purification and the production of oxygen and nitrogen for medical or industrial uses, are collectively responsible for about 15 percent of the world’s energy use. They also contribute a corresponding amount to the world’s greenhouse gas emissions. Now, researchers at MIT and Stanford University have developed a new kind of membrane for carrying out these separation processes with roughly 1/10 the energy use and emissions.

    Using membranes for separation of chemicals is known to be much more efficient than processes such as distillation or absorption, but there has always been a tradeoff between permeability — how fast gases can penetrate through the material — and selectivity — the ability to let the desired molecules pass through while blocking all others. The new family of membrane materials, based on “hydrocarbon ladder” polymers, overcomes that tradeoff, providing both high permeability and extremely good selectivity, the researchers say.

    The findings are reported today in the journal Science, in a paper by Yan Xia, an associate professor of chemistry at Stanford; Zachary Smith, an assistant professor of chemical engineering at MIT; Ingo Pinnau, a professor at King Abdullah University of Science and Technology, and five others.

    Gas separation is an important and widespread industrial process whose uses include removing impurities and undesired compounds from natural gas or biogas, separating oxygen and nitrogen from air for medical and industrial purposes, separating carbon dioxide from other gases for carbon capture, and producing hydrogen for use as a carbon-free transportation fuel. The new ladder polymer membranes show promise for drastically improving the performance of such separation processes. For example, in separating carbon dioxide from methane, these new membranes have five times the selectivity and 100 times the permeability of existing cellulosic membranes for that purpose. Similarly, they are 100 times more permeable and three times as selective for separating hydrogen gas from methane.
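    The practical weight of that permeability figure comes from the standard solution-diffusion relation, in which flux through a dense membrane scales as permeability times the pressure difference divided by film thickness. A minimal sketch with entirely hypothetical numbers (the function and values below are illustrative, not measured data):

    ```python
    # Solution-diffusion estimate: flux J = P * dp / L per unit membrane area.
    # All numbers are hypothetical; only their ratios matter here.

    def flux(permeability, dp, thickness):
        """Gas flux through a dense membrane (arbitrary consistent units)."""
        return permeability * dp / thickness

    # Same feed pressure and film thickness, two membranes:
    baseline_flux = flux(permeability=10.0, dp=10.0, thickness=1.0)
    ladder_flux = flux(permeability=1000.0, dp=10.0, thickness=1.0)  # 100x more permeable

    # For a fixed throughput, required membrane area scales inversely with flux.
    relative_area = baseline_flux / ladder_flux
    print(f"membrane area needed, relative to baseline: {relative_area:.2f}")
    ```

    Under this reading, a 100-fold permeability gain translates into roughly one-hundredth the membrane area for the same throughput, while the selectivity gain shows up as higher product purity.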

    The new type of polymers, developed over the last several years by the Xia lab, are referred to as ladder polymers because they are formed from double strands connected by rung-like bonds, and these linkages provide a high degree of rigidity and stability to the polymer material. These ladder polymers are synthesized via an efficient and selective chemistry the Xia lab developed called CANAL, an acronym for catalytic arene-norbornene annulation, which stitches readily available chemicals into ladder structures with hundreds or even thousands of rungs. The polymers are synthesized in a solution, where they form rigid and kinked ribbon-like strands that can easily be made into a thin sheet with sub-nanometer-scale pores by using industrially available polymer casting processes. The sizes of the resulting pores can be tuned through the choice of the specific hydrocarbon starting compounds. “This chemistry and choice of chemical building blocks allowed us to make very rigid ladder polymers with different configurations,” Xia says.

    To apply the CANAL polymers as selective membranes, the collaboration made use of Xia’s expertise in polymers and Smith’s specialization in membrane research. Holden Lai, a former Stanford doctoral student, carried out much of the development and exploration of how their structures impact gas permeation properties. “It took us eight years from developing the new chemistry to finding the right polymer structures that bestow the high separation performance,” Xia says.

    The Xia lab spent the past several years varying the structures of CANAL polymers to understand how those structures affect separation performance. Surprisingly, they found that adding additional kinks to their original CANAL polymers significantly improved the mechanical robustness of their membranes and boosted their selectivity for molecules of similar sizes, such as oxygen and nitrogen gases, without losing permeability of the more permeable gas. The selectivity actually improves as the material ages. The combination of high selectivity and high permeability makes these materials outperform all other polymer materials in many gas separations, the researchers say.

    Today, 15 percent of global energy use goes into chemical separations, and these separation processes are “often based on century-old technologies,” Smith says. “They work well, but they have an enormous carbon footprint and consume massive amounts of energy. The key challenge today is trying to replace these nonsustainable processes.” Most of these processes require high temperatures for boiling and reboiling solutions, and these often are the hardest processes to electrify, he adds.

    For the separation of oxygen and nitrogen from air, the two molecules differ in size by only about 0.18 angstroms (an angstrom is one ten-billionth of a meter), he says. To make a filter capable of separating them efficiently “is incredibly difficult to do without decreasing throughput.” But the new ladder polymers, when manufactured into membranes, produce tiny pores that achieve high selectivity, he says. In some cases, 10 oxygen molecules permeate for every nitrogen molecule, despite the razor-thin sieve needed to achieve this type of size selectivity. These new membrane materials have “the highest combination of permeability and selectivity of all known polymeric materials for many applications,” Smith says.
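    Taking the “10 oxygen molecules for every nitrogen” figure as an O2/N2 selectivity of about 10, a back-of-the-envelope estimate (a simple binary solution-diffusion model with negligible permeate-side pressure, an idealization rather than the paper’s analysis) shows how much a single pass enriches ordinary air:

    ```python
    def permeate_fraction(x_feed, selectivity):
        """Mole fraction of the faster gas in the permeate for a binary feed,
        in the limit of negligible permeate-side pressure."""
        return selectivity * x_feed / (selectivity * x_feed + (1.0 - x_feed))

    # Air is roughly 21% O2 / 79% N2; assume an O2/N2 selectivity of 10.
    x_o2 = permeate_fraction(0.21, 10.0)
    print(f"permeate O2 fraction: {x_o2:.2f}")  # enriched from 21% to about 73%
    ```

    A single membrane stage under these assumptions roughly triples the oxygen concentration, which is why high selectivity at high permeability matters for air separation.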

    “Because CANAL polymers are strong and ductile, and because they are soluble in certain solvents, they could be scaled for industrial deployment within a few years,” he adds. An MIT spinoff company called Osmoses, led by authors of this study, recently won the MIT $100K entrepreneurship competition and has been partly funded by The Engine to commercialize the technology.

    There are a variety of potential applications for these materials in the chemical processing industry, Smith says, including the separation of carbon dioxide from other gas mixtures as a form of emissions reduction. Another possibility is the purification of biogas fuel made from agricultural waste products in order to provide carbon-free transportation fuel. Hydrogen separation, for producing a fuel or a chemical feedstock, could also be carried out efficiently, helping with the transition to a hydrogen-based economy.

    The close-knit team of researchers is continuing to refine the process to facilitate the development from laboratory to industrial scale, and to better understand the details on how the macromolecular structures and packing result in the ultrahigh selectivity. Smith says he expects this platform technology to play a role in multiple decarbonization pathways, starting with hydrogen separation and carbon capture, because there is such a pressing need for these technologies in order to transition to a carbon-free economy.

    “These are impressive new structures that have outstanding gas separation performance,” says Ryan Lively, an associate professor of chemical and biomolecular engineering at Georgia Tech, who was not involved in this work. “Importantly, this performance is improved during membrane aging and when the membranes are challenged with concentrated gas mixtures. … If they can scale these materials and fabricate membrane modules, there is significant potential practical impact.”

    The research team also included Jun Myun Ahn and Ashley Robinson at Stanford, Francesco Benedetti at MIT, now the chief executive officer at Osmoses, and Yingge Wang at King Abdullah University of Science and Technology in Saudi Arabia. The work was supported by the Stanford Natural Gas Initiative, the Sloan Research Fellowship, the U.S. Department of Energy Office of Basic Energy Sciences, and the National Science Foundation.

  • in

    Finding her way to fusion

    “I catch myself startling people in public.”

    Zoe Fisher’s animated hands carry part of the conversation as she describes how her naturally loud and expressive laughter turned heads in the streets of Yerevan. There during MIT’s Independent Activities Period (IAP), she was helping teach nuclear science at the American University of Armenia, before returning to MIT to pursue fusion research at the Plasma Science and Fusion Center (PSFC).

    Startling people may simply be in Fisher’s DNA. She admits that when she first arrived at MIT, knowing nothing about nuclear science and engineering (NSE), she chose to join that department’s Freshman Pre-Orientation Program (FPOP) “for the shock value.” It was a choice unexpected by family, friends, and mostly herself. Now in her senior year, a 2021 recipient of NSE’s Irving Kaplan Award for academic achievements by a junior and entering a fifth-year master of science program in nuclear fusion, Fisher credits that original spontaneous impulse for introducing her to a subject she found so compelling that, after exploring multiple possibilities, she had to return to it.

    Fisher’s venture to Armenia, under the guidance of NSE associate professor Areg Danagoulian, was not her first time teaching overseas with MISTI’s Global Teaching Labs, though it was the first time she taught nuclear science, not to mention thermodynamics and materials science. During IAP 2020 she was a student teacher at a German high school, teaching life sciences, mathematics, and even English to grades five through 12. And after her first year she explored the transportation industry with a mechanical engineering internship in Tuscany, Italy.

    By the time she was ready to declare her NSE major she had sampled the alternatives both overseas and at home, taking advantage of MIT’s Undergraduate Research Opportunities Program (UROP). Drawn to fusion’s potential as an endless source of carbon-free energy on earth, she decided to try research at the PSFC, to see if the study was a good fit. 

    Much fusion research at MIT has favored heating hydrogen fuel inside a donut-shaped device called a tokamak, creating plasma that is hot and dense enough for fusion to occur. Because plasma will follow magnetic field lines, these devices are wrapped with magnets to keep the hot fuel from damaging the chamber walls.

    Fisher was assigned to SPARC, the PSFC’s new tokamak collaboration with MIT startup Commonwealth Fusion Systems (CFS), which uses a game-changing high-temperature superconducting (HTS) tape to create fusion magnets that minimize tokamak size and maximize performance. Compiling a database reference book for SPARC materials, she found purpose even in the most repetitive tasks. “Which is how I knew I wanted to stay in fusion,” she laughs.

    Fisher’s latest UROP assignment takes her — literally — deeper into SPARC research. She works in a basement laboratory in building NW13 nicknamed “The Vault,” on a proton accelerator whose name conjures an underworld: DANTE. Supervised by PSFC Director Dennis Whyte and postdoc David Fischer, she is exploring the effects of radiation damage on the thin HTS tape that is key to SPARC’s design, and ultimately to the success of ARC, a prototype working fusion power plant.

    Because repeated bombardment with neutrons produced during the fusion process can diminish the superconducting properties of the HTS, it is crucial to test the tape repeatedly. Fisher assists in assembling and testing the experimental setups for irradiating the HTS samples. She recalls that her first project was installing a “shutter” that let researchers control exactly how much radiation reached the tape without having to turn off the entire experiment.

    “You could just push the button — block the radiation — then unblock it. It sounds super simple, but it took many trials. Because first I needed the right size solenoid, and then I couldn’t find a piece of metal that was small enough, and then we needed cryogenic glue…. To this day the actual final piece is made partially of paper towels.”

    She shrugs and laughs. “It worked, and it was the cheapest option.”

    Fisher is always ready to find the fun in fusion. Referring to DANTE as “a really cool dude,” she admits, “He’s perhaps a bit fickle. I may or may not have broken him once.” During a recent IAP seminar, she joined other PSFC UROP students to discuss her research, and expanded on how a mishap can become a gateway to understanding.

    “The grad student I work with and I got to repair almost the entire internal circuit when we blew the fuse — which originally was a really bad thing. But it ended up being great because we figured out exactly how it works.”

    Fisher’s upbeat spirit makes her ideal not only for the challenges of fusion research, but for serving the MIT community. As a student representative for NSE’s Diversity, Equity and Inclusion Committee, she meets monthly with the goal of growing and supporting diversity within the department.

    “This opportunity is impactful because I get my voice, and the voices of my peers, taken seriously,” she says. “Currently, we are spending most of our efforts trying to identify and eliminate hurdles based on race, ethnicity, gender, and income that prevent people from pursuing — and applying to — NSE.”

    To break from the lab and committees, she explores the Charles River as part of MIT’s varsity sailing team, refusing to miss a sunset. She also volunteers as an FPOP mentor, seeking to provide incoming first-years with the kind of experience that will make them want to return to the topic, as she did.

    She looks forward to continuing her studies of the HTS tapes she has been irradiating. She proposes sending a current pulse above the critical current through the tape, which could anneal radiation-induced defects and make repairs on future fusion power plants much easier.

    Fisher credits her current path to her UROP mentors and their infectious enthusiasm for the carbon-free potential of fusion energy.

    “UROPing around the PSFC showed me what I wanted to do with my life,” she says. “Who doesn’t want to save the world?”