More stories


    Energy storage from a chemistry perspective

    The transition toward a more sustainable, environmentally sound electrical grid has driven an upsurge in renewables like solar and wind. But something as simple as cloud cover can cause grid instability, and wind power is inherently unpredictable. This intermittent nature of renewables has invigorated the competitive landscape for energy storage companies looking to enhance power system flexibility while enabling the integration of renewables.

    “Impact is what drives PolyJoule more than anything else,” says CEO Eli Paster. “We see impact from a renewable integration standpoint, from a curtailment standpoint, and also from the standpoint of transitioning from a centralized to a decentralized model of energy-power delivery.”

    PolyJoule is a Billerica, Massachusetts-based startup that’s looking to reinvent energy storage from a chemistry perspective. Co-founders Ian Hunter of MIT’s Department of Mechanical Engineering and Tim Swager of the Department of Chemistry are longstanding MIT professors considered luminaries in their respective fields. Meanwhile, the core team is a small but highly skilled collection of chemists, manufacturing specialists, supply chain optimizers, and entrepreneurs, many of whom have called MIT home at one point or another.

    “The ideas that we work on in the lab, you’ll see turned into products three to four years from now, and they will still be innovative and well ahead of the curve when they get to market,” Paster says. “But the concepts come from the foresight of thinking five to 10 years in advance. That’s what we have in our back pocket, thanks to great minds like Ian and Tim.”

    PolyJoule takes a systems-level approach, married to high-throughput analytical electrochemistry, that has allowed the company to pinpoint a chemical cell design based on 10,000 trials. The result is a battery that is low-cost, safe, and long-lived. It can respond to base loads and peak loads in microseconds, allowing the same battery to participate in multiple power markets and deployment use cases.

    In the energy storage sphere, interesting technologies abound, but workable solutions are few and far between. Paster says, however, that PolyJoule has managed to bridge the gap between the lab and the real world by taking industry concerns into account from the beginning. “We’ve taken a slightly contrarian view to all of the other energy storage companies that have come before us that have said, ‘If we build it, they will come.’ Instead, we’ve gone directly to the customer and asked, ‘If you could have a better battery storage platform, what would it look like?’”

    With commercial input feeding into the thought processes behind their technological and commercial deployment, PolyJoule says they’ve designed a battery that is less expensive to make, less expensive to operate, safer, and easier to deploy.

    Traditionally, lithium-ion batteries have been the go-to energy storage solution. But lithium has its drawbacks, including cost, safety issues, and detrimental effects on the environment. PolyJoule, however, isn’t interested in lithium — or metals of any kind. “We start with the periodic table of organic elements,” says Paster, “and from there, we derive what works at economies of scale, what is easy to converge and convert chemically.”

    Having an inherently safer chemistry allows PolyJoule to save on system integration costs, among other things. PolyJoule batteries don’t contain flammable solvents, which means no added expenses related to fire mitigation. Safer chemistry also means ease of storage, and PolyJoule batteries are currently undergoing global safety certification (UL approval) to be allowed indoors and on airplanes. Finally, with high power built into the chemistry, PolyJoule’s cells can be charged and discharged to extremes, without the need for heating or cooling systems.

    “From raw material to product delivery, we examine each step in the value chain with an eye towards reducing costs,” says Paster. It all starts with designing the chemistry around earth-abundant elements, which allows the small startup to compete with larger suppliers, even at smaller scales. Consider the fact that PolyJoule’s differentiating material cost is less than $1 per kilogram, whereas lithium carbonate sells for $20 per kilogram.

    On the manufacturing side, Paster explains that PolyJoule cuts costs by making their cells in old paper mills and warehouses, employing off-the-shelf equipment previously used for tissue paper or newspaper printing. “We use equipment that has been around for decades because we don’t want to create a cutting-edge technology that requires cutting-edge manufacturing,” he says. “We want to create a cutting-edge technology that can be deployed in industrialized nations and in other nations that can benefit the most from energy storage.”

    PolyJoule’s first customer is an industrial distributed energy consumer with baseline energy consumption that increases by a factor of 10 when the heavy machinery kicks on twice a day. In the early morning and late afternoon, it consumes about 50 kilowatts for 20 minutes to an hour, compared to a baseline rate of 5 kilowatts. It’s an application model that is translatable to a variety of industries. Think wastewater treatment, food processing, and server farms — anything with a fluctuation in power consumption over a 24-hour period.
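    The arithmetic behind that load profile hints at the scale of storage involved. Below is a rough peak-shaving estimate; the figures come from the profile described above, and the sizing logic is an illustration, not the customer's actual design.

```python
# Rough peak-shaving arithmetic for the load profile described above.
# Illustrative only; the customer's actual system sizing is not public.
baseline_kw = 5.0         # baseline consumption
peak_kw = 50.0            # consumption with heavy machinery running
peak_hours = 1.0          # worst case: each peak lasts up to an hour
events_per_day = 2        # early morning and late afternoon

# Energy above baseline that storage must supply per peak event if the
# grid connection is sized for the baseline load alone.
energy_per_event_kwh = (peak_kw - baseline_kw) * peak_hours
daily_shaved_kwh = energy_per_event_kwh * events_per_day
print(energy_per_event_kwh, daily_shaved_kwh)
```

    By this estimate, fully shaving both daily peaks takes on the order of 90 kilowatt-hours, which puts a 10 kilowatt-hour first system in context as a demonstration-scale deployment.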

    By the end of the year, PolyJoule will have delivered its first 10 kilowatt-hour system, exiting stealth mode and adding commercial viability to demonstrated technological superiority. “What we’re seeing now is massive amounts of energy storage being added to renewables and grid-edge applications,” says Paster. “We anticipated that by 12-18 months, and now we’re ramping up to catch up with some of the bigger players.”


    Global warming begets more warming, new paleoclimate study finds

    It is increasingly clear that the prolonged drought conditions, record-breaking heat, sustained wildfires, and frequent, more extreme storms experienced in recent years are a direct result of rising global temperatures brought on by humans’ addition of carbon dioxide to the atmosphere. And a new MIT study on extreme climate events in Earth’s ancient history suggests that today’s planet may become more volatile as it continues to warm.

    The study, appearing today in Science Advances, examines the paleoclimate record of the last 66 million years, during the Cenozoic era, which began shortly after the extinction of the dinosaurs. The scientists found that during this period, fluctuations in the Earth’s climate showed a surprising “warming bias.” In other words, there were far more warming events — periods of prolonged global warming, lasting thousands to tens of thousands of years — than cooling events. What’s more, warming events tended to be more extreme, with greater shifts in temperature, than cooling events.

    The researchers say a possible explanation for this warming bias may lie in a “multiplier effect,” whereby a modest degree of warming — for instance from volcanoes releasing carbon dioxide into the atmosphere — naturally speeds up certain biological and chemical processes that enhance these fluctuations, leading, on average, to still more warming.

    Interestingly, the team observed that this warming bias disappeared about 5 million years ago, around the time when ice sheets started forming in the Northern Hemisphere. It’s unclear what effect the ice has had on the Earth’s response to climate shifts. But as today’s Arctic ice recedes, the new study suggests that a multiplier effect may kick back in, and the result may be a further amplification of human-induced global warming.

    “The Northern Hemisphere’s ice sheets are shrinking, and could potentially disappear as a long-term consequence of human actions,” says the study’s lead author Constantin Arnscheidt, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “Our research suggests that this may make the Earth’s climate fundamentally more susceptible to extreme, long-term global warming events such as those seen in the geologic past.”

    Arnscheidt’s study co-author is Daniel Rothman, professor of geophysics at MIT, and co-founder and co-director of MIT’s Lorenz Center.

    A volatile push

    For their analysis, the team consulted large databases of sediments containing deep-sea benthic foraminifera — single-celled organisms that have been around for hundreds of millions of years and whose hard shells are preserved in sediments. The composition of these shells is affected by the ocean temperatures as organisms are growing; the shells are therefore considered a reliable proxy for the Earth’s ancient temperatures.

    For decades, scientists have analyzed the composition of these shells, collected from all over the world and dated to various time periods, to track how the Earth’s temperature has fluctuated over millions of years. 

    “When using these data to study extreme climate events, most studies have focused on individual large spikes in temperature, typically of a few degrees Celsius warming,” Arnscheidt says. “Instead, we tried to look at the overall statistics and consider all the fluctuations involved, rather than picking out the big ones.”

    The team first carried out a statistical analysis of the data and observed that, over the last 66 million years, the distribution of global temperature fluctuations didn’t resemble a standard bell curve, with symmetric tails representing an equal probability of extreme warm and extreme cool fluctuations. Instead, the curve was noticeably lopsided, skewed toward more warm than cool events. The curve also exhibited a noticeably longer tail, representing warm events that were more extreme, or of higher temperature, than the most extreme cold events.

    “This indicates there’s some sort of amplification relative to what you would otherwise have expected,” Arnscheidt says. “Everything’s pointing to something fundamental that’s causing this push, or bias toward warming events.”

    “It’s fair to say that the Earth system becomes more volatile, in a warming sense,” Rothman adds.

    A warming multiplier

    The team wondered whether this warming bias might have been a result of “multiplicative noise” in the climate-carbon cycle. Scientists have long understood that higher temperatures, up to a point, tend to speed up biological and chemical processes. Because the carbon cycle, which is a key driver of long-term climate fluctuations, is itself composed of such processes, increases in temperature may lead to larger fluctuations, biasing the system towards extreme warming events.

    In mathematics, there exists a set of equations that describes such general amplifying, or multiplicative effects. The researchers applied this multiplicative theory to their analysis to see whether the equations could predict the asymmetrical distribution, including the degree of its skew and the length of its tails.

    In the end, they found that the data, and the observed bias toward warming, could be explained by the multiplicative theory. In other words, it’s very likely that, over the last 66 million years, periods of modest warming were on average further enhanced by multiplier effects, such as the response of biological and chemical processes that further warmed the planet.
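    The intuition behind the multiplicative theory can be sketched with a toy random walk whose fluctuation size grows with warmth. The feedback strength, noise level, and relaxation constant below are invented for illustration; this is not the study's actual model.

```python
import random
import statistics

def skewness(xs):
    # Sample skewness: E[(x - mean)^3] / sd^3.
    m = statistics.fmean(xs)
    sd = statistics.pstdev(xs)
    return sum((x - m) ** 3 for x in xs) / (len(xs) * sd ** 3)

def simulate(n, multiplicative, seed=0):
    # Toy temperature-anomaly walk. With additive noise the step size is
    # fixed; with multiplicative noise the step size grows when the
    # planet is warm, mimicking temperature-sensitive carbon-cycle
    # feedbacks.
    rng = random.Random(seed)
    T = 0.0
    out = []
    for _ in range(n):
        amp = 1.0 + (0.5 * max(T, 0.0) if multiplicative else 0.0)
        T += rng.gauss(0.0, 0.1) * amp
        T *= 0.99  # weak relaxation back toward the baseline
        out.append(T)
    return out

additive = simulate(100_000, multiplicative=False)
multi = simulate(100_000, multiplicative=True)
print(skewness(additive), skewness(multi))
```

    The additive walk produces a roughly symmetric distribution of anomalies, while the multiplicative one is skewed toward warm excursions, the same qualitative asymmetry the team found in the foraminifera record.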

    As part of the study, the researchers also looked at the correlation between past warming events and changes in Earth’s orbit. Over hundreds of thousands of years, Earth’s orbit around the sun regularly becomes more or less elliptical. But scientists have wondered why many past warming events appeared to coincide with these changes, and why these events feature outsized warming compared with what the change in Earth’s orbit could have wrought on its own.

    So, Arnscheidt and Rothman incorporated the Earth’s orbital changes into the multiplicative model and their analysis of Earth’s temperature changes, and found that multiplier effects could predictably amplify, on average, the modest temperature rises due to changes in Earth’s orbit.

    “Climate warms and cools in synchrony with orbital changes, but the orbital cycles themselves would predict only modest changes in climate,” Rothman says. “But if we consider a multiplicative model, then modest warming, paired with this multiplier effect, can result in extreme events that tend to occur at the same time as these orbital changes.”

    “Humans are forcing the system in a new way,” Arnscheidt adds. “And this study is showing that, when we increase temperature, we’re likely going to interact with these natural, amplifying effects.”

    This research was supported, in part, by MIT’s School of Science.


    Chemistry Undergraduate Teaching Lab hibernates fume hoods, drastically reducing energy costs

    The Department of Chemistry’s state-of-the-art Undergraduate Teaching Lab (UGTL), which opened on the fifth floor of MIT.nano in fall 2018, is home to 69 fume hoods. The hoods, ranging from four to seven feet wide, protect students and staff from potential exposure to hazardous materials while working in the lab. Fume hoods consume a tremendous amount of energy on the MIT campus; in addition to the energy required to operate them, the air that replaces what they exhaust must be heated or cooled. Any lab with a large number of fume hoods therefore faces high operational energy costs.

    “When the UGTL’s fume hoods are in use, the air-change rates — the number of times fresh air is exchanged in the space in a given time frame — average between 25 and 30 air changes per hour (ACH),” says Nicole Imbergamo, senior sustainability project manager in MIT Campus Construction. “When the lab is unoccupied, that air-change rate averages 11 ACH. For context, in a laboratory with a single fume hood, typically MIT’s EHS [Environment, Health, and Safety] department would require six ACH when occupied and four ACH when unoccupied. Hibernation of the fume hoods allowed us to close the gap between the current unoccupied air-change rate and what is typical on campus in a non-teaching lab environment.”

    Fifty-eight of the 69 fume hoods in the UGTL are consistently unused between 6:30 p.m. and noon the following day, as well as all weekend long, totaling 135 hours per week. Based on these numbers, the team determined it was safe to “hibernate” the fume hoods during the off hours, saving the Institute on fan energy and the cost of heating and cooling the air that gets drawn through each hood.
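    The schedule and airflow figures above can be sanity-checked with quick arithmetic. This is an illustrative back-of-the-envelope estimate, not MIT's actual accounting; the weeknight and weekend hour counts are inferred from the stated schedule.

```python
# Back-of-the-envelope check of the hibernation schedule and savings.
weeknight = 17.5                  # 6:30 p.m. to noon the next day
weekend = 65.5                    # Friday 6:30 p.m. through Monday noon
weekly_hours = 4 * weeknight + weekend
print(weekly_hours)               # close to the 135 hours per week cited

# Fraction of unoccupied-mode airflow eliminated when 11 ACH drops to 7.
airflow_cut = (11 - 7) / 11
print(f"{airflow_cut:.0%}")

# A $21,000/year saving with a sub-three-year payback implies the
# retrofit cost was below roughly $63,000.
print(21_000 * 3)
```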

    John Dolhun PhD ’73 is the director of the UGTL. “The project started when MIT Green Labs — a division of the Environment, Health, and Safety Office now known as the Safe & Sustainable Labs Program — contacted the UGTL in October 2018, followed by an initial meeting in November 2018 with all the key players, including Safe and Sustainable Labs, the EHS Office, the Department of Facilities, and the Department of Chemistry,” says Dolhun. “It was during these initial discussions that the UGTL recognized this was something we had to do. The project was completed in April 2021.”

    Now, through a scheduled time clock in the Building Management System (BMS), the 58 fume hoods are flipped into hibernation mode at the end of each day. “In hibernation mode, the exhaust air valves go to their minimum airflow, which is lower than a fume hood minimum required when in use,” says Imbergamo. “As a safety feature, if the sash of a fume hood is opened while it is in standby mode, the valve and hood are automatically released from hibernation until the next scheduled time.” The BMS allows Dolhun and all with access to instantly view the hibernation status of every hood online, at any time, from any location. As an additional safety measure, the lab is equipped with an emergency kill switch that, when activated, instantly takes all 58 fume hoods out of hibernation, increasing the air changes per hour by about 37 percent, at one touch.

    The MIT operations team worked with the building controls vendor to create graphics that let UGTL users easily see hood sash positions and whether each hood is hibernated or in normal operating mode. This visibility allows the UGTL team to confirm that the hoods are all closed before leaving the lab each day, and to verify the energy reductions. It also creates an opportunity to teach students the importance of closing the sash at the end of their lab work, along with fume hood management best practices that will serve them far beyond their undergraduate chemistry classes.

    Since employing hibernation mode, the unoccupied UGTL air-change rate has dropped from 11 ACH to seven ACH, drastically reducing wasted energy and saving MIT an estimated $21,000 per year. The annual utility savings from reduced supply- and exhaust-fan energy, as well as from heating and cooling the supply air, will give the project a less-than-three-year payback for MIT. The success of the hood hibernation program, and the savings it has afforded the UGTL, is a strong motivator for the Green Initiative. The highlights of this system will be shared with other labs, both at MIT and beyond, that may benefit from similar adjustments.


    Amy Watterson: Model engineer

    “I love that we are doing something that no one else is doing.”

    Amy Watterson is excited when she talks about SPARC, the pilot fusion plant being developed by MIT spinoff Commonwealth Fusion Systems (CFS). Since being hired as a mechanical engineer at the Plasma Science and Fusion Center (PSFC) two years ago, Watterson has found her skills stretching to accommodate the multiple needs of the project.

    Fusion, which fuels the sun and stars, has long been sought as a carbon-free energy source for the world. For decades researchers have pursued the “tokamak,” a doughnut-shaped vacuum chamber where hot plasma can be contained by magnetic fields and heated to the point where fusion occurs. Sustaining the fusion reactions long enough to draw energy from them has been a challenge.

    Watterson is intimately aware of this difficulty. Much of her life she has heard the quip, “Fusion is 50 years away and always will be.” The daughter of PSFC research scientist Catherine Fiore, who headed the PSFC’s Office of Environment, Safety and Health, and Reich Watterson, an optical engineer working at the center, she had watched her parents devote years to making fusion a reality. She determined before entering Rensselaer Polytechnic Institute that she could forgo any attempt to follow her parents into a field that might not produce results during her career.

    Working on SPARC has changed her mindset. Taking advantage of a novel high-temperature superconducting tape, SPARC’s magnets will be compact while generating magnetic fields stronger than would be possible from other mid-sized tokamaks, and producing more fusion power. It suggests a high-field device that produces net fusion gain is not 50 years away. SPARC is scheduled to begin operation in 2025.

    An education in modeling

    Watterson’s current excitement, and focus, is due to an approaching milestone for SPARC: a test of the Toroidal Field Magnet Coil (TFMC), a scaled prototype for the HTS magnets that will surround SPARC’s toroidal vacuum chamber. Its design and manufacture have been shaped by computer models and simulations. As part of a large research team, Watterson has received an education in modeling over the past two years.

    Computer models move scientific experiments forward by allowing researchers to predict what will happen to an experiment — or its materials — if a parameter is changed. By modeling a component of the TFMC, for example, researchers can test how it is affected by varying amounts of current, different temperatures, or different materials. With this information they can make choices that will improve the success of the experiment.

    In preparation for the magnet testing, Watterson has modeled aspects of the cryogenic system that will circulate helium gas around the TFMC to keep it cold enough to remain superconducting. Taking into consideration the amount of cooling entering the system, the flow rate of the helium, the resistance created by valves and transfer lines, and other parameters, she can model how much helium flow will be necessary to guarantee the magnet stays cold enough. Adjusting a parameter can make the difference between a magnet remaining superconducting and becoming overheated or even damaged.

    Watterson and her teammates have also modeled pressures and stress on the inside of the TFMC. Pumping helium through the coil to cool it down will add 20 atmospheres of pressure, which could create a degree of flex in elements of the magnet that are welded down. Modeling can help determine how much pressure a weld can sustain.

    “How thick does a weld need to be, and where should you put the weld so that it doesn’t break — that’s something you don’t want to leave until you’re finally assembling it,” says Watterson.

    Modeling the behavior of helium is particularly challenging because its properties change significantly as the pressure and temperature change.

    “A few degrees or a little pressure will affect the fluid’s viscosity, density, thermal conductivity, and heat capacity,” says Watterson. “The flow has different pressures and temperatures at different places in the cryogenic loop. You end up with a set of equations that are very dependent on each other, which makes it a challenge to solve.”
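    Her point about mutually dependent equations can be illustrated with a minimal fixed-point iteration: fluid properties are evaluated at a guessed temperature, the temperature is recomputed from those properties, and the loop repeats until the two agree. The property formula below is a hypothetical stand-in, not real helium data.

```python
def cp(T):
    # Heat capacity in J/(kg*K); a made-up, temperature-dependent toy model.
    return 5200.0 * (1.0 + 0.02 * (T - 20.0))

def outlet_temperature(mdot, T_in, heat_load, iterations=50):
    """Fixed-point iteration: guess an outlet temperature, evaluate the
    properties at the mean temperature, recompute the outlet, and repeat
    until the estimate is self-consistent."""
    T_out = T_in
    for _ in range(iterations):
        T_mean = 0.5 * (T_in + T_out)
        T_out = T_in + heat_load / (mdot * cp(T_mean))
    return T_out

# 50 g/s of gas entering at 20 K, absorbing a 100 W heat load (made-up numbers).
print(outlet_temperature(0.05, 20.0, 100.0))
```

    A real cryogenic model couples many such relations (viscosity, density, thermal conductivity, valve and line resistances) around the whole loop, which is what makes the system Watterson describes so challenging to solve.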

    Role model

    Watterson notes that her modeling depends on the contributions of colleagues at the PSFC, and praises the collaborative spirit among researchers and engineers, a community that now feels like family. Her teammates have been her mentors. “I’ve learned so much more on the job in two years than I did in four years at school,” she says.

    She realizes that having her mother as a role model in her own family has always made it easier for her to imagine becoming a scientist or engineer. Tracing her early passion for engineering to a middle school Lego robotics tournament, her eyes widen as she talks about the need for more female engineers, and the importance of encouraging girls to believe they are equal to the challenge.

    “I want to be a role model and tell them ‘I’m a successful engineer, you can be too.’ Something I run into a lot is that little girls will say, ‘I can’t be an engineer, I’m not cut out for that.’ And I say, ‘Well that’s not true. Let me show you. If you can make this Lego robot, then you can be an engineer.’ And it turns out they usually can.”

    Then, as if making an adjustment to one of her computer models, she continues.

    “Actually, they always can.”


    A new approach to preventing human-induced earthquakes

    When humans pump large volumes of fluid into the ground, they can set off potentially damaging earthquakes, depending on the underlying geology. This has been the case in certain oil- and gas-producing regions, where wastewater, often mixed with oil, is disposed of by injecting it back into the ground — a process that has triggered sizable seismic events in recent years.

    Now MIT researchers, working with an interdisciplinary team of scientists from industry and academia, have developed a method to manage such human-induced seismicity, and have demonstrated that the technique successfully reduced the number of earthquakes occurring in an active oil field.

    Their results, appearing today in Nature, could help mitigate earthquakes caused by the oil and gas industry, not just from the injection of wastewater produced with oil, but also that produced from hydraulic fracturing, or “fracking.” The team’s approach could also help prevent quakes from other human activities, such as the filling of water reservoirs and aquifers, and the sequestration of carbon dioxide in deep geologic formations.

    “Triggered seismicity is a problem that goes way beyond producing oil,” says study lead author Bradford Hager, the Cecil and Ida Green Professor of Earth Sciences in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “This is a huge problem for society that will have to be confronted if we are to safely inject carbon dioxide into the subsurface. We demonstrated the kind of study that will be necessary for doing this.”

    The study’s co-authors include Ruben Juanes, professor of civil and environmental engineering at MIT, and collaborators from the University of California at Riverside, the University of Texas at Austin, Harvard University, and Eni, a multinational oil and gas company based in Italy.

    Safe injections

    Both natural and human-induced earthquakes occur along geologic faults, or fractures between two blocks of rock in the Earth’s crust. In stable periods, the rocks on either side of a fault are held in place by the pressures generated by surrounding rocks. But when a large volume of fluid is suddenly injected at high rates, it can upset a fault’s fluid stress balance. In some cases, this sudden injection can lubricate a fault and cause rocks on either side to slip and trigger an earthquake.

    The most common source of such fluid injections is from the oil and gas industry’s disposal of wastewater that is brought up along with oil. Field operators dispose of this water through injection wells that continuously pump the water back into the ground at high pressures.

    “There’s a lot of water produced with the oil, and that water is injected into the ground, which has caused a large number of quakes,” Hager notes. “So, for a while, oil-producing regions in Oklahoma had more magnitude 3 quakes than California, because of all this wastewater that was being injected.”

    In recent years, a similar problem arose in southern Italy, where injection wells on oil fields operated by Eni triggered microseisms in an area where large naturally occurring earthquakes had previously occurred. The company, looking for ways to address the problem, sought consultation from Hager and Juanes, both leading experts in seismicity and subsurface flows.

    “This was an opportunity for us to get access to high-quality seismic data about the subsurface, and learn how to do these injections safely,” Juanes says.

    Seismic blueprint

    The team made use of detailed information, accumulated by the oil company over years of operation in the Val d’Agri oil field, a region of southern Italy that lies in a tectonically active basin. The data included information about the region’s earthquake record, dating back to the 1600s, as well as the structure of rocks and faults, and the state of the subsurface corresponding to the various injection rates of each well.

    This video shows the change in stress on the geologic faults of the Val d’Agri field from 2001 to 2019, as predicted by a new MIT-derived model. Video credit: A. Plesch (Harvard University)

    This video shows small earthquakes occurring on the Costa Molina fault within the Val d’Agri field from 2004 to 2016. Each event is shown for two years fading from an initial bright color to the final dark color. Video credit: A. Plesch (Harvard University)

    The researchers integrated these data into a coupled subsurface flow and geomechanical model, which predicts how the stresses and strains of underground structures evolve as the volume of pore fluid, such as from the injection of water, changes. They connected this model to an earthquake mechanics model in order to translate the changes in underground stress and fluid pressure into a likelihood of triggering earthquakes. They then quantified the rate of earthquakes associated with various rates of water injection, and identified scenarios that were unlikely to trigger large quakes.

    When they ran the models using data from 1993 through 2016, the predictions of seismic activity matched with the earthquake record during this period, validating their approach. They then ran the models forward in time, through the year 2025, to predict the region’s seismic response to three different injection rates: 2,000, 2,500, and 3,000 cubic meters per day. The simulations showed that large earthquakes could be avoided if operators kept injection rates at 2,000 cubic meters per day — a flow rate comparable to a small public fire hydrant.
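    The qualitative logic, in which expected seismicity climbs steeply once injection-driven pore pressure approaches a fault's strength margin, can be sketched with a toy model. Every constant below is invented for illustration; this is not the study's coupled geomechanical model.

```python
import math

def expected_quakes_per_year(injection_m3_per_day,
                             pressure_per_rate=0.002,  # MPa per (m3/day), made up
                             strength_margin=5.0,      # MPa of headroom, made up
                             stress_scale=1.0,         # MPa sensitivity, made up
                             background=10.0):         # events/year at the margin, made up
    # Toy model: pore pressure on the fault rises linearly with the
    # injection rate, and the event rate responds exponentially to the
    # remaining strength margin (in the spirit of rate-and-state
    # seismicity models).
    dp = pressure_per_rate * injection_m3_per_day
    return background * math.exp((dp - strength_margin) / stress_scale)

for rate in (2000, 2500, 3000):
    print(rate, round(expected_quakes_per_year(rate), 1))
```

    The steep nonlinearity is the point: in the study's simulations, the difference between 3,000 and 2,000 cubic meters per day was the difference between triggering large quakes and avoiding them.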

    Eni field operators implemented the team’s recommended rate at the oil field’s single water injection well over a 30-month period between January 2017 and June 2019. In this time, the team observed only a few tiny seismic events, which coincided with brief periods when operators went above the recommended injection rate.

    “The seismicity in the region has been very low in these two-and-a-half years, with around four quakes of 0.5 magnitude, as opposed to hundreds of quakes, of up to 3 magnitude, that were happening between 2006 and 2016,” Hager says. 

    The results demonstrate that operators can successfully manage earthquakes by adjusting injection rates, based on the underlying geology. Juanes says the team’s modeling approach may help to prevent earthquakes related to other processes, such as the building of water reservoirs and the sequestration of carbon dioxide — as long as there is detailed information about a region’s subsurface.

    “A lot of effort needs to go into understanding the geologic setting,” says Juanes, who notes that, if carbon sequestration were carried out on depleted oil fields, “such reservoirs could have this type of history, seismic information, and geologic interpretation that you could use to build similar models for carbon sequestration. We show it’s at least possible to manage seismicity in an operational setting. And we offer a blueprint for how to do it.”

    This research was supported, in part, by Eni.


    Manipulating magnets in the quest for fusion

    “You get the high field, you get the performance.”

    Senior Research Scientist Brian LaBombard is summarizing what might be considered a guiding philosophy behind designing and engineering fusion devices at MIT’s Plasma Science and Fusion Center (PSFC). Beginning in 1972 with the Alcator A tokamak, through Alcator C (1978) and Alcator C-Mod (1991), the PSFC has used magnets with high fields to confine the hot plasma in compact, high-performance tokamaks. Joining what was then the Plasma Fusion Center as a graduate student in 1978, just as Alcator A was finishing its run, LaBombard is one of the few who has worked with each iteration of the high-field concept. Now he has turned his attention to the PSFC’s latest fusion venture, a fusion energy project called SPARC.

    Designed in collaboration with MIT spinoff Commonwealth Fusion Systems (CFS), SPARC employs novel high-temperature superconducting (HTS) magnets at high field to achieve fusion that will produce net energy gain. Some of these magnets will wrap toroidally around the tokamak’s doughnut-shaped vacuum chamber, confining fusion reactions and preventing damage to the walls of the device.

    The PSFC has spent three years researching, developing, and manufacturing a scaled version of these toroidal field (TF) coils — the toroidal field model coil, or TFMC. Before the TF coils can be built for SPARC, LaBombard and his team need to test the model coil under the conditions that it will experience in this tokamak.

    HTS magnets need to be cooled in order to remain superconducting, and to be protected from the heat generated by current. For testing, the TFMC will be enclosed in a cryostat, cooled to the low temperatures needed for eventual tokamak operation, and charged with current to produce a magnetic field. How the magnet responds as current is supplied to the coil will determine whether the technology is in hand to construct the 18 TF coils for SPARC.

    A history of achievement

    That LaBombard is part of the PSFC’s next fusion project is not unusual; that he is involved in designing, engineering, and testing the magnets is. Until 2018, when he led the R&D team for one of the magnet designs being considered for SPARC, LaBombard’s 30-plus years of celebrated research had focused on other areas of the fusion question.

    As a graduate student, he gained early acclaim for the research he reported in his PhD thesis. Working on Alcator C, he made groundbreaking discoveries about the plasma physics in the “boundary” region of the tokamak, between the edge of the fusing core and the wall of the machine. With typical modesty, LaBombard credits some of his success to the fact that the topic was not well-studied, and that Alcator C provided measurements not possible on other machines.

    “People knew about the boundary, but nobody was really studying it in detail. On Alcator C, there were interesting phenomena, such as marfes [multifaceted asymmetric radiation from the edge], being detected for the first time. This pushed me to make boundary layer measurements in great detail that no one had ever seen before. It was all new territory, so I made a big splash.”

    That splash established him as a leading researcher in the field of boundary plasmas. After a two-year turn at the University of California at Los Angeles working on a plasma-wall test facility called PISCES, LaBombard, who grew up in New England, was happy to return to MIT to join the PSFC’s new Alcator C-Mod project.

    Over the next 28 years of C-Mod’s construction phase and operation, LaBombard continued to make groundbreaking contributions to understanding tokamak edge and divertor plasmas, and to design internal components that can survive the harsh conditions and provide plasma control — including C-Mod’s vertical target plate divertor and a unique divertor cryopump system. That experience led him to conceive of the “X-point target divertor” for handling extreme fusion power exhaust and to propose a national Advanced Divertor tokamak eXperiment (ADX) to test such ideas.

    All along, LaBombard’s true passion was in creating revolutionary diagnostics to unfold boundary layer physics and in guiding graduate students to do the same: an Omegatron, to measure impurity concentrations directly in the boundary plasma, resolved by charge-to-mass ratio; fast-scanning Langmuir-Mach probes to measure plasma flows; a Shoelace Antenna to provide insight into plasma fluctuations at the edge; and a Mirror Langmuir Probe for real-time measurement of plasma turbulence at high bandwidth.

    Switching sides

    His expertise established, he could have continued this focus on the edge of the plasma through collaborations with other laboratories and at the PSFC. Instead, he finds himself on the other side of the vacuum chamber, immersed in magnet design and technology. Challenged with finding an effective HTS magnet design for SPARC, he and his team were able to propose a winning strategy, one that seemed most likely to achieve the compact high field and high performance that PSFC tokamaks have been known for.

    LaBombard is stimulated by his new direction and excited about the upcoming test of the TFMC. His new role takes advantage of his physics background in electricity and magnetism. It also supports his passion for designing and building things, which he honed as high school apprentice to his machinist father and explored professionally building systems for Alcator C-Mod.

    “I view my principal role as making sure the TF coil works electrically, the way it’s supposed to,” he says. “So it produces the magnetic field without damaging the coil.”

    A successful test would validate the understanding of how the new magnet technology works and prepare the team to build magnets for SPARC.

    Among those overseeing the hours of TFMC testing will be graduate students, current and former, reminding LaBombard of his own student days working on Alcator C, and of his years supervising students on Alcator C-Mod.

    “Those students were directly involved with Alcator C-Mod. They would jump in, make things happen — and as a team. This team spirit really enabled everyone to excel.

    “And looking to when SPARC was taking shape, you could see that across the board, from the new folks to the younger folks, they really got engaged by the spirit of Alcator — by recognition of the plasma performance that can be made possible by high magnetic fields.”

    He laughs as he looks to the past and to the future.

    “And they are taking it to SPARC.”