More stories

  • MIT Energy Initiative launches the Future Energy Systems Center

    The MIT Energy Initiative (MITEI) has launched a new research consortium — the Future Energy Systems Center — to address the climate crisis and the role energy systems can play in solving it. This integrated effort engages researchers from across all of MIT to help the global community reach its goal of net-zero carbon emissions. The center examines the accelerating energy transition and collaborates with industrial leaders to reform the world’s energy systems. The center is part of “Fast Forward: MIT’s Climate Action Plan for the Decade,” MIT’s multi-pronged effort announced last year to address the climate crisis.

    The Future Energy Systems Center investigates the emerging technology, policy, demographics, and economics reshaping the landscape of energy supply and demand. The center conducts integrative analysis of the entire energy system — a holistic approach essential to understanding the cross-sectoral impact of the energy transition.

    “We must act quickly to get to net-zero greenhouse gas emissions. At the same time, we have a billion people around the world with inadequate access, or no access, to electricity — and we need to deliver it to them,” says MITEI Director Robert C. Armstrong, the Chevron Professor of Chemical Engineering. “The Future Energy Systems Center combines MIT’s deep knowledge of energy science and technology with advanced tools for systems analysis to examine how advances in technology and system economics may respond to various policy scenarios.”  

    The overarching focus of the center is integrative analysis of the entire energy system, providing insights into the complex multi-sectoral transformations needed to alter the three major energy-consuming sectors of the economy — transportation, industry, and buildings — in conjunction with three major decarbonization-enabling technologies — electricity, energy storage and low-carbon fuels, and carbon management. “Deep decarbonization of our energy system requires an economy-wide perspective on the technology options, energy flows, materials flows, life-cycle emissions, costs, policies, and socioeconomic consequences,” says Randall Field, the center’s executive director. “A systems approach is essential in enabling cross-disciplinary teams to work collaboratively to address the existential crisis of climate change.”

    Through techno-economic and systems-oriented research, the center analyzes these important interactions. For example:

    •  Increased reliance on variable renewable energy, such as wind and solar, and greater electrification of transportation, industry, and buildings will require expansion of demand management and other solutions for balancing electricity supply and demand across these areas.

    •  Likewise, balancing supply and demand will require deploying grid-scale energy storage and converting electricity to low-carbon fuels (hydrogen and liquid fuels), which can in turn play a vital role in the energy transition for hard-to-decarbonize segments of transportation, industry, and buildings.

    •  Carbon management (carbon dioxide capture from industry point sources and from air and oceans; utilization/conversion to valuable products; transport; storage) will also play a critical role in decarbonizing industry, electricity, and fuels — both as carbon-mitigation and negative-carbon solutions.

    As a member-supported research consortium, the center collaborates with industrial experts and leaders — from both the consumer and supplier sides of energy — to gain insights that help researchers anticipate challenges and opportunities of deploying technology at the scale needed to achieve decarbonization. “The Future Energy Systems Center gives us a powerful way to engage with industry to accelerate the energy transition,” says Armstrong. “Working together, we can better understand how our current technology toolbox can be more effectively put to use now to reduce emissions, and what new technologies and policies will ultimately be needed to reach net-zero.”

    A steering committee, made up of 11 MIT professors and led by Armstrong, selects projects to create a research program with high impact on decarbonization, while leveraging MIT strengths and addressing interests of center members in pragmatic and scalable solutions. “MIT — through our recently released climate action plan — is committed to moving with urgency and speed to help wring carbon dioxide emissions out of the global economy to resolve the growing climate crisis,” says Armstrong. “We have no time to waste.”

    The center members to date are: AECI, Analog Devices, Chevron, ConocoPhillips, Copec, Dominion, Duke Energy, Enerjisa, Eneva, Eni, Equinor, Eversource, Exelon, ExxonMobil, Ferrovial, Iberdrola, IHI, National Grid, Raizen, Repsol, Rio Tinto, Shell, Tata Power, Toyota Research Institute, and Washington Gas.

  • Pricing carbon, valuing people

    In November, inflation hit a 39-year high in the United States. The consumer price index was up 6.8 percent from the previous year due to major increases in the cost of rent, food, motor vehicles, gasoline, and other common household expenses. While inflation impacts the entire country, its effects are not felt equally. At greatest risk are low- and middle-income Americans who may lack sufficient financial reserves to absorb such economic shocks.

    Meanwhile, scientists, economists, and activists across the political spectrum continue to advocate for another potential systemic economic change that many fear will also put lower-income Americans at risk: the imposition of a national carbon price, fee, or tax. Framed by proponents as the most efficient and cost-effective way to reduce greenhouse gas emissions and meet climate targets, a carbon penalty would incentivize producers and consumers to shift expenditures away from carbon-intensive products and services (e.g., coal- or natural gas-generated electricity) and toward low-carbon alternatives (e.g., 100 percent renewable electricity). But if not implemented in a way that takes differences in household income into account, this policy strategy, like inflation, could place an unequal and untenable economic burden on low- and middle-income Americans.

    To garner support from policymakers, carbon-penalty proponents have advocated for policies that recycle revenues from carbon penalties to all or lower-income taxpayers in the form of payroll tax reductions or lump-sum payments. And yet some of these proposed policies run the risk of reducing the overall efficiency of the U.S. economy, which would lower the nation’s GDP and impede its economic growth.

    This raises the question: Is there a sweet spot at which a national carbon-penalty revenue-recycling policy can avoid both inflicting economic harm on lower-income Americans at the household level and degrading economic efficiency at the national level?

    In search of that sweet spot, researchers at the MIT Joint Program on the Science and Policy of Global Change assess the economic impacts of four different carbon-penalty revenue-recycling policies:

    •  direct rebates from revenues to households via lump-sum transfers;

    •  indirect refunding of revenues to households via a proportional reduction in payroll taxes;

    •  direct rebates from revenues to households, but only for low- and middle-income groups, with remaining revenues recycled via a proportional reduction in payroll taxes; and

    •  direct, higher rebates for poor households, with remaining revenues recycled via a proportional reduction in payroll taxes.

    To perform the assessment, the Joint Program researchers integrate a U.S. economic model (MIT U.S. Regional Energy Policy) with a dataset (Bureau of Labor Statistics’ Consumer Expenditure Survey) providing consumption patterns and other socioeconomic characteristics for 15,000 U.S. households. Using the combined model, they evaluate the distributional impacts and potential trade-offs between economic equity and efficiency of all four carbon-penalty revenue-recycling policies.
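    As a rough illustration of how such distributional accounting works, consider the minimal Python sketch below. It is an invented toy with five illustrative income quintiles, not the MIT U.S. Regional Energy Policy model or the Consumer Expenditure Survey microdata; every number is an assumption, chosen only to show why lump-sum rebates tend to read as progressive and payroll-tax cuts as mildly regressive.

    ```python
    # Toy sketch of carbon-revenue recycling across income quintiles.
    # Illustrative numbers only -- NOT the MIT USREP model or CES microdata.
    # Assumption: lower-income households spend a larger share of income on
    # carbon-intensive goods, so a flat carbon price alone is regressive.

    quintile_income = [20_000, 45_000, 70_000, 110_000, 250_000]  # $/household
    carbon_share = [0.080, 0.060, 0.050, 0.040, 0.025]            # income share hit by the price

    def burden(income, share, rebate=0.0, payroll_cut=0.0):
        """Net household cost of the carbon price under a recycling scheme."""
        carbon_cost = income * share           # extra spending due to the price
        payroll_refund = income * payroll_cut  # proportional payroll-tax relief
        return carbon_cost - rebate - payroll_refund

    # Revenue collected per household (average across quintiles).
    revenue = sum(i * s for i, s in zip(quintile_income, carbon_share)) / 5
    mean_income = sum(quintile_income) / 5

    # Policy A: equal lump-sum rebate to every household (progressive).
    # Policy B: all revenue returned via a proportional payroll-tax cut.
    for label, rebate, cut in [("lump-sum", revenue, 0.0),
                               ("payroll cut", 0.0, revenue / mean_income)]:
        net = [burden(i, s, rebate, cut) for i, s in zip(quintile_income, carbon_share)]
        print(label, [f"{n / i:+.1%}" for n, i in zip(net, quintile_income)])
    ```

    In this toy, the equal rebate leaves the lowest quintile several percent of income better off while the payroll-tax cut favors higher earners; the study's blended policies sit between these two poles.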

    The researchers find that household rebates have progressive impacts on consumers’ financial well-being, with the greatest benefits going to the lowest-income households, while policies centered on improving the efficiency of the economy (e.g., payroll tax reductions) have slightly regressive household-level financial impacts. In a nutshell, the trade-off is between rebates that provide more equity and less economic efficiency versus tax cuts that deliver the opposite result. The latter two policy options, which combine rebates to lower-income households with payroll tax reductions, result in an optimal blend of sufficiently progressive financial results at the household level and economic efficiency at the national level. Results of the study are published in the journal Energy Economics.

    “We have determined that only a portion of carbon-tax revenues is needed to compensate low-income households and thus reduce inequality, while the rest can be used to improve the economy by reducing payroll or other distortionary taxes,” says Xaquin García-Muros, lead author of the study, a postdoc at the MIT Joint Program who is affiliated with the Basque Centre for Climate Change in Spain. “Therefore, we can eliminate potential trade-offs between efficiency and equity, and promote a just and efficient energy transition.”

    “If climate policies increase the gap between rich and poor households or reduce the affordability of energy services, then these policies might be rejected by the public and, as a result, attempts to decarbonize the economy will be less efficient,” says Joint Program Deputy Director Sergey Paltsev, a co-author of the study. “Our findings provide guidance to decision-makers to advance more well-designed policies that deliver economic benefits to the nation as a whole.” 

    The study’s novel integration of a national economic model with household microdata creates a new and powerful platform to further investigate key differences among households that can help inform policies aimed at a just transition to a low-carbon economy.

  • Overcoming a bottleneck in carbon dioxide conversion

    If researchers could find a way to chemically convert carbon dioxide into fuels or other products, they might make a major dent in greenhouse gas emissions. But many such processes that have seemed promising in the lab haven’t performed as expected in scaled-up formats that would be suitable for use with a power plant or other emissions sources.

    Now, researchers at MIT have identified, quantified, and modeled a major reason for poor performance in such conversion systems. The culprit turns out to be a local depletion of the carbon dioxide gas right next to the electrodes being used to catalyze the conversion. The problem can be alleviated, the team found, by simply pulsing the current off and on at specific intervals, allowing time for the gas to build back up to the needed levels next to the electrode.

    The findings, which could spur progress on developing a variety of materials and designs for electrochemical carbon dioxide conversion systems, were published today in the journal Langmuir, in a paper by MIT postdoc Álvaro Moreno Soto, graduate student Jack Lake, and professor of mechanical engineering Kripa Varanasi.

    “Carbon dioxide mitigation is, I think, one of the important challenges of our time,” Varanasi says. While much of the research in the area has focused on carbon capture and sequestration, in which the gas is pumped into some kind of deep underground reservoir or converted to an inert solid such as limestone, another promising avenue has been converting the gas into other carbon compounds such as methane or ethanol, to be used as fuel, or ethylene, which serves as a precursor to useful polymers.

    There are several ways to do such conversions, including electrochemical, thermocatalytic, photothermal, or photochemical processes. “Each of these has problems or challenges,” Varanasi says. The thermal processes require very high temperatures, and they don’t produce very high-value chemical products, which is a challenge with the light-activated processes as well, he says. “Efficiency is always at play, always an issue.”

    The team has focused on the electrochemical approaches, with a goal of getting “higher-C products” — compounds that contain more carbon atoms and tend to be higher-value fuels because of their energy per weight or volume. In these reactions, the biggest challenge has been curbing competing reactions that can take place at the same time, especially the splitting of water molecules into oxygen and hydrogen.

    The reactions take place as a stream of liquid electrolyte with the carbon dioxide dissolved in it passes over a metal catalytic surface that is electrically charged. But as the carbon dioxide gets converted, it leaves behind a region in the electrolyte stream where it has essentially been used up, and so the reaction within this depleted zone turns toward water splitting instead. This unwanted reaction uses up energy and greatly reduces the overall efficiency of the conversion process, the researchers found.

    “There’s a number of groups working on this, and a number of catalysts that are out there,” Varanasi says. “In all of these, I think the hydrogen co-evolution becomes a bottleneck.”

    One way of counteracting this depletion, they found, can be achieved by a pulsed system — a cycle of simply turning off the voltage, stopping the reaction and giving the carbon dioxide time to spread back into the depleted zone and reach usable levels again, and then resuming the reaction.
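    A minimal sketch of that depletion-and-recovery picture, assuming first-order consumption and replenishment (a cartoon of the effect, not the transport model in the Langmuir paper; rate constants and pulse timing are arbitrary):

    ```python
    import numpy as np

    # Toy model of CO2 concentration in the boundary layer next to the electrode.
    # When current is on, CO2 is consumed at a rate ~ k_rxn * C; when it is off,
    # diffusion replenishes it toward the bulk value C_bulk. First-order in both
    # directions -- a cartoon of the depletion effect, not the paper's model.

    C_bulk, k_rxn, k_diff = 1.0, 2.0, 0.5   # arbitrary units; rates in 1/s
    dt, t_end = 0.01, 20.0
    t = np.arange(0.0, t_end, dt)

    def run(pulse_period=None, duty=0.5):
        C = np.empty_like(t)
        C[0] = C_bulk
        for i in range(1, len(t)):
            on = True if pulse_period is None else (t[i] % pulse_period) < duty * pulse_period
            dC = k_diff * (C_bulk - C[i - 1]) - (k_rxn * C[i - 1] if on else 0.0)
            C[i] = C[i - 1] + dC * dt
        return C

    steady = run()                   # continuous operation: CO2 sits depleted
    pulsed = run(pulse_period=4.0)   # pulsed: CO2 recovers during each off phase
    print(f"continuous floor: {steady[-1]:.2f}, pulsed range: "
          f"{pulsed[len(t)//2:].min():.2f}-{pulsed[len(t)//2:].max():.2f}")
    ```

    Under continuous operation the near-electrode concentration settles at a depleted floor; with pulsing it climbs back toward the bulk value during each off phase, which is the effect the team exploits.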

    Often, the researchers say, groups have found promising catalyst materials but haven’t run their lab tests long enough to observe these depletion effects, and thus have been frustrated in trying to scale up their systems. Furthermore, the concentration of carbon dioxide next to the catalyst dictates the products that are made. Hence, depletion can also change the mix of products that are produced and can make the process unreliable. “If you want to be able to make a system that works at industrial scale, you need to be able to run things over a long period of time,” Varanasi says, “and you need to not have these kinds of effects that reduce the efficiency or reliability of the process.”

    The team studied three different catalyst materials, including copper, and “we really focused on making sure that we understood and can quantify the depletion effects,” Lake says. In the process they were able to develop a simple and reliable way of monitoring the efficiency of the conversion process as it happens, by measuring the changing pH levels, a measure of acidity, in the system’s electrolyte.

    In their tests, they used more sophisticated analytical tools to characterize reaction products, including gas chromatography for analysis of the gaseous products, and nuclear magnetic resonance characterization for the system’s liquid products. But their analysis showed that the simple pH measurement of the electrolyte next to the electrode during operation could provide a sufficient measure of the efficiency of the reaction as it progressed.

    This ability to easily monitor the reaction in real time could ultimately lead to a system optimized by machine-learning methods, controlling the production rate of the desired compounds through continuous feedback, Moreno Soto says.

    Now that the process is understood and quantified, other approaches to mitigating the carbon dioxide depletion might be developed, the researchers say, and could easily be tested using their methods.

    This work shows, Lake says, that “no matter what your catalyst material is” in such an electrocatalytic system, “you’ll be affected by this problem.” And now, by using the model they developed, it’s possible to determine exactly what kind of time window needs to be evaluated to get an accurate sense of the material’s overall efficiency and what kind of system operations could maximize its effectiveness.

    The research was supported by Shell, through the MIT Energy Initiative.

  • Courtney Lesoon and Elizabeth Yarina win Fulbright-Hays Scholarships

    Two MIT doctoral students in the MIT School of Architecture and Planning have received the prestigious Fulbright-Hays Scholarship for Doctoral Dissertation Research Award. Courtney Lesoon and Elizabeth “Lizzie” Yarina are the first awardees from MIT in more than a decade.

    The fellowship provides opportunities for doctoral students to engage in full-time dissertation research abroad. The program, funded by the U.S. Department of Education, is designed to contribute to the development and improvement of the study of modern foreign languages and area studies. Applicants anticipate pursuing a teaching career in the United States following completion of their dissertation. There were 138 individuals from 47 institutions named scholars for the 2021 cycle.

    Courtney Lesoon

    Lesoon is a doctoral candidate in the Aga Khan Program for Islamic Architecture, in the History, Theory and Criticism Section of the Department of Architecture. Lesoon earned her BA from College of the Holy Cross and was a 2012-13 Fulbright U.S. Student grantee to the United Arab Emirates, where her research concerned contemporary art and emerging cultural institutions. Her dissertation is titled “Spatializing Ahl al-ʿIlm: Learning and the Rise of the Early Islamic City.” Lesoon’s fieldwork will be done in Morocco, Egypt, and Turkey.

    “Courtney’s project presents an innovative idea that has not, to my knowledge, been investigated before,” says Nasser Rabbat, professor and director of the MIT Aga Khan Program. “How did the emergence and evolution of a particularly Islamic learning system affect the development of the city in the early Islamic period? Her work enriches the thinking about premodern urbanism and education everywhere by theorizing the intricate relationship between traveling, learning, and the city.”

    “I’ll be working in different manuscript collections in Morocco, Egypt, and Turkey to investigate where and how scholars were learning inside of the early Islamic city before the formal institutionalization of higher education,” says Lesoon. “I’m interested in how learning — as a set of social practices — informed urban life. My project speaks to two different fields: Islamic urbanism and Islamic intellectual history. I’m really excited about my time on Fulbright-Hays; it will be a really fruitful time for my research and writing.”

    Before arriving at MIT, Lesoon worked as a research assistant in the Art of the Middle East Department at the Los Angeles County Museum of Art. Recently, she was awarded the 2021 Margaret B. Ševčenko Prize for “the best unpublished essay written by a junior scholar” for her paper “The Sphero-conical as Apothecary Vessel: An Argument for Dedicated Use.” Lesoon earned her MA from the University of Michigan at Ann Arbor, where her thesis investigated an 18th-century “Damascus Room” and its acquisition as a collected interior in the United States.

    Lizzie Yarina

    Yarina is a doctoral candidate in the MIT Department of Urban Studies and Planning (DUSP) and a research fellow at the MIT Norman B. Leventhal Center for Advanced Urbanism. She is presently co-editing a volume on the relationship between climate models and the built environment with a multidisciplinary team of editors and contributors. Yarina was a research scientist at the MIT Urban Risk Lab, where she was part of a team examining alternatives to the Federal Emergency Management Agency’s post-disaster housing systems; she also conducted research on disaster preparedness in Japan. Her award supports her doctoral research under the title “Modeling the Mekong: Climate Adaptation Imaginaries in Delta Regions,” which will include fieldwork in Vietnam, the Netherlands, Thailand, and Cambodia.

    “Lizzie’s research brings together three dimensions critical to global well-being and sustainability: adapting to the inevitability of changing ecosystems wrought by the climate crisis; questioning the equity, appropriateness, and relationality of adaptation planning models spanning the global North and the global South; and understanding how to develop durable and just climate futures,” says Christopher Zegras, professor of mobility and urban planning and department head for DUSP. “Her work will be an important contribution toward the long-term health of our planet and of communities working to justly adapt to climate change.”

    Previously, Yarina was awarded a Fulbright U.S. Student grant to New Zealand to research spatial mapping and policy implications of Pacific Islander migration to New Zealand.

    “My dissertation project looks at climate adaptation planning in delta regions,” she says. “My focus is on Vietnam’s Mekong River Delta, but I’m also looking at how models that are used in delta adaptation planning move between different deltas, including the Netherlands Rhine Delta and the Mississippi Delta.”

    While working on her master’s at MIT, Yarina held a teaching fellowship in Singapore, where she conducted research on climate adaptation plans in four major cities in Southeast Asia.

    “Through that process I learned about the role of Dutch experts and Dutch models in shaping how climate adaptation planning was taking place in Southeast Asia,” she says. “This project expands on that work from looking at a single city to examining a regional plan at the scale of a delta.”

    Yarina holds a joint master’s in architecture and master’s in city planning from MIT, and a BS in architecture from the University of Michigan.

  • A dirt cheap solution? Common clay materials may help curb methane emissions

    Methane is a far more potent greenhouse gas than carbon dioxide, and it has a pronounced effect within the first two decades of its presence in the atmosphere. At the recent international climate negotiations in Glasgow, abatement of methane emissions was identified as a major priority in attempts to curb global climate change quickly.

    Now, a team of researchers at MIT has come up with a promising approach to controlling methane emissions and removing it from the air, using an inexpensive and abundant type of clay called zeolite. The findings are described in the journal ACS Environment Au, in a paper by doctoral student Rebecca Brenneis, Associate Professor Desiree Plata, and two others.

    Although many people associate atmospheric methane with drilling and fracking for oil and natural gas, those sources only account for about 18 percent of global methane emissions, Plata says. The vast majority of emitted methane comes from such sources as slash-and-burn agriculture, dairy farming, coal and ore mining, wetlands, and melting permafrost. “A lot of the methane that comes into the atmosphere is from distributed and diffuse sources, so we started to think about how you could take that out of the atmosphere,” she says.

    The answer the researchers found was something dirt cheap — in fact, a special kind of “dirt,” or clay. They used zeolite clays, a material so inexpensive that it is currently used to make cat litter. Treating the zeolite with a small amount of copper, the team found, makes the material very effective at absorbing methane from the air, even at extremely low concentrations.

    The system is simple in concept, though much work remains on the engineering details. In their lab tests, tiny particles of the copper-enhanced zeolite material, similar to cat litter, were packed into a reaction tube, which was then heated from the outside as the stream of gas, with methane levels ranging from just 2 parts per million up to 2 percent concentration, flowed through the tube. That range covers everything that might exist in the atmosphere, down to subflammable levels that cannot be burned or flared directly.

    The process has several advantages over other approaches to removing methane from air, Plata says. Other methods tend to use expensive catalysts such as platinum or palladium, require high temperatures of at least 600 degrees Celsius, and tend to require complex cycling between methane-rich and oxygen-rich streams, making the devices both more complicated and more risky, as methane and oxygen are highly combustible on their own and in combination.

    “The 600 degrees where they run these reactors makes it almost dangerous to be around the methane,” as well as the pure oxygen, Brenneis says. “They’re solving the problem by just creating a situation where there’s going to be an explosion.” Other engineering complications also arise from the high operating temperatures. Unsurprisingly, such systems have not found much use.

    As for the new process, “I think we’re still surprised at how well it works,” says Plata, who is the Gilbert W. Winslow Associate Professor of Civil and Environmental Engineering. The process seems to have its peak effectiveness at about 300 degrees Celsius, which requires far less energy for heating than other methane capture processes. It also can work at concentrations of methane lower than other methods can address, even small fractions of 1 percent, which most methods cannot remove, and does so in air rather than pure oxygen, a major advantage for real-world deployment.

    The method converts the methane into carbon dioxide. That might sound like a bad thing, given the worldwide efforts to combat carbon dioxide emissions. “A lot of people hear ‘carbon dioxide’ and they panic; they say ‘that’s bad,’” Plata says. But she points out that carbon dioxide is much less impactful in the atmosphere than methane, which is about 80 times stronger as a greenhouse gas over the first 20 years, and about 25 times stronger for the first century. This effect arises from the fact that methane turns into carbon dioxide naturally over time in the atmosphere. By accelerating that process, this method would drastically reduce the near-term climate impact, she says. And, even converting half of the atmosphere’s methane to carbon dioxide would increase levels of the latter by less than 1 part per million (about 0.2 percent of today’s atmospheric carbon dioxide) while saving about 16 percent of total radiative warming.
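    The parts-per-million arithmetic is easy to check. A back-of-envelope sketch, assuming roughly 1.9 ppm of atmospheric methane and 415 ppm of carbon dioxide (approximate recent values):

    ```python
    # Back-of-envelope check of the article's figure: converting half of
    # atmospheric methane to CO2 adds less than 1 ppm of CO2. Assumed
    # concentrations: CH4 ~ 1.9 ppm, CO2 ~ 415 ppm (approximate recent values).

    ch4_ppm, co2_ppm = 1.9, 415.0
    added = 0.5 * ch4_ppm    # each CH4 molecule oxidizes to one CO2 molecule
    print(f"CO2 added: {added:.2f} ppm "
          f"({added / co2_ppm:.1%} of today's CO2)")   # ~0.95 ppm, ~0.2%
    ```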

    The ideal location for such systems, the team concluded, would be in places where there is a relatively concentrated source of methane, such as dairy barns and coal mines. These sources already tend to have powerful air-handling systems in place, since a buildup of methane can be a fire, health, and explosion hazard. To work through the outstanding engineering details, the team has just been awarded a $2 million grant from the U.S. Department of Energy to continue to develop specific equipment for methane removal in these types of locations.

    “The key advantage of mining air is that we move a lot of it,” she says. “You have to pull fresh air in to enable miners to breathe, and to reduce explosion risks from enriched methane pockets. So, the volumes of air that are moved in mines are enormous.” The concentration of methane is too low to ignite, but it’s in the catalysts’ sweet spot, she says.

    Adapting the technology to specific sites should be relatively straightforward. The lab setup the team used in their tests consisted of  “only a few components, and the technology you would put in a cow barn could be pretty simple as well,” Plata says. However, large volumes of gas do not flow that easily through clay, so the next phase of the research will focus on ways of structuring the clay material in a multiscale, hierarchical configuration that will aid air flow.

    “We need new technologies for oxidizing methane at concentrations below those used in flares and thermal oxidizers,” says Rob Jackson, a professor of earth systems science at Stanford University, who was not involved in this work. “There isn’t a cost-effective technology today for oxidizing methane at concentrations below about 2,000 parts per million.”

    Jackson adds, “Many questions remain for scaling this and all similar work: How quickly will the catalyst foul under field conditions? Can we get the required temperatures closer to ambient conditions? How scalable will such technologies be when processing large volumes of air?”

    One potential major advantage of the new system is that the chemical process involved releases heat. By catalytically oxidizing the methane, in effect the process is a flame-free form of combustion. If the methane concentration is above 0.5 percent, the heat released is greater than the heat used to get the process started, and this heat could be used to generate electricity.
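    A back-of-envelope energy balance suggests where that 0.5 percent threshold might come from. The sketch below treats one mole of feed gas, uses textbook values for methane’s lower heating value and air’s heat capacity, and assumes a recuperator that recovers some fraction of the outlet heat; that recovery fraction is an assumed free parameter, not a number from the paper.

    ```python
    # Back-of-envelope energy balance for the autothermal threshold. Per mole
    # of feed gas: heat released by oxidizing the CH4 fraction x, versus the
    # sensible heat needed to lift the feed from 25 C to ~300 C. A recuperator
    # recovering a fraction eta of the outlet heat lowers the external duty.
    # Property values are textbook approximations; eta is an assumed parameter.

    LHV_CH4 = 802e3         # J/mol, lower heating value of methane
    CP_AIR = 29.1           # J/(mol K), approximate molar heat capacity of air
    DT = 300.0 - 25.0       # K, feed preheat

    def breakeven_fraction(eta):
        """CH4 mole fraction where reaction heat equals the unrecovered preheat duty."""
        return (1.0 - eta) * CP_AIR * DT / LHV_CH4

    for eta in (0.0, 0.5, 0.8):
        print(f"heat recovery {eta:.0%}: break-even CH4 ~ {breakeven_fraction(eta):.2%}")
    ```

    With roughly half the sensible heat recovered, this toy balance puts the break-even methane fraction near 0.5 percent, consistent with the figure quoted above.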

    The team’s calculations show that “at coal mines, you could potentially generate enough heat to generate electricity at the power plant scale, which is remarkable because it means that the device could pay for itself,” Plata says. “Most air-capture solutions cost a lot of money and would never be profitable. Our technology may one day be a counterexample.”

    Using the new grant money, she says, “over the next 18 months we’re aiming to demonstrate a proof of concept that this can work in the field,” where conditions can be more challenging than in the lab. Ultimately, they hope to be able to make devices that would be compatible with existing air-handling systems and could simply be an extra component added in place. “The coal mining application is meant to be at a stage that you could hand to a commercial builder or user three years from now,” Plata says.

    In addition to Plata and Brenneis, the team included Yale University PhD student Eric Johnson and former MIT postdoc Wenbo Shi. The work was supported by the Gerstner Philanthropies, Vanguard Charitable Trust, the Betty Moore Inventor Fellows Program, and MIT’s Research Support Committee.

  • Seeing the plasma edge of fusion experiments in new ways with artificial intelligence

    To make fusion energy a viable resource for the world’s energy grid, researchers need to understand the turbulent motion of plasmas: a mix of ions and electrons swirling around in reactor vessels. The plasma particles, following magnetic field lines in toroidal chambers known as tokamaks, must be confined long enough for fusion devices to produce significant gains in net energy, a challenge when the hot edge of the plasma (over 1 million degrees Celsius) is just centimeters away from the much cooler solid walls of the vessel.

    Abhilash Mathews, a PhD candidate in the Department of Nuclear Science and Engineering working at MIT’s Plasma Science and Fusion Center (PSFC), believes this plasma edge to be a particularly rich source of unanswered questions. A turbulent boundary, it is central to understanding plasma confinement, fueling, and the potentially damaging heat fluxes that can strike material surfaces — factors that impact fusion reactor designs.

    To better understand edge conditions, scientists focus on modeling turbulence at this boundary using numerical simulations that will help predict the plasma’s behavior. However, “first principles” simulations of this region are among the most challenging and time-consuming computations in fusion research. Progress could be accelerated if researchers could develop “reduced” computer models that run much faster, but with quantified levels of accuracy.

    For decades, tokamak physicists have regularly used a reduced “two-fluid theory” rather than higher-fidelity models to simulate boundary plasmas in experiments, despite uncertainty about accuracy. In a pair of recent publications, Mathews begins directly testing the accuracy of this reduced plasma turbulence model in a new way: he combines physics with machine learning.

    “A successful theory is supposed to predict what you’re going to observe,” explains Mathews, “for example, the temperature, the density, the electric potential, the flows. And it’s the relationships between these variables that fundamentally define a turbulence theory. What our work essentially examines is the dynamic relationship between two of these variables: the turbulent electric field and the electron pressure.”

    In the first paper, published in Physical Review E, Mathews employs a novel deep-learning technique that uses artificial neural networks to build representations of the equations governing the reduced fluid theory. With this framework, he demonstrates a way to compute the turbulent electric field from an electron pressure fluctuation in the plasma consistent with the reduced fluid theory. Models commonly used to relate the electric field to pressure break down when applied to turbulent plasmas, but this one is robust even to noisy pressure measurements.
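    As a schematic of the technique (not the paper’s implementation), the sketch below trains a small network to fit noisy pressure observations while penalizing the residual of a single toy relation, the E×B advection of pressure, standing in for the full drift-reduced two-fluid system; the architecture, synthetic data, and governing equation are all illustrative assumptions.

    ```python
    import torch

    # Schematic physics-informed network, illustrative only. Two fields,
    # p (electron pressure) and phi (electric potential), come from one
    # network of (x, y, t). The physics loss enforces a single toy relation,
    #     dp/dt + {phi, p} = 0,  {phi, p} = dphi/dx * dp/dy - dphi/dy * dp/dx,
    # i.e., pressure advected by the E-x-B flow. This stands in for the full
    # drift-reduced two-fluid system used in the actual work.

    torch.manual_seed(0)
    net = torch.nn.Sequential(
        torch.nn.Linear(3, 64), torch.nn.Tanh(),
        torch.nn.Linear(64, 64), torch.nn.Tanh(),
        torch.nn.Linear(64, 2),            # outputs: (p, phi)
    )

    def fields_and_residual(coords):
        coords = coords.requires_grad_(True)
        p, phi = net(coords).unbind(dim=1)
        gp, = torch.autograd.grad(p.sum(), coords, create_graph=True)
        gphi, = torch.autograd.grad(phi.sum(), coords, create_graph=True)
        # columns of the gradients: d/dx, d/dy, d/dt
        residual = gp[:, 2] + gphi[:, 0] * gp[:, 1] - gphi[:, 1] * gp[:, 0]
        return p, phi, residual

    # Synthetic "observations": a pressure blob drifting in x at speed 0.5.
    obs = torch.rand(512, 3)   # columns: x, y, t in [0, 1]
    p_obs = torch.exp(-((obs[:, 0] - 0.2 - 0.5 * obs[:, 2])**2
                        + (obs[:, 1] - 0.5)**2) / 0.02)
    p_obs = p_obs + 0.05 * torch.randn_like(p_obs)   # measurement noise

    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for step in range(500):
        opt.zero_grad()
        _, _, res = fields_and_residual(torch.rand(1024, 3))  # collocation points
        p_fit, _, _ = fields_and_residual(obs.clone())
        loss = ((p_fit - p_obs)**2).mean() + (res**2).mean()
        loss.backward()
        opt.step()
    print(f"final loss: {loss.item():.4f}")
    ```

    Note that the potential enters the loss only through the physics residual; that is what lets a network of this kind infer an unmeasured field from measured ones.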

    In the second paper, published in Physics of Plasmas, Mathews further investigates this connection, contrasting it against higher-fidelity turbulence simulations. This first-of-its-kind comparison of turbulence across models has previously been difficult — if not impossible — to evaluate precisely. Mathews finds that in plasmas relevant to existing fusion devices, the reduced fluid model’s predicted turbulent fields are consistent with high-fidelity calculations. In this sense, the reduced turbulence theory works. But to fully validate it, “one should check every connection between every variable,” says Mathews.

    Mathews’ advisor, Principal Research Scientist Jerry Hughes, notes that plasma turbulence is notoriously difficult to simulate, more so than the familiar turbulence seen in air and water. “This work shows that, under the right set of conditions, physics-informed machine-learning techniques can paint a very full picture of the rapidly fluctuating edge plasma, beginning from a limited set of observations. I’m excited to see how we can apply this to new experiments, in which we essentially never observe every quantity we want.”

    These physics-informed deep-learning methods pave new ways in testing old theories and expanding what can be observed from new experiments. David Hatch, a research scientist at the Institute for Fusion Studies at the University of Texas at Austin, believes these applications are the start of a promising new technique.

    “Abhi’s work is a major achievement with the potential for broad application,” he says. “For example, given limited diagnostic measurements of a specific plasma quantity, physics-informed machine learning could infer additional plasma quantities in a nearby domain, thereby augmenting the information provided by a given diagnostic. The technique also opens new strategies for model validation.”

    Mathews sees exciting research ahead.

    “Translating these techniques into fusion experiments for real edge plasmas is one goal we have in sight, and work is currently underway,” he says. “But this is just the beginning.”

    Mathews was supported in this work by the Manson Benedict Fellowship, Natural Sciences and Engineering Research Council of Canada, and U.S. Department of Energy Office of Science under the Fusion Energy Sciences program.

  • Helping to make nuclear fusion a reality

    Up until she served in the Peace Corps in Malawi, Rachel Bielajew was open to a career reboot. Having studied nuclear engineering as an undergraduate at the University of Michigan at Ann Arbor, graduate school had been on her mind. But seeing the drastic impacts of climate change play out in real-time in Malawi — the lives of the country’s subsistence farmers swing wildly, depending on the rains — convinced Bielajew of the importance of nuclear engineering. Bielajew was struck that her high school students in the small town of Chisenga had a shaky understanding of math, but universally understood global warming. “The concept of the changing world due to human impact was evident, and they could see it,” Bielajew says.

    Bielajew was looking to work on solutions that could positively impact global problems and feed her love of physics. Nuclear engineering, especially the study of fusion as a carbon-free energy source, checked off both boxes. Bielajew is now a fourth-year doctoral candidate in the Department of Nuclear Science and Engineering (NSE). She researches magnetic confinement fusion in the Plasma Science and Fusion Center (PSFC) with Professor Anne White.

    Researching fusion’s big challenge

    You need to confine plasma effectively in order to generate the extremely high temperatures (100 million degrees Celsius) fusion needs, without melting the walls of the tokamak, the device that hosts these reactions. Magnets can do the job, but “plasmas are weird, they behave strangely and are challenging to understand,” Bielajew says. Small instabilities in plasma can coalesce into fluctuating turbulence that can drive heat and particles out of the machine.

    In high-confinement mode, the edges of the plasma have less tolerance for such unruly behavior. “The turbulence gets damped out and sheared apart at the edge,” Bielajew says. This might seem like a good thing, but high-confinement plasmas have their own challenges. They are so tightly bound that they create edge-localized modes (ELMs), bursts of particles and energy that can be extremely damaging to the machine.

    The questions Bielajew is looking to answer: How do we get high confinement without ELMs? How do turbulence and transport play a role in plasmas? “We do not fully understand turbulence, even though we have studied it for a long time,” Bielajew says. “It is a big and important problem to solve for fusion to be a reality. I like that challenge.”

    A love of science

    Confronting such challenges head-on has been part of Bielajew’s toolkit since she was a child growing up in Ann Arbor, Michigan. Her father, Alex Bielajew, is a professor of nuclear engineering at the University of Michigan, and Bielajew’s mother also pursued graduate studies.

    Bielajew’s parents encouraged her to follow her own path and she found it led to her father’s chosen profession: nuclear engineering. Once she decided to pursue research in fusion, MIT stood out as a school she could set her sights on. “I knew that MIT had an extensive program in fusion and a lot of faculty in the field,” Bielajew says. The mechanics of the application were challenging: Chisenga had limited internet access, so Bielajew had to ride on the back of a pickup truck to meet a friend in a city a few hours away and use his phone as a hotspot to send the documents.

    A similar tenacity has surfaced in Bielajew’s approach to research during the Covid-19 pandemic. Working off a blueprint, Bielajew built the Correlation Cyclotron Emission Diagnostic, which measures turbulent electron temperature fluctuations. Through a collaboration, Bielajew conducts her plasma research at the ASDEX Upgrade tokamak in Germany. Traditionally, Bielajew would ship the diagnostic to Germany, follow it there to install it, and conduct the research in person. The pandemic threw a wrench in those plans, so Bielajew shipped the diagnostic and relied on team members to install it. She Zooms into the control room and trusts others to run the plasma experiments.

    DEI advocate

    Bielajew is very hands-on with another endeavor: improving diversity, equity, and inclusion (DEI) in her own backyard. Having grown up with parental encouragement and in an environment that never doubted her place as a woman in engineering, Bielajew realizes not everyone has the same opportunities. “I wish that the world was in a place where all I had to do was care about my research, but it’s not,” Bielajew says. While science can solve many problems, more fundamental ones about equity need humans to act in specific ways, she points out. “I want to see more women represented, more people of color. Everyone needs a voice in building a better world,” Bielajew says.

    To get there, Bielajew co-launched NSE’s Graduate Application Assistance Program, which connects underrepresented student applicants with NSE mentors. She has been the DEI officer with NSE’s student group, ANS, and is very involved in the department’s DEI committee.

    As for future research, Bielajew hopes to concentrate on the experiments that make her question existing paradigms about plasmas under high confinement. Bielajew has registered more head-scratching “hmm” moments than “a-ha” ones. Measurements from her experiments drive the need for more intensive study.

    Bielajew’s dogs, Dobby and Winky, keep her company through it all. They came home with her from Malawi.

  • Predator interactions chiefly determine where Prochlorococcus thrive

    Prochlorococcus are the smallest and most abundant photosynthesizing organisms on the planet. A single Prochlorococcus cell is dwarfed by a human red blood cell, yet globally the microbes number in the octillions and are responsible for a large fraction of the world’s oxygen production as they turn sunlight into energy.

    Prochlorococcus can be found in the ocean’s warm surface waters, and their population drops off dramatically in regions closer to the poles. Scientists have assumed that, as with many marine species, Prochlorococcus’ range is set by temperature: The colder the waters, the less likely the microbes are to live there.

    But MIT scientists have found that where the microbe lives is not determined primarily by temperature. While Prochlorococcus populations do drop off in colder waters, it’s a relationship with a shared predator, and not temperature, that sets the microbe’s range. These findings, published today in the Proceedings of the National Academy of Sciences, could help scientists predict how the microbes’ populations will shift with climate change.

    “People assume that if the ocean warms up, Prochlorococcus will move poleward. And that may be true, but not for the reason they’re predicting,” says study co-author Stephanie Dutkiewicz, senior research scientist in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “So, temperature is a bit of a red herring.”

    Dutkiewicz’s co-authors on the study are lead author and EAPS Research Scientist Christopher Follett, EAPS Professor Mick Follows, François Ribalet and Virginia Armbrust of the University of Washington, and Emily Zakem and David Caron of the University of Southern California.

    Temperature’s collapse

    While temperature is thought to set the range of Prochlorococcus and other phytoplankton in the ocean, Follett, Dutkiewicz, and their colleagues noticed a curious dissonance in the data.

    The team examined observations from several research cruises that sailed through the northeast Pacific Ocean in 2003, 2016, and 2017. Each vessel traversed different latitudes, sampling waters continuously and measuring concentrations of various species of bacteria and phytoplankton, including Prochlorococcus. 

    The MIT team used the publicly archived cruise data to map out the locations where Prochlorococcus noticeably decreased or collapsed, along with each location’s ocean temperature. Surprisingly, they found that Prochlorococcus’ collapse occurred in regions of widely varying temperatures, ranging from around 13 to 18 degrees Celsius. Curiously, the upper end of this range has been shown in lab experiments to be suitable conditions for Prochlorococcus to grow and thrive.
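    The mapping step itself is simple to sketch. Assuming an underway data file with hypothetical columns for latitude, sea-surface temperature, and Prochlorococcus counts (the real archived cruise files use their own formats), one might locate the collapse like this:

    ```python
    import pandas as pd

    # Sketch of the collapse-mapping step. Column names ("lat", "sst_c",
    # "pro_cells_ml") and the file name are hypothetical placeholders.
    # Moving along the track, flag where Prochlorococcus abundance has
    # fallen ~90 percent from its maximum and read off the temperature there.

    df = pd.read_csv("cruise_underway.csv").sort_values("lat")
    collapsed = df["pro_cells_ml"] < 0.1 * df["pro_cells_ml"].max()

    hits = df[collapsed]
    if not hits.empty:
        edge = hits.iloc[0]   # first collapsed sample on the poleward transect
        print(f"collapse near {edge['lat']:.1f} deg N at {edge['sst_c']:.1f} C")
    ```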

    “Temperature itself was not able to explain where we saw these drop-offs,” Follett says.

    Follett was also working out an alternate idea related to Prochlorococcus and nutrient supply. As a byproduct of its photosynthesis, the microbe produces carbohydrate — an essential nutrient for heterotrophic bacteria, which are single-celled organisms that do not photosynthesize but live off the organic matter produced by phytoplankton.

    “Somewhere along the way, I wondered, what would happen if this food source Prochlorococcus was producing increased? What if we took that knob and spun it?” Follett says.

    In other words, how would the balance of Prochlorococcus and bacteria shift if the bacteria’s food increased as a result of, say, an increase in other carbohydrate-producing phytoplankton? The team also wondered: If the bacteria in question were about the same size as Prochlorococcus, the two would likely share a common grazer, or predator. How would the grazer’s population also shift with a change in carbohydrate supply?

    “Then we went to the whiteboard and started writing down equations and solving them for various cases, and realized that as soon as you reach an environment where other species add carbohydrates to the mix, bacteria and grazers grow up and annihilate Prochlorococcus,” Dutkiewicz says.
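    Those whiteboard equations can be sketched as a toy apparent-competition model: Prochlorococcus and heterotrophic bacteria share one grazer, and the bacteria’s food supply has an external “knob.” The parameter values below are illustrative, not the study’s.

    ```python
    from scipy.integrate import solve_ivp

    # Toy apparent-competition model: Prochlorococcus (P) and heterotrophic
    # bacteria (B) share one grazer (Z). Bacteria grow on carbohydrate
    # supplied partly by P and partly by other phytoplankton (s_ext, the
    # "knob"). All rate constants are illustrative, not the study's.

    def rhs(t, y, s_ext):
        P, B, Z = y
        dP = 0.8 * P - 1.0 * P * Z                       # growth minus grazing
        dB = 0.5 * B * (0.3 * P + s_ext) - 1.0 * B * Z   # carbohydrate-fed growth minus grazing
        dZ = 0.3 * Z * (P + B) - 0.2 * Z                 # grazer eats both, with mortality
        return [dP, dB, dZ]

    for s_ext in (0.0, 2.0):   # low vs. high external carbohydrate supply
        sol = solve_ivp(rhs, (0.0, 400.0), [0.5, 0.5, 0.1], args=(s_ext,),
                        rtol=1e-8, atol=1e-10)
        print(f"s_ext={s_ext}: final Prochlorococcus biomass ~ {sol.y[0, -1]:.3f}")
    ```

    With the external supply off, the grazer is held in check by Prochlorococcus alone and the microbe persists; turn the knob up and the bacteria, and hence the grazer, grow independently of Prochlorococcus, whose net growth rate goes negative. That is the annihilation-by-shared-predator mechanism described above.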

    Nutrient shift

    To test this idea, the researchers employed simulations of ocean circulation and marine ecosystem interactions. The team ran the MITgcm, a general circulation model that simulates, in this case, the ocean currents and regions of upwelling waters around the world. They overlaid a biogeochemistry model that simulates how nutrients are redistributed in the ocean. To all of this, they linked a complex ecosystem model that simulates the interactions between many different species of bacteria and phytoplankton, including Prochlorococcus.

    When they ran the simulations without incorporating a representation of bacteria, they found that Prochlorococcus persisted all the way to the poles, contrary to theory and observations. When they added in the equations outlining the relationship between the microbe, bacteria, and a shared predator, Prochlorococcus’ range shifted away from the poles, matching the observations of the original research cruises.

    In particular, the team observed that Prochlorococcus thrived in waters with very low nutrient levels, and where it is the dominant source of food for bacteria. These waters also happen to be warm, and Prochlorococcus and bacteria live in balance, along with their shared predator. But in more nutrient-rich environments, such as polar regions, where cold water and nutrients are upwelled from the deep ocean, many more species of phytoplankton can thrive. Bacteria can then feast and grow on more food sources, and in turn feed and grow more of their shared predator. Prochlorococcus, unable to keep up, is quickly decimated.

    The results show that a relationship with a shared predator, and not temperature, sets Prochlorococcus’ range. Incorporating this mechanism into models will be crucial in predicting how the microbe — and possibly other marine species — will shift with climate change.

    “Prochlorococcus is a big harbinger of changes in the global ocean,” Dutkiewicz says. “If its range expands, that’s a canary — a sign that things have changed in the ocean by a great deal.”

    “There are reasons to believe its range will expand with a warming world,” Follett adds. “But we have to understand the physical mechanisms that set these ranges. And predictions just based on temperature will not be correct.”