More stories

  • Seeing the plasma edge of fusion experiments in new ways with artificial intelligence

    To make fusion energy a viable resource for the world’s energy grid, researchers need to understand the turbulent motion of plasmas: a mix of ions and electrons swirling around in reactor vessels. The plasma particles, following magnetic field lines in toroidal chambers known as tokamaks, must be confined long enough for fusion devices to produce significant gains in net energy, a challenge when the hot edge of the plasma (over 1 million degrees Celsius) is just centimeters away from the much cooler solid walls of the vessel.

    Abhilash Mathews, a PhD candidate in the Department of Nuclear Science and Engineering working at MIT’s Plasma Science and Fusion Center (PSFC), believes this plasma edge to be a particularly rich source of unanswered questions. A turbulent boundary, it is central to understanding plasma confinement, fueling, and the potentially damaging heat fluxes that can strike material surfaces — factors that impact fusion reactor designs.

    To better understand edge conditions, scientists focus on modeling turbulence at this boundary using numerical simulations that will help predict the plasma’s behavior. However, “first principles” simulations of this region are among the most challenging and time-consuming computations in fusion research. Progress could be accelerated if researchers could develop “reduced” computer models that run much faster, but with quantified levels of accuracy.

    For decades, tokamak physicists have regularly used a reduced “two-fluid theory” rather than higher-fidelity models to simulate boundary plasmas in experiments, despite uncertainty about its accuracy. In a pair of recent publications, Mathews begins directly testing the accuracy of this reduced plasma turbulence model in a new way: he combines physics with machine learning.

    “A successful theory is supposed to predict what you’re going to observe,” explains Mathews, “for example, the temperature, the density, the electric potential, the flows. And it’s the relationships between these variables that fundamentally define a turbulence theory. What our work essentially examines is the dynamic relationship between two of these variables: the turbulent electric field and the electron pressure.”

    In the first paper, published in Physical Review E, Mathews employs a novel deep-learning technique that uses artificial neural networks to build representations of the equations governing the reduced fluid theory. With this framework, he demonstrates a way to compute the turbulent electric field from an electron pressure fluctuation in the plasma consistent with the reduced fluid theory. Models commonly used to relate the electric field to pressure break down when applied to turbulent plasmas, but this one is robust even to noisy pressure measurements.
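
    The general recipe behind such physics-informed networks can be sketched compactly. In the toy example below (PyTorch), one network maps space-time coordinates to electron pressure and electric potential, and its training loss combines a misfit against synthetic pressure data with the residual of a schematic drift-type advection relation coupling the two fields. The residual, network size, and data here are illustrative stand-ins, not the equations or code from the paper.

      import torch
      import torch.nn as nn

      torch.manual_seed(0)

      class Net(nn.Module):
          """Maps space-time coordinates (x, y, t) to (p_e, phi)."""
          def __init__(self, width=64):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Linear(3, width), nn.Tanh(),
                  nn.Linear(width, width), nn.Tanh(),
                  nn.Linear(width, 2))

          def forward(self, xyt):
              return self.net(xyt)

      def grads(f, xyt):
          # Gradient of each output entry with respect to (x, y, t)
          return torch.autograd.grad(f, xyt, torch.ones_like(f), create_graph=True)[0]

      def loss_fn(model, xyt_data, pe_data, xyt_col):
          # Data term: the network must reproduce the observed electron pressure.
          data_loss = ((model(xyt_data)[:, 0:1] - pe_data) ** 2).mean()

          # Physics term: penalize the residual of a schematic reduced-model
          # relation (an E x B-style advection of pressure by the potential);
          # a placeholder for the actual drift-reduced two-fluid equations.
          xyt = xyt_col.requires_grad_(True)
          out = model(xyt)
          pe, phi = out[:, 0:1], out[:, 1:2]
          dpe, dphi = grads(pe, xyt), grads(phi, xyt)   # columns: d/dx, d/dy, d/dt
          residual = dpe[:, 2:3] + dphi[:, 0:1] * dpe[:, 1:2] - dphi[:, 1:2] * dpe[:, 0:1]
          return data_loss + (residual ** 2).mean()

      # Synthetic stand-in for pressure measurements, plus collocation points
      xyt_data = torch.rand(256, 3)
      pe_data = torch.sin(3 * xyt_data[:, 0:1]) * torch.cos(2 * xyt_data[:, 1:2])
      xyt_col = torch.rand(1024, 3)

      model = Net()
      opt = torch.optim.Adam(model.parameters(), lr=1e-3)
      for step in range(2000):
          opt.zero_grad()
          loss = loss_fn(model, xyt_data, pe_data, xyt_col)
          loss.backward()
          opt.step()
      # model(xyt)[:, 1] now gives a potential (hence an electric field) consistent
      # with both the pressure data and the imposed reduced-model physics.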

    In the second paper, published in Physics of Plasmas, Mathews further investigates this connection, contrasting it against higher-fidelity turbulence simulations. This first-of-its-kind comparison of turbulence across models has previously been difficult — if not impossible — to evaluate precisely. Mathews finds that in plasmas relevant to existing fusion devices, the reduced fluid model’s predicted turbulent fields are consistent with high-fidelity calculations. In this sense, the reduced turbulence theory works. But to fully validate it, “one should check every connection between every variable,” says Mathews.

    Mathews’ advisor, Principal Research Scientist Jerry Hughes, notes that plasma turbulence is notoriously difficult to simulate, more so than the familiar turbulence seen in air and water. “This work shows that, under the right set of conditions, physics-informed machine-learning techniques can paint a very full picture of the rapidly fluctuating edge plasma, beginning from a limited set of observations. I’m excited to see how we can apply this to new experiments, in which we essentially never observe every quantity we want.”

    These physics-informed deep-learning methods pave new ways in testing old theories and expanding what can be observed from new experiments. David Hatch, a research scientist at the Institute for Fusion Studies at the University of Texas at Austin, believes these applications are the start of a promising new technique.

    “Abhi’s work is a major achievement with the potential for broad application,” he says. “For example, given limited diagnostic measurements of a specific plasma quantity, physics-informed machine learning could infer additional plasma quantities in a nearby domain, thereby augmenting the information provided by a given diagnostic. The technique also opens new strategies for model validation.”

    Mathews sees exciting research ahead.

    “Translating these techniques into fusion experiments for real edge plasmas is one goal we have in sight, and work is currently underway,” he says. “But this is just the beginning.”

    Mathews was supported in this work by the Manson Benedict Fellowship, Natural Sciences and Engineering Research Council of Canada, and U.S. Department of Energy Office of Science under the Fusion Energy Sciences program.

  • Helping to make nuclear fusion a reality

    Up until she served in the Peace Corps in Malawi, Rachel Bielajew was open to a career reboot. Having studied nuclear engineering as an undergraduate at the University of Michigan at Ann Arbor, graduate school had been on her mind. But seeing the drastic impacts of climate change play out in real-time in Malawi — the lives of the country’s subsistence farmers swing wildly, depending on the rains — convinced Bielajew of the importance of nuclear engineering. Bielajew was struck that her high school students in the small town of Chisenga had a shaky understanding of math, but universally understood global warming. “The concept of the changing world due to human impact was evident, and they could see it,” Bielajew says.

    Bielajew was looking to work on solutions that could positively impact global problems and feed her love of physics. Nuclear engineering, especially the study of fusion as a carbon-free energy source, checked off both boxes. Bielajew is now a fourth-year doctoral candidate in the Department of Nuclear Science and Engineering (NSE). She researches magnetic confinement fusion in the Plasma Science and Fusion Center (PSFC) with Professor Anne White.

    Researching fusion’s big challenge

    You need to confine plasma effectively in order to generate the extremely high temperatures (100 million degrees Celsius) fusion needs, without melting the walls of the tokamak, the device that hosts these reactions. Magnets can do the job, but “plasmas are weird, they behave strangely and are challenging to understand,” Bielajew says. Small instabilities in plasma can coalesce into fluctuating turbulence that can drive heat and particles out of the machine.

    In high-confinement mode, the edges of the plasma have less tolerance for such unruly behavior. “The turbulence gets damped out and sheared apart at the edge,” Bielajew says. This might seem like a good thing, but high-confinement plasmas have their own challenges. They are so tightly bound that they create edge-localized modes (ELMs): bursts of particles and energy that can severely damage the machine.

    The questions Bielajew is looking to answer: How do we get high confinement without ELMs? What roles do turbulence and transport play in plasmas? “We do not fully understand turbulence, even though we have studied it for a long time,” Bielajew says. “It is a big and important problem to solve for fusion to be a reality. I like that challenge.”

    A love of science

    Confronting such challenges head-on has been part of Bielajew’s toolkit since she was a child growing up in Ann Arbor, Michigan. Her father, Alex Bielajew, is a professor of nuclear engineering at the University of Michigan, and Bielajew’s mother also pursued graduate studies.

    Bielajew’s parents encouraged her to follow her own path and she found it led to her father’s chosen profession: nuclear engineering. Once she decided to pursue research in fusion, MIT stood out as a school she could set her sights on. “I knew that MIT had an extensive program in fusion and a lot of faculty in the field,” Bielajew says. The mechanics of the application were challenging: Chisenga had limited internet access, so Bielajew had to ride on the back of a pickup truck to meet a friend in a city a few hours away and use his phone as a hotspot to send the documents.

    A similar tenacity has surfaced in Bielajew’s approach to research during the Covid-19 pandemic. Working from a blueprint, Bielajew built a correlation electron cyclotron emission (CECE) diagnostic, which measures turbulent electron temperature fluctuations. Through a collaboration, Bielajew conducts her plasma research at the ASDEX Upgrade tokamak in Germany. Ordinarily, she would ship the diagnostic to Germany, follow it there to install it, and conduct the research in person. The pandemic threw a wrench in those plans, so Bielajew shipped the diagnostic and relied on team members to install it. She Zooms into the control room and trusts others to run the plasma experiments.

    DEI advocate

    Bielajew is very hands-on with another endeavor: improving diversity, equity, and inclusion (DEI) in her own backyard. Having grown up with parental encouragement and in an environment that never doubted her place as a woman in engineering, Bielajew realizes not everyone has the same opportunities. “I wish that the world was in a place where all I had to do was care about my research, but it’s not,” Bielajew says. While science can solve many problems, more fundamental ones about equity need humans to act in specific ways, she points out. “I want to see more women represented, more people of color. Everyone needs a voice in building a better world,” Bielajew says.

    To get there, Bielajew co-launched NSE’s Graduate Application Assistance Program, which connects underrepresented student applicants with NSE mentors. She has been the DEI officer with NSE’s student group, ANS, and is very involved in the department’s DEI committee.

    As for future research, Bielajew hopes to concentrate on the experiments that make her question existing paradigms about plasmas under high confinement. Bielajew has registered more head-scratching “hmm” moments than “a-ha” ones. Measurements from her experiments drive the need for more intensive study.

    Bielajew’s dogs, Dobby and Winky, who came home with her from Malawi, keep her company through it all.

  • Design’s new frontier

    In the 1960s, the advent of computer-aided design (CAD) sparked a revolution in design. For his PhD thesis in 1963, MIT Professor Ivan Sutherland developed Sketchpad, a game-changing software program that enabled users to draw, move, and resize shapes on a computer. Over the course of the next few decades, CAD software reshaped how everything from consumer products to buildings and airplanes was designed.

    “CAD was part of the first wave in computing in design. The ability of researchers and practitioners to represent and model designs using computers was a major breakthrough and still is one of the biggest outcomes of design research, in my opinion,” says Maria Yang, Gail E. Kendall Professor and director of MIT’s Ideation Lab.

    Innovations in 3D printing during the 1980s and 1990s expanded CAD’s capabilities beyond traditional injection molding and casting methods, providing designers even more flexibility. Designers could sketch, ideate, and develop prototypes or models faster and more efficiently. Meanwhile, with the push of a button, software like that developed by Professor Emeritus David Gossard of MIT’s CAD Lab could solve equations simultaneously to produce a new geometry on the fly.

    In recent years, mechanical engineers have expanded the computing tools they use to ideate, design, and prototype. More sophisticated algorithms and the explosion of machine learning and artificial intelligence technologies have sparked a second revolution in design engineering.

    Researchers and faculty at MIT’s Department of Mechanical Engineering are utilizing these technologies to re-imagine how the products, systems, and infrastructures we use are designed. These researchers are at the forefront of the new frontier in design.

    Computational design

    Faez Ahmed wants to reinvent the wheel, or at least the bicycle wheel. He and his team at MIT’s Design Computation & Digital Engineering Lab (DeCoDE) use an artificial intelligence-driven design method that can generate entirely novel and improved designs for a range of products — including the traditional bicycle. They create advanced computational methods to blend human-driven design with simulation-based design.

    “The focus of our DeCoDE lab is computational design. We are looking at how we can create machine learning and AI algorithms to help us discover new designs that are optimized based on specific performance parameters,” says Ahmed, an assistant professor of mechanical engineering at MIT.

    For their work using AI-driven design for bicycles, Ahmed and his collaborator Professor Daniel Frey wanted to make it easier to design customizable bicycles, and by extension, encourage more people to use bicycles over transportation methods that emit greenhouse gases.

    To start, the group gathered a dataset of 4,500 bicycle designs. Using this massive dataset, they tested the limits of what machine learning could do. First, they developed algorithms to group bicycles that looked similar together and explore the design space. They then created machine learning models that could successfully predict what components are key in identifying a bicycle style, such as a road bike versus a mountain bike.

    Once the algorithms were good enough at identifying bicycle designs and parts, the team proposed novel machine learning tools that could use this data to create a unique and creative design for a bicycle based on certain performance parameters and rider dimensions.

    Ahmed used a generative adversarial network — or GAN — as the basis of this model. GAN models utilize neural networks that can create new designs based on vast amounts of data. However, using GAN models alone would result in homogeneous designs that lack novelty and can’t be assessed in terms of performance. To address these issues in design problems, Ahmed developed a new method he calls “PaDGAN,” a performance-augmented diverse GAN.

    “When we apply this type of model, what we see is that we can get large improvements in the diversity, quality, as well as novelty of the designs,” Ahmed explains.
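
    As a rough illustration of the idea, the sketch below augments a standard GAN generator loss with a determinantal-point-process-style term: the log-determinant of a quality-weighted similarity kernel grows when a generated batch is both diverse and high-performing. The architectures, quality function, dataset, and weights are all placeholders, not the published PaDGAN implementation.

      import torch
      import torch.nn as nn

      torch.manual_seed(0)
      latent_dim, design_dim = 8, 16

      G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, design_dim))
      D = nn.Sequential(nn.Linear(design_dim, 64), nn.ReLU(), nn.Linear(64, 1))

      def quality(x):
          # Placeholder performance score; in practice a simulator or a
          # surrogate model would evaluate each candidate design.
          return torch.sigmoid(x.sum(dim=1, keepdim=True))

      def dpp_loss(x):
          # Quality-weighted kernel L_ij = q_i * exp(-|x_i - x_j|^2 / 2) * q_j;
          # maximizing log det(L) rewards batches that are diverse AND high-quality.
          q = quality(x)
          S = torch.exp(-0.5 * torch.cdist(x, x) ** 2)
          L = q * S * q.t()
          return -torch.logdet(L + 1e-4 * torch.eye(x.shape[0]))

      bce = nn.BCEWithLogitsLoss()
      g_opt = torch.optim.Adam(G.parameters(), lr=1e-4)
      d_opt = torch.optim.Adam(D.parameters(), lr=1e-4)
      real = torch.randn(512, design_dim)   # stand-in for the bicycle dataset

      for step in range(1000):
          # Discriminator: separate real designs from generated ones
          fake = G(torch.randn(64, latent_dim)).detach()
          batch = real[torch.randint(0, 512, (64,))]
          d_loss = bce(D(batch), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
          d_opt.zero_grad(); d_loss.backward(); d_opt.step()

          # Generator: fool the discriminator, plus the diversity/quality term
          fake = G(torch.randn(64, latent_dim))
          g_loss = bce(D(fake), torch.ones(64, 1)) + 0.1 * dpp_loss(fake)
          g_opt.zero_grad(); g_loss.backward(); g_opt.step()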

    Using this approach, Ahmed’s team developed an open-source computational design tool for bicycles freely available on their lab website. They hope to further develop a set of generalizable tools that can be used across industries and products.

    Longer term, Ahmed has his sights set on loftier goals. He hopes the computational design tools he develops could lead to “design democratization,” putting more power in the hands of the end user.

    “With these algorithms, you can have more individualization where the algorithm assists a customer in understanding their needs and helps them create a product that satisfies their exact requirements,” he adds.

    Using algorithms to democratize the design process is a goal shared by Stefanie Mueller, an associate professor in electrical engineering and computer science and mechanical engineering.

    Personal fabrication

    Platforms like Instagram give users the freedom to instantly edit their photographs or videos using filters. In one click, users can alter the palette, tone, and brightness of their content by applying filters that range from bold colors to sepia-toned or black-and-white. Mueller, X-Window Consortium Career Development Professor, wants to bring this concept of the Instagram filter to the physical world.

    “We want to explore how digital capabilities can be applied to tangible objects. Our goal is to bring reprogrammable appearance to the physical world,” explains Mueller, director of the HCI Engineering Group based out of MIT’s Computer Science and Artificial Intelligence Laboratory.

    Mueller’s team utilizes a combination of smart materials, optics, and computation to advance personal fabrication technologies that would allow end users to alter the design and appearance of the products they own. They tested this concept in a project they dubbed “Photo-Chromeleon.”

    First, a mix of photochromic cyan, magenta, and yellow dyes is airbrushed onto an object — in this instance, a 3D sculpture of a chameleon. Using software they developed, the team sketches the exact color pattern they want to achieve on the object itself. An ultraviolet light shines on the object to activate the dyes.

    To actually create the physical pattern on the object, Mueller has developed an optimization algorithm to use alongside a normal office projector outfitted with red, green, and blue LED lights. These lights shine on specific pixels on the object for a given period of time to physically change the makeup of the photochromic pigments.

    “This fancy algorithm tells us exactly how long we have to shine the red, green, and blue light on every single pixel of an object to get the exact pattern we’ve programmed in our software,” says Mueller.
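
    One way to picture that optimization: if each dye is assumed to bleach exponentially under each LED color at a known rate, the per-pixel exposure times reduce to a small nonnegative least-squares problem. The sketch below uses that simplification with invented rate constants; the real system works from measured material responses.

      import numpy as np
      from scipy.optimize import nnls

      # Assumed bleaching-rate matrix K[dye, led]: how quickly red, green, and
      # blue light erase each photochromic dye. Illustrative numbers only.
      K = np.array([[0.90, 0.10, 0.05],   # cyan dye
                    [0.10, 0.80, 0.10],   # magenta dye
                    [0.05, 0.10, 0.85]])  # yellow dye

      def exposure_times(target_cmy):
          """LED exposure times taking dye saturations from fully developed
          (1.0 after the UV flash) down to the target CMY values."""
          # Exponential bleaching s_d = exp(-sum_c K[d, c] * t_c) becomes a
          # linear system in t after taking logs; solve it with t >= 0.
          b = -np.log(np.clip(target_cmy, 1e-3, 1.0))
          t, _ = nnls(K, b)
          return t   # seconds of (red, green, blue) light for this pixel

      target = np.random.rand(4, 4, 3)    # toy 4x4 target image in CMY
      times = np.array([[exposure_times(px) for px in row] for row in target])
      print(times.shape)                  # (4, 4, 3): per-pixel exposure times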

    Giving this freedom to the end user enables limitless possibilities. Mueller’s team has applied this technology to iPhone cases, shoes, and even cars. In the case of shoes, Mueller envisions a shoebox embedded with UV and LED light projectors. Users could put their shoes in the box overnight and the next day have a pair of shoes in a completely new pattern.

    Mueller wants to expand her personal fabrication methods to the clothes we wear. Rather than utilize the light projection technique developed in the Photo-Chromeleon project, her team is exploring the possibility of weaving LEDs directly into clothing fibers, allowing people to change their shirt’s appearance as they wear it. These personal fabrication technologies could completely alter consumer habits.

    “It’s very interesting for me to think about how these computational techniques will change product design on a high level,” adds Mueller. “In the future, a consumer could buy a blank iPhone case and update the design on a weekly or daily basis.”

    Computational fluid dynamics and participatory design

    Another team of mechanical engineers, including Sili Deng, the Brit (1961) & Alex (1949) d’Arbeloff Career Development Professor, is developing a different kind of design tool that could have a large impact on individuals in low- and middle-income countries across the world.

    As Deng walked down the hallway of Building 1 on MIT’s campus, a monitor playing a video caught her eye. The video featured work done by mechanical engineers and MIT D-Lab on developing cleaner burning briquettes for cookstoves in Uganda. Deng immediately knew she wanted to get involved.

    “As a combustion scientist, I’ve always wanted to work on such a tangible real-world problem, but the field of combustion tends to focus more heavily on the academic side of things,” explains Deng.

    After reaching out to colleagues in MIT D-Lab, Deng joined a collaborative effort to develop a new cookstove design tool for the 3 billion people across the world who burn solid fuels to cook and heat their homes. These stoves often emit soot and carbon monoxide, contributing not only to millions of deaths each year but also to the world’s greenhouse gas emissions.

    The team is taking a three-pronged approach to developing this solution, using a combination of participatory design, physical modeling, and experimental validation to create a tool that will lead to the production of high-performing, low-cost energy products.

    Deng and her team in the Deng Energy and Nanotechnology Group use physics-based modeling for the combustion and emission process in cookstoves.

    “My team is focused on computational fluid dynamics. We use computational and numerical studies to understand the flow field where the fuel is burned and releases heat,” says Deng.

    These flow mechanics are crucial to understanding how to minimize heat loss and make cookstoves more efficient, as well as learning how dangerous pollutants are formed and released in the process.

    Using computational methods, Deng’s team performs three-dimensional simulations of the complex chemistry and transport coupling at play in the combustion and emission processes. They then use these simulations to build a combustion model for how fuel is burned and a pollution model that predicts carbon monoxide emissions.
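
    The flavor of the underlying chemistry calculation can be seen in a zero-dimensional sketch, a far cry from the team’s three-dimensional coupled simulations, using the open-source Cantera library: a fuel-rich methane/air mixture stands in for solid cooking fuel, and the integration tracks how much carbon monoxide survives.

      import cantera as ct

      # GRI-Mech 3.0 is a standard natural-gas mechanism; it stands in here
      # for the much larger chemistry models needed for solid biomass fuels.
      gas = ct.Solution('gri30.yaml')
      # Fuel-rich mixture (equivalence ratio ~1.25), preheated so it ignites
      gas.TPX = 1400.0, ct.one_atm, 'CH4:1.0, O2:1.6, N2:6.02'

      reactor = ct.IdealGasConstPressureReactor(gas)
      sim = ct.ReactorNet([reactor])

      while sim.time < 0.05:              # 50 ms of residence time
          sim.step()

      co = reactor.thermo.X[gas.species_index('CO')]
      print(f"final T = {reactor.thermo.T:.0f} K, CO mole fraction = {co:.4f}")

    Rich, poorly mixed flames leave carbon monoxide unoxidized; a design tool can sweep many such cases to find stove geometries and air flows that burn it out.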

    Deng’s models are used by a group led by Daniel Sweeney in MIT D-Lab to experimentally validate them in stove prototypes. Finally, Professor Maria Yang uses participatory design methods to integrate user feedback, ensuring the design tool can actually be used by people across the world.

    The end goal for this collaborative team is to provide local manufacturers not only with a prototype they could produce themselves, but also with a tool that can tweak the design based on local needs and available materials.

    Deng sees wide-ranging applications for the computational fluid dynamics her team is developing.

    “We see an opportunity to use physics-based modeling, augmented with a machine learning approach, to come up with chemical models for practical fuels that help us better understand combustion. Therefore, we can design new methods to minimize carbon emissions,” she adds.

    While Deng is utilizing simulations and machine learning at the molecular level to improve designs, others are taking a more macro approach.

    Designing intelligent systems

    When it comes to intelligent design, Navid Azizan thinks big. He hopes to help create future intelligent systems that are capable of making decisions autonomously by using the enormous amounts of data emerging from the physical world. From smart robots and autonomous vehicles to smart power grids and smart cities, Azizan focuses on the analysis, design, and control of intelligent systems.

    Achieving such massive feats takes a truly interdisciplinary approach that draws upon various fields such as machine learning, dynamical systems, control, optimization, statistics, and network science, among others.

    “Developing intelligent systems is a multifaceted problem, and it really requires a confluence of disciplines,” says Azizan, assistant professor of mechanical engineering with a dual appointment in MIT’s Institute for Data, Systems, and Society (IDSS). “To create such systems, we need to go beyond standard approaches to machine learning, such as those commonly used in computer vision, and devise algorithms that can enable safe, efficient, real-time decision-making for physical systems.”

    For robot control to work in the complex, dynamic environments that arise in the real world, real-time adaptation is key. If, for example, an autonomous vehicle is going to drive in icy conditions, or a drone is operating in windy conditions, it needs to be able to adapt to its new environment quickly.

    To address this challenge, Azizan and his collaborators at MIT and Stanford University have developed a new algorithm that combines adaptive control, a powerful methodology from control theory, with meta learning, a new machine learning paradigm.

    “This ‘control-oriented’ learning approach outperforms the existing ‘regression-oriented’ methods, which are mostly focused on just fitting the data, by a wide margin,” says Azizan.
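
    A stripped-down version of that idea: meta-learning supplies a feature basis offline, and online only the last-layer coefficients are adapted, with an adaptive-control law, while the system tracks a reference despite an unknown disturbance. Everything below (the random feature basis standing in for a learned network, the gains, the “wind” disturbance) is invented for illustration.

      import numpy as np

      rng = np.random.default_rng(0)
      W = rng.normal(size=(16, 2))
      def phi(x, v):
          # Stand-in for meta-learned features of the state
          return np.tanh(W @ np.array([x, v]))

      def wind(x, v):
          return 2.0 * np.sin(1.5 * x) - 0.5 * v   # unknown to the controller

      a = np.zeros(16)          # last-layer coefficients, adapted online
      gain = 5.0                # adaptation gain
      dt, x, v = 1e-3, 0.0, 0.0
      errs = []

      for k in range(20000):
          t = k * dt
          e_p, e_v = x - np.sin(t), v - np.cos(t)  # track x_ref = sin(t)
          s = e_v + 2.0 * e_p                      # composite tracking error
          f_hat = phi(x, v) @ a                    # current disturbance estimate
          u = -np.sin(t) - 2.0 * e_v - 4.0 * s - f_hat   # feedforward + PD - estimate
          a += dt * gain * phi(x, v) * s           # adaptation law: da/dt = gain*phi*s
          acc = u + wind(x, v)                     # true dynamics: x'' = u + wind
          x, v = x + dt * v, v + dt * acc
          errs.append(abs(e_p))

      print(f"mean |tracking error| over the last 2 s: {np.mean(errs[-2000:]):.4f}")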

    Another critical aspect of deploying machine learning algorithms in physical systems that Azizan and his team hope to address is safety. Deep neural networks are a crucial part of autonomous systems. They are used for interpreting complex visual inputs and making data-driven predictions of future behavior in real time. However, Azizan urges caution.

    “These deep neural networks are only as good as their training data, and their predictions can often be untrustworthy in scenarios not covered by their training data,” he says. Making decisions based on such untrustworthy predictions could lead to fatal accidents in autonomous vehicles or other safety-critical systems.

    To avoid these potentially catastrophic events, Azizan proposes that it is imperative to equip neural networks with a measure of their uncertainty. When the uncertainty is high, they can then be switched to a “safe policy.”

    In pursuit of this goal, Azizan and his collaborators have developed a new algorithm known as SCOD, for Sketching Curvature for Out-of-Distribution Detection. The framework can be embedded within any deep neural network to equip it with a measure of its uncertainty.

    “This algorithm is model-agnostic and can be applied to neural networks used in various kinds of autonomous systems, whether it’s drones, vehicles, or robots,” says Azizan.
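
    The intuition can be caricatured in a few lines: weight-space directions that the training data never constrained are exactly where predictions should not be trusted. The toy below scores an input by how far its output gradient falls outside a low-rank subspace sketched from training gradients. It is a loose, simplified illustration of the idea, not the published SCOD algorithm.

      import torch
      import torch.nn as nn

      torch.manual_seed(0)
      net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
      params = list(net.parameters())

      def flat_grad(x):
          # Gradient of the scalar output with respect to all weights, flattened
          y = net(x.view(1, 1)).squeeze()
          return torch.cat([g.reshape(-1) for g in torch.autograd.grad(y, params)])

      # Suppose the network was trained on inputs in [-1, 1]: sketch the top
      # directions of its training-set Jacobians (a stand-in for SCOD's
      # low-rank sketch of the Fisher/Gauss-Newton curvature matrix).
      x_train = torch.linspace(-1.0, 1.0, 100)
      G = torch.stack([flat_grad(x) for x in x_train])
      U, S, V = torch.svd_lowrank(G, q=10)     # V: top weight-space directions

      def uncertainty(x):
          # Large score = the input excites weight directions the training
          # data never constrained, so the prediction is untrustworthy.
          g = flat_grad(x)
          return (g - V @ (V.t() @ g)).norm().item()

      print("score at x = 0.5 (in-distribution):    ", uncertainty(torch.tensor(0.5)))
      print("score at x = 4.0 (out-of-distribution):", uncertainty(torch.tensor(4.0)))
      # A runtime monitor would switch to a safe fallback policy whenever
      # this score crosses a calibrated threshold.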

    Azizan hopes to continue working on algorithms for even larger-scale systems. He and his team are designing efficient algorithms to better control supply and demand in smart energy grids. According to Azizan, even if we create the most efficient solar panels and batteries, we can never achieve a sustainable grid powered by renewable resources without the right control mechanisms.

    Mechanical engineers like Ahmed, Mueller, Deng, and Azizan serve as the key to realizing the next revolution of computing in design.

    “MechE is in a unique position at the intersection of the computational and physical worlds,” Azizan says. “Mechanical engineers build a bridge between theoretical, algorithmic tools and real, physical world applications.”

    Sophisticated computational tools, coupled with the ground truth mechanical engineers have in the physical world, could unlock limitless possibilities for design engineering, well beyond what could have been imagined in those early days of CAD.

  • Radio-frequency wave scattering improves fusion simulations

    In the quest for fusion energy, understanding how radio-frequency (RF) waves travel (or “propagate”) in the turbulent interior of a fusion furnace is crucial to maintaining an efficient, continuously operating power plant. Transmitted by an antenna in the doughnut-shaped vacuum chamber common to magnetic confinement fusion devices called tokamaks, RF waves heat the plasma fuel and drive its current around the toroidal interior. The efficiency of this process can be affected by how the wave’s trajectory is altered (or “scattered”) by conditions within the chamber.

    Researchers have tried to study these RF processes using computer simulations to match the experimental conditions. A good match would validate the computer model, and raise confidence in using it to explore new physics and design future RF antennas that perform efficiently. While the simulations can accurately calculate how much total current is driven by RF waves, they do a poor job at predicting where exactly in the plasma this current is produced.

    Now, in a paper published in the Journal of Plasma Physics, MIT researchers suggest that the models for RF wave propagation used for these simulations have not properly taken into account the way these waves are scattered as they encounter dense, turbulent filaments present in the edge of the plasma known as the “scrape-off layer” (SOL).

    Bodhi Biswas, a graduate student at the Plasma Science and Fusion Center (PSFC) working under the direction of Senior Research Scientist Paul Bonoli, School of Engineering Distinguished Professor of Engineering Anne White, and Principal Research Scientist Abhay Ram, is the paper’s lead author. Ram compares the scattering that occurs in this situation to a wave of water hitting a lily pad: “The wave crashing with the lily pad will excite a secondary, scattered wave that makes circular ripples traveling outward from the plant. The incoming wave has transferred energy to the scattered wave. Some of this energy is reflected backwards (in relation to the incoming wave), some travels forwards, and some is deflected to the side. The specifics all depend on the particular attributes of the wave, the water, and the lily pad. In our case, the lily pad is the plasma filament.”

    Until now, researchers have not properly taken these filaments and the scattering they provoke into consideration when modeling the turbulence inside a tokamak, leading to an underestimation of wave scattering. Using data from PSFC tokamak Alcator C-Mod, Biswas shows that using the new method of modeling RF-wave scattering from SOL turbulence provides results considerably different from older models, and a much better match to experiments. Notably, the “lower-hybrid” wave spectrum, crucial to driving plasma current in a steady-state tokamak, appears to scatter asymmetrically, an important effect not accounted for in previous models.

    Biswas’s advisor Paul Bonoli is well acquainted with traditional “ray-tracing” models, which evaluate a wave trajectory by dividing it into a series of rays. He has used this model, with its limitations, for decades in his own research to understand plasma behavior. Bonoli says he is pleased that “the research results in Bodhi’s doctoral thesis have refocused attention on the profound effect that edge turbulence can have on the propagation and absorption of radio-frequency power.”

    Although ray-tracing treatments of scattering do not fully capture all the wave physics, a “full-wave” model that does would be prohibitively expensive. To solve the problem economically, Biswas splits his analysis into two parts: (1) using ray tracing to model the trajectory of the wave in the tokamak assuming no turbulence, while (2) modifying this ray-trajectory with the new scattering model that accounts for the turbulent plasma filaments.

    “This scattering model is a full-wave model, but computed over a small region and in a simplified geometry so that it is very quick to do,” says Biswas. “The result is a ray-tracing model that, for the first time, accounts for full-wave scattering physics.”
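
    Structurally, the hybrid scheme can be caricatured as follows: march an unperturbed ray forward and, whenever it enters a turbulent filament, update its wavenumber with a precomputed scattering kernel. Every ingredient below (the filament statistics, the kernel, the built-in asymmetric shift) is an invented toy standing in for the full-wave calculation, not the paper’s model.

      import numpy as np

      rng = np.random.default_rng(1)
      # Hypothetical SOL filaments: (center position, half-width) along the ray path
      filaments = [(rng.uniform(0, 10), rng.uniform(0.05, 0.2)) for _ in range(20)]

      def scattering_kernel(n_par):
          # Stand-in for the local full-wave result: perturb the parallel
          # refractive index with a slightly asymmetric (net-shift) kick,
          # mimicking the asymmetric broadening seen for lower-hybrid waves.
          return n_par * (1.0 + rng.normal(0.05, 0.15))

      def trace_ray(n0, n_steps=1000, ds=0.01):
          x, n_par, crossed = 0.0, n0, set()
          for _ in range(n_steps):
              x += ds                      # unperturbed ray step (no turbulence)
              for i, (xc, w) in enumerate(filaments):
                  if i not in crossed and abs(x - xc) < w:
                      n_par = scattering_kernel(n_par)   # one kick per filament
                      crossed.add(i)
          return n_par

      finals = [trace_ray(2.0) for _ in range(500)]
      print("mean shift:", np.mean(finals) - 2.0, " spread:", np.std(finals))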

    Biswas notes that this model bridges the gap between simple scattering models that fail to match experiment and full-wave models that are prohibitively expensive, providing reasonable accuracy at low cost.

    “Our results suggest scattering is an important effect, and that it must be taken into account when designing future RF antennas. The low cost of our scattering model makes this very doable.”

    “This is exciting progress,” says Syun’ichi Shiraiwa, staff research physicist at the Princeton Plasma Physics Laboratory. “I believe that Bodhi’s work provides a clear path to the end of a long tunnel we have been in. His work not only demonstrates that the wave scattering, once accurately accounted for, can explain the experimental results, but also answers a puzzling question: why previous scattering models were incomplete, and their results unsatisfying.”

    Work is now underway to apply this model to more plasmas from Alcator C-Mod and other tokamaks. Biswas believes that this new model will be particularly applicable to high-density tokamak plasmas, for which the standard ray-tracing model has been noticeably inaccurate. He is also excited that the model could be validated by DIII-D National Fusion Facility, a fusion experiment on which the PSFC collaborates.

    “The DIII-D tokamak will soon be capable of launching lower-hybrid waves and measuring their electric fields in the scrape-off layer. These measurements could provide direct evidence of the asymmetric scattering effect predicted by our model.”

  • Study: Global cancer risk from burning organic matter comes from unregulated chemicals

    Whenever organic matter is burned, such as in a wildfire, a power plant, a car’s exhaust, or in daily cooking, the combustion releases polycyclic aromatic hydrocarbons (PAHs) — a class of pollutants that is known to cause lung cancer.

    There are more than 100 known types of PAH compounds emitted daily into the atmosphere. Regulators, however, have historically relied on measurements of a single compound, benzo(a)pyrene, to gauge a community’s risk of developing cancer from PAH exposure. Now MIT scientists have found that benzo(a)pyrene may be a poor indicator of this type of cancer risk.

    In a modeling study appearing today in the journal GeoHealth, the team reports that benzo(a)pyrene plays a small part — about 11 percent — in the global risk of developing PAH-associated cancer. Instead, 89 percent of that cancer risk comes from other PAH compounds, many of which are not directly regulated.

    Interestingly, about 17 percent of PAH-associated cancer risk comes from “degradation products” — chemicals that are formed when emitted PAHs react in the atmosphere. Many of these degradation products can in fact be more toxic than the emitted PAH from which they formed.

    The team hopes the results will encourage scientists and regulators to look beyond benzo(a)pyrene, to consider a broader class of PAHs when assessing a community’s cancer risk.

    “Most of the regulatory science and standards for PAHs are based on benzo(a)pyrene levels. But that is a big blind spot that could lead you down a very wrong path in terms of assessing whether cancer risk is improving or not, and whether it’s relatively worse in one place than another,” says study author Noelle Selin, a professor in MIT’s Institute for Data, Systems and Society, and the Department of Earth, Atmospheric and Planetary Sciences.

    Selin’s MIT co-authors include Jesse Kroll, Amy Hrdina, Ishwar Kohale, Forest White, and Bevin Engelward, and Jamie Kelly (who is now at University College London). Peter Ivatt and Mathew Evans at the University of York are also co-authors.

    Chemical pixels

    Benzo(a)pyrene has historically been the poster chemical for PAH exposure. The compound’s indicator status is largely based on early toxicology studies. But recent research suggests the chemical may not be the PAH representative that regulators have long relied upon.   

    “There has been a bit of evidence suggesting benzo(a)pyrene may not be very important, but this was from just a few field studies,” says Kelly, a former postdoc in Selin’s group and the study’s lead author.

    Kelly and his colleagues instead took a systematic approach to evaluate benzo(a)pyrene’s suitability as a PAH indicator. The team began by using GEOS-Chem, a global, three-dimensional chemical transport model that breaks the world into individual grid boxes and simulates within each box the reactions and concentrations of chemicals in the atmosphere.

    They extended this model to include chemical descriptions of how various PAH compounds, including benzo(a)pyrene, would react in the atmosphere. The team then plugged in recent data from emissions inventories and meteorological observations, and ran the model forward to simulate the concentrations of various PAH chemicals around the world over time.

    Risky reactions

    In their simulations, the researchers started with 16 relatively well-studied PAH chemicals, including benzo(a)pyrene, and traced the concentrations of these chemicals, plus the concentration of their degradation products over two generations, or chemical transformations. In total, the team evaluated 48 PAH species.

    They then compared these simulated concentrations with actual concentrations of the same chemicals recorded by monitoring stations around the world. The agreement was close enough to show that the model’s concentration predictions were realistic.

    Then, within each grid box of the model, the researchers related the concentration of each PAH chemical to its associated cancer risk; to do this, they developed a new method, based on previous studies in the literature, that avoids double-counting risk from the different chemicals. Finally, they overlaid population density maps to predict the number of cancer cases globally, based on the concentration and toxicity of a specific PAH chemical in each location.

    Dividing the cancer cases by population produced the cancer risk associated with that chemical. In this way, the team calculated the cancer risk for each of the 48 compounds, then determined each chemical’s individual contribution to the total risk.
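
    In its simplest toxic-equivalency form, this pipeline multiplies each compound’s concentration by its potency relative to benzo(a)pyrene, sums the products, and scales by a unit risk factor (the study’s actual method is more careful, particularly about double-counting). The sketch below uses placeholder concentrations and potencies; only the WHO unit risk value is a real reference number.

      # Illustrative PAH risk aggregation; concentrations (ng/m^3) and
      # relative potencies below are placeholders, not the study's values.
      unit_risk_bap = 8.7e-5    # WHO lifetime cancer risk per ng/m^3 of BaP

      species = {               # name: (concentration, potency relative to BaP)
          "benzo(a)pyrene":       (0.50, 1.0),
          "dibenzo(a,l)pyrene":   (0.02, 30.0),   # scarce but very potent
          "benzo(b)fluoranthene": (0.80, 0.1),
          "fluoranthene":         (5.00, 0.001),
      }

      risk_per_person = unit_risk_bap * sum(c * p for c, p in species.values())
      population = 1_000_000
      bap_share = 0.50 * 1.0 * unit_risk_bap / risk_per_person

      print(f"lifetime risk per person: {risk_per_person:.2e}")
      print(f"expected cases per million people: {risk_per_person * population:.1f}")
      print(f"share of risk from benzo(a)pyrene alone: {bap_share:.0%}")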

    This analysis revealed that benzo(a)pyrene had a surprisingly small contribution, of about 11 percent, to the overall risk of developing cancer from PAH exposure globally. Eighty-nine percent of cancer risk came from other chemicals. And 17 percent of this risk arose from degradation products.

    “We see places where you can find concentrations of benzo(a)pyrene are lower, but the risk is higher because of these degradation products,” Selin says. “These products can be orders of magnitude more toxic, so the fact that they’re at tiny concentrations doesn’t mean you can write them off.”

    When the researchers compared calculated PAH-associated cancer risks around the world, they found significant differences depending on whether that risk calculation was based solely on concentrations of benzo(a)pyrene or on a region’s broader mix of PAH compounds.

    “If you use the old method, you would find the lifetime cancer risk is 3.5 times higher in Hong Kong versus southern India, but taking into account the differences in PAH mixtures, you get a difference of 12 times,” Kelly says. “So, there’s a big difference in the relative cancer risk between the two places. And we think it’s important to expand the group of compounds that regulators are thinking about, beyond just a single chemical.”

    The team’s study “provides an excellent contribution to better understanding these ubiquitous pollutants,” says Elisabeth Galarneau, an air quality expert and PhD research scientist in Canada’s Department of the Environment. “It will be interesting to see how these results compare to work being done elsewhere … to pin down which (compounds) need to be tracked and considered for the protection of human and environmental health.”

    This research was conducted in MIT’s Superfund Research Center and is supported in part by the National Institute of Environmental Health Sciences Superfund Basic Research Program, and the National Institutes of Health.

  • Climate and sustainability classes expand at MIT

    In fall 2019, a new class, 6.S898/12.S992 (Climate Change Seminar), arrived at MIT. It was, at the time, the only course in the Department of Electrical Engineering and Computer Science (EECS) to tackle the science of climate change. The class covered climate models and simulations alongside atmospheric science, policy, and economics.

    Ron Rivest, MIT Institute Professor of Computer Science, was one of the class’s three instructors, with Alan Edelman of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and John Fernández of the Department of Urban Studies and Planning. “Computer scientists have much to contribute to climate science,” Rivest says. “In particular, the modeling and simulation of climate can benefit from advances in computer science.”

    Rivest is one of many MIT faculty members who have been working in recent years to bring topics in climate, sustainability, and the environment to students in a growing variety of fields. And students have said they want this trend to continue.

    “Sustainability is something that touches all disciplines,” says Megan Xu, a rising senior in biological engineering and advisory chair of the Undergraduate Association Sustainability Committee. “As students who have grown up knowing that climate change is real and witnessed climate disaster after disaster, we know this is a huge problem that needs to be addressed by our generation.”

    Expanding the course catalog

    As education program manager at the MIT Environmental Solutions Initiative, Sarah Meyers has repeatedly had a hand in launching new sustainability classes. She has steered grant money to faculty, brought together instructors, and helped design syllabi — all in the service of giving MIT students the same world-class education in climate and sustainability that they get in science and engineering.

    Her work has given Meyers a bird’s-eye view of MIT’s course offerings in this area. By her count, there are now over 120 undergraduate classes, across 23 academic departments, that teach climate, environment, and sustainability principles.

    “Educating the next generation is the most important way that MIT can have an impact on the world’s environmental challenges,” she says. “MIT students are going to be leaders in their fields, whatever they may be. If they really understand sustainable design practices, if they can balance the needs of all stakeholders to make ethical decisions, then that actually changes the way our world operates and can move humanity towards a more sustainable future.”

    Some sustainability classes are established institutions at MIT. Success stories include 2.00A (Fundamentals of Engineering Design: Explore Space, Sea and Earth), a hands-on engineering class popular with first-year students; and 21W.775 (Writing About Nature and Environmental Issues), which has helped undergraduates fulfill their HASS-H (humanities distribution subject) and CI-H (Communication Intensive subject in the Humanities, Arts, and Social Sciences) graduation requirements for 15 years.

    Expanding this list of classes is an institutional priority. In the recently released Climate Action Plan for the Decade, MIT pledged to recruit at least 20 additional faculty members who will teach climate-related classes.

    “I think it’s easy to find classes if you’re looking for sustainability classes to take,” says Naomi Lutz, a senior in mechanical engineering who helped advise the MIT administration on education measures in the Climate Action Plan. “I usually scroll through the titles of the classes in courses 1, 2, 11, and 12 to see if any are of interest. I also have used the Environment & Sustainability Minor class list to look for sustainability-related classes to take.

    “The coming years are critical for the future of our planet, so it’s important that we all learn about sustainability and think about how to address it,” she adds.

    Working with students’ schedules

    Still, despite all this activity, climate and sustainability are not yet mainstream parts of an MIT education. Last year, a survey of over 800 MIT undergraduates, taken by the Undergraduate Association Sustainability Committee, found that only one in four had ever taken a class related to sustainability. But it doesn’t seem to be from lack of interest in the topic. More than half of those surveyed said that sustainability is a factor in their career planning, and almost 80 percent try to practice sustainability in their daily lives.

    “I’ve often had conversations with students who were surprised to learn there are so many classes available,” says Meyers. “We do need to do a better job communicating about them, and making it as easy as possible to enroll.”

    A recurring challenge is helping students fit sustainability into their plans for graduation, which are often tightly mapped-out.

    “We each only have four years — around 32 to 40 classes — to absorb all that we can from this amazing place,” says Xu. “Many of these classes are mandated to be GIRs [General Institute Requirements] and major requirements. Many students recognize that sustainability is important, but might not have the time to devote an entire class to the topic if it would not count toward their requirements.”

    This was a central focus for the students who were involved in forming education recommendations for the Climate Action Plan. “We propose that more sustainability-related courses or tracks are offered in the most common majors, especially in Course 6 [EECS],” says Lutz. “If students can fulfill major requirements while taking courses that address environmental problems, we believe more students will pursue research and careers related to sustainability.”

    She also recommends that students look into the dozens of climate and sustainability classes that fulfill GIRs. “It’s really easy to take sustainability-related courses that fulfill HASS [Humanities, Arts, and Social Sciences] requirements,” she says. For example, students can meet their HASS-S (social sciences distribution subject) requirement by taking 21H.185 (Environment and History), or fulfill their HASS-A requirement with CMS.374 (Transmedia Art, Extraction and Environmental Justice).

    Classes with impact

    For those students who do seek out sustainability classes early in their MIT careers, the experience can shape their whole education.

    “My first semester at MIT, I took Environment and History, co-taught by professors Susan Solomon and Harriet Ritvo,” says Xu. “It taught me that there is so much more involved than just science and hard facts to solving problems in sustainability and climate. I learned to look at problems with more of a focus on people, which has informed much of the extracurricular work that I’ve gone on to do at MIT.”

    And the faculty, too, sometimes find that teaching in this area opens new doors for them. Rivest, who taught the climate change seminar in Course 6, is now working to build a simplified climate model with his co-instructor Alan Edelman, their teaching assistant Henri Drake, and Professor John Deutch of the Department of Chemistry, who joined the class as a guest lecturer. “I very much enjoyed meeting new colleagues from all around MIT,” Rivest says. “Teaching a class like this fosters connections between computer scientists and climate scientists.”

    Which is why Meyers will continue helping to get these classes off the ground. “We know students think climate is a huge issue for their futures. We know faculty agree with them,” she says. “Everybody wants this to be part of an MIT education. The next step is to really reach out to students and departments to fill the classrooms. That’s the start of a virtuous cycle where enrollment drives more sustainability instruction in every part of MIT.”

  • A new approach to preventing human-induced earthquakes

    When humans pump large volumes of fluid into the ground, they can set off potentially damaging earthquakes, depending on the underlying geology. This has been the case in certain oil- and gas-producing regions, where wastewater, often mixed with oil, is disposed of by injecting it back into the ground — a process that has triggered sizable seismic events in recent years.

    Now MIT researchers, working with an interdisciplinary team of scientists from industry and academia, have developed a method to manage such human-induced seismicity, and have demonstrated that the technique successfully reduced the number of earthquakes occurring in an active oil field.

    Their results, appearing today in Nature, could help mitigate earthquakes caused by the oil and gas industry, not just from the injection of wastewater produced with oil, but also from wastewater produced by hydraulic fracturing, or “fracking.” The team’s approach could also help prevent quakes from other human activities, such as the filling of water reservoirs and aquifers, and the sequestration of carbon dioxide in deep geologic formations.

    “Triggered seismicity is a problem that goes way beyond producing oil,” says study lead author Bradford Hager, the Cecil and Ida Green Professor of Earth Sciences in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “This is a huge problem for society that will have to be confronted if we are to safely inject carbon dioxide into the subsurface. We demonstrated the kind of study that will be necessary for doing this.”

    The study’s co-authors include Ruben Juanes, professor of civil and environmental engineering at MIT, and collaborators from the University of California at Riverside, the University of Texas at Austin, Harvard University, and Eni, a multinational oil and gas company based in Italy.

    Safe injections

    Both natural and human-induced earthquakes occur along geologic faults, or fractures between two blocks of rock in the Earth’s crust. In stable periods, the rocks on either side of a fault are held in place by the pressures generated by surrounding rocks. But when a large volume of fluid is suddenly injected at high rates, it can upset a fault’s fluid stress balance. In some cases, this sudden injection can lubricate a fault and cause rocks on either side to slip and trigger an earthquake.

    The most common source of such fluid injections is from the oil and gas industry’s disposal of wastewater that is brought up along with oil. Field operators dispose of this water through injection wells that continuously pump the water back into the ground at high pressures.

    “There’s a lot of water produced with the oil, and that water is injected into the ground, which has caused a large number of quakes,” Hager notes. “So, for a while, oil-producing regions in Oklahoma had more magnitude 3 quakes than California, because of all this wastewater that was being injected.”

    In recent years, a similar problem arose in southern Italy, where injection wells on oil fields operated by Eni triggered microseisms in an area where large naturally occurring earthquakes had previously occurred. The company, looking for ways to address the problem, sought consultation from Hager and Juanes, both leading experts in seismicity and subsurface flows.

    “This was an opportunity for us to get access to high-quality seismic data about the subsurface, and learn how to do these injections safely,” Juanes says.

    Seismic blueprint

    The team made use of detailed information, accumulated by the oil company over years of operation in the Val d’Agri oil field, a region of southern Italy that lies in a tectonically active basin. The data included information about the region’s earthquake record, dating back to the 1600s, as well as the structure of rocks and faults, and the state of the subsurface corresponding to the various injection rates of each well.

    [Video: The change in stress on the geologic faults of the Val d’Agri field from 2001 to 2019, as predicted by a new MIT-derived model. Credit: A. Plesch (Harvard University)]

    [Video: Small earthquakes occurring on the Costa Molina fault within the Val d’Agri field from 2004 to 2016. Each event is shown for two years, fading from an initial bright color to a final dark color. Credit: A. Plesch (Harvard University)]

    The researchers integrated these data into a coupled subsurface flow and geomechanical model, which predicts how the stresses and strains of underground structures evolve as the volume of pore fluid, such as from the injection of water, changes. They connected this model to an earthquake mechanics model in order to translate the changes in underground stress and fluid pressure into a likelihood of triggering earthquakes. They then quantified the rate of earthquakes associated with various rates of water injection, and identified scenarios that were unlikely to trigger large quakes.

    When they ran the models using data from 1993 through 2016, the predictions of seismic activity matched with the earthquake record during this period, validating their approach. They then ran the models forward in time, through the year 2025, to predict the region’s seismic response to three different injection rates: 2,000, 2,500, and 3,000 cubic meters per day. The simulations showed that large earthquakes could be avoided if operators kept injection rates at 2,000 cubic meters per day — a flow rate comparable to a small public fire hydrant.
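
    A toy version of the causal chain, from injection rate to pressure diffusion to fault stress, fits in a few lines; the Theis-type line-source solution and every parameter below are placeholders for the study’s coupled three-dimensional flow and geomechanics models.

      import numpy as np
      from scipy.special import exp1

      def delta_p(q_m3day, r=2000.0, t_day=900.0,
                  k=1e-14, h=100.0, mu=1e-3, phi=0.2, ct=1e-9):
          """Pressure change (Pa) at distance r from a constant-rate injector
          after time t (Theis-type solution; all parameters are placeholders)."""
          q = q_m3day / 86400.0               # injection rate, m^3/s
          t = t_day * 86400.0                 # elapsed time, s
          u = r**2 * phi * mu * ct / (4.0 * k * t)
          return q * mu / (4.0 * np.pi * k * h) * exp1(u)

      friction = 0.6                          # assumed fault friction coefficient
      for q in (2000.0, 2500.0, 3000.0):      # the three simulated rates
          dcfs = friction * delta_p(q)        # pore pressure unclamps the fault
          print(f"{q:.0f} m^3/day -> Coulomb stress change ~ {dcfs/1e6:.2f} MPa")
      # The full study maps such stress changes through an earthquake-rate
      # model and compares predicted seismicity against the field's catalog.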

    Eni field operators implemented the team’s recommended rate at the oil field’s single water injection well over a 30-month period between January 2017 and June 2019. In this time, the team observed only a few tiny seismic events, which coincided with brief periods when operators went above the recommended injection rate.

    “The seismicity in the region has been very low in these two-and-a-half years, with around four quakes of magnitude 0.5, as opposed to the hundreds of quakes, of magnitude up to 3, that were happening between 2006 and 2016,” Hager says.

    The results demonstrate that operators can successfully manage earthquakes by adjusting injection rates, based on the underlying geology. Juanes says the team’s modeling approach may help to prevent earthquakes related to other processes, such as the building of water reservoirs and the sequestration of carbon dioxide — as long as there is detailed information about a region’s subsurface.

    “A lot of effort needs to go into understanding the geologic setting,” says Juanes, who notes that, if carbon sequestration were carried out on depleted oil fields, “such reservoirs could have this type of history, seismic information, and geologic interpretation that you could use to build similar models for carbon sequestration. We show it’s at least possible to manage seismicity in an operational setting. And we offer a blueprint for how to do it.”

    This research was supported, in part, by Eni.