More stories

  • 3 Questions: Anuradha Annaswamy on building smart infrastructures

    Much of Anuradha Annaswamy’s research hinges on uncertainty. How does cloudy weather affect a grid powered by solar energy? How do we ensure that electricity is delivered to the consumer if a grid is powered by wind and the wind does not blow? What’s the best course of action if a bird hits a plane engine on takeoff? How can you predict the behavior of a cyber attacker?

    A senior research scientist in MIT’s Department of Mechanical Engineering, Annaswamy spends most of her research time dealing with decision-making under uncertainty. Designing smart infrastructures that are resilient to uncertainty can lead to safer, more reliable systems, she says.

    Annaswamy serves as the director of MIT’s Active Adaptive Control Laboratory. A world-leading expert in adaptive control theory, she was named president of the Institute of Electrical and Electronics Engineers Control Systems Society for 2020. Her team uses adaptive control and optimization to account for various uncertainties and anomalies in autonomous systems. In particular, they are developing smart infrastructures in the energy and transportation sectors.

    Using a combination of control theory, cognitive science, economic modeling, and cyber-physical systems, Annaswamy and her team have designed intelligent systems that could someday transform the way we travel and consume energy. Their research includes a diverse range of topics such as safer autopilot systems on airplanes, the efficient dispatch of resources in electrical grids, better ride-sharing services, and price-responsive railway systems.

    In a recent interview, Annaswamy spoke about how these smart systems could help support a safer and more sustainable future.

    Q: How is your team using adaptive control to make air travel safer?

    A: We want to develop an advanced autopilot system that can safely recover the airplane in the event of a severe anomaly — such as the wing becoming damaged mid-flight, or a bird flying into the engine. In the airplane, you have a pilot and autopilot to make decisions. We’re asking: How do you combine those two decision-makers?

    The answer we landed on was developing a shared pilot-autopilot control architecture. We collaborated with David Woods, an expert in cognitive engineering at The Ohio State University, to develop an intelligent system that takes the pilot’s behavior into account. For example, all humans have something known as “capacity for maneuver” and “graceful command degradation” that inform how we react in the face of adversity. Using mathematical models of pilot behavior, we proposed a shared control architecture where the pilot and the autopilot work together to make an intelligent decision on how to react in the face of uncertainties. In this system, the pilot reports the anomaly to an adaptive autopilot system that ensures resilient flight control.
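    The flavor of such a shared architecture can be sketched in a few lines. Everything below — the blending rule, the authority values, and the function itself — is an illustrative assumption, not the published pilot-autopilot design:

```python
# Toy sketch of a shared pilot-autopilot command. The blending rule and
# the authority numbers are hypothetical, for illustration only.

def shared_command(pilot_cmd, autopilot_cmd, anomaly_reported,
                   pilot_authority=0.8):
    """Blend the two commands; authority shifts toward the adaptive
    autopilot once the pilot reports an anomaly."""
    alpha = 0.2 if anomaly_reported else pilot_authority
    return alpha * pilot_cmd + (1 - alpha) * autopilot_cmd
```

    In normal flight the pilot's input dominates the blended command; after a reported anomaly, the adaptive autopilot's does.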

    Q: How does your research on adaptive control fit into the concept of smart cities?

    A: Smart cities are an interesting way we can use intelligent systems to promote sustainability. Our team is looking at ride-sharing services in particular. Services like Uber and Lyft have provided new transportation options, but their impact on the carbon footprint has to be considered. We’re looking at developing a system where the number of passenger-miles per unit of energy is maximized through something called “shared mobility on demand services.” Using the alternating minimization approach, we’ve developed an algorithm that can determine the optimal route for multiple passengers traveling to various destinations.
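    Alternating minimization splits a hard joint problem into two easier subproblems solved in turn. The toy sketch below, with made-up coordinates and a brute-force route solver, alternates between ordering each vehicle's stops and reassigning passengers; it illustrates the general technique, not the lab's actual algorithm:

```python
from itertools import permutations
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def route_len(start, stops):
    """Total travel distance visiting stops in order from start."""
    pts = [start] + list(stops)
    return sum(dist(p, q) for p, q in zip(pts, pts[1:]))

def best_order(start, stops):
    # Brute force is fine for the handful of stops per vehicle used here.
    if not stops:
        return ()
    return min(permutations(stops), key=lambda o: route_len(start, o))

def alternating_min(vehicles, passengers, iters=10):
    assign = [0] * len(passengers)        # start: everyone in vehicle 0
    best_total = float("inf")
    for _ in range(iters):
        # Step 1 (routes): with assignments fixed, order each vehicle's stops.
        routes = [best_order(vehicles[v],
                             [p for i, p in enumerate(passengers)
                              if assign[i] == v])
                  for v in range(len(vehicles))]
        total = sum(route_len(vehicles[v], r) for v, r in enumerate(routes))
        if total >= best_total:
            break                          # no further improvement
        best_total = total
        # Step 2 (assignments): move each passenger to the vehicle whose
        # re-optimized route grows the least by adding their stop.
        for i, p in enumerate(passengers):
            def marginal(v):
                others = [q for j, q in enumerate(passengers)
                          if assign[j] == v and j != i]
                return (route_len(vehicles[v], best_order(vehicles[v], others + [p]))
                        - route_len(vehicles[v], best_order(vehicles[v], others)))
            assign[i] = min(range(len(vehicles)), key=marginal)
    return assign
```

    With two vehicles at opposite ends of a street, the alternation quickly settles into assigning each passenger to the nearer vehicle's route.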

    As with the pilot-autopilot dynamic, human behavior is at play here. In behavioral economics there is an interesting model of decision-making known as Prospect Theory. If we give passengers options with regard to which route their shared ride service will take, we are empowering them with free will to accept or reject a route. Prospect Theory shows that people are loss-averse, so if you use pricing as an incentive, they would be willing to walk a bit extra or wait a few minutes longer to join a low-cost ride with an optimized route. If everyone utilized a system like this, the carbon footprint of ride-sharing services could decrease substantially.

    Q: What other ways are you using intelligent systems to promote sustainability?

    A: Renewable energy and sustainability are huge drivers for our research. To enable a world where all of our energy is coming from renewable sources like solar or wind, we need to develop a smart grid that can account for the fact that the sun isn’t always shining and wind isn’t always blowing. These uncertainties are the biggest hurdles to achieving an all-renewable grid. Of course, there are many technologies being developed for batteries that can help store renewable energy, but we are taking a different approach.

    We have created algorithms that can optimally schedule distributed energy resources within the grid — this includes making decisions on when to use onsite generators, how to operate storage devices, and when to call upon demand response technologies, all in response to the economics of using such resources and their physical constraints. If we can develop an interconnected smart grid where, for example, the air conditioning setting in a house is set to 72 degrees instead of 69 degrees automatically when demand is high, there could be substantial savings in energy usage without impacting human comfort. In one of our studies, we applied a distributed proximal atomic coordination algorithm to the grid in Tokyo to demonstrate how this intelligent system could account for the uncertainties present in a grid powered by renewable resources.
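    The economics-driven scheduling idea can be illustrated with a toy merit-order dispatch, where demand is met from the cheapest resources first and demand response is treated as just another resource. All names, costs, and capacities below are invented for illustration; the group's actual algorithms also handle network physics and storage dynamics, which this sketch omits:

```python
# Toy merit-order dispatch. Resources are (name, $/kWh marginal cost,
# kW capacity) — all numbers are made up for illustration.

RESOURCES = [
    ("solar",           0.00, 40),
    ("wind",            0.00, 30),
    ("battery",         0.05, 25),
    ("demand_response", 0.08, 15),   # e.g. nudging thermostats 69F -> 72F
    ("gas_generator",   0.12, 60),
]

def dispatch(demand_kw):
    """Return {resource: kW} meeting demand at least marginal cost."""
    plan, remaining = {}, demand_kw
    # sorted() is stable, so equally cheap resources keep their listed order
    for name, cost, cap in sorted(RESOURCES, key=lambda r: r[1]):
        use = min(cap, remaining)
        if use > 0:
            plan[name] = use
            remaining -= use
        if remaining == 0:
            break
    if remaining > 0:
        raise ValueError("demand exceeds available capacity")
    return plan
```

    Dispatching 100 kW fills the renewable and storage capacity first and calls on demand response only for the last few kilowatts, never touching the expensive generator.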

  • Understanding air pollution from space

    Climate change and air pollution are interlocking crises that threaten human health. Reducing emissions of some air pollutants can help achieve climate goals, and some climate mitigation efforts can in turn improve air quality.

    One part of MIT Professor Arlene Fiore’s research program is to investigate the fundamental science in understanding air pollutants — how long they persist and move through our environment to affect air quality.

    “We need to understand the conditions under which pollutants, such as ozone, form. How much ozone is formed locally and how much is transported long distances?” says Fiore, who notes that Asian air pollution can be transported across the Pacific Ocean to North America. “We need to think about processes spanning local to global dimensions.”

    Fiore, the Peter H. Stone and Paola Malanotte Stone Professor in Earth, Atmospheric and Planetary Sciences, analyzes data from on-the-ground readings and from satellites, along with models, to better understand the chemistry and behavior of air pollutants — which ultimately can inform mitigation strategies and policy setting.

    A global concern

    At the United Nations’ most recent climate change conference, COP26, air quality management was a topic discussed over two days of presentations.

    “Breathing is vital. It’s life. But for the vast majority of people on this planet right now, the air that they breathe is not giving life, but cutting it short,” said Sarah Vogel, senior vice president for health at the Environmental Defense Fund, at the COP26 session.

    “We need to confront this twin challenge now through both a climate and clean air lens, targeting those pollutants that both warm the air and harm our health.”

    Earlier this year, the World Health Organization (WHO) updated the global air quality guidelines it had issued 15 years earlier for six key pollutants, including ozone (O3), nitrogen dioxide (NO2), sulfur dioxide (SO2), and carbon monoxide (CO). The new guidelines are more stringent, based on what the WHO calls the “quality and quantity of evidence” of how these pollutants affect human health. The WHO estimates that roughly 7 million premature deaths each year are attributable to the effects of air pollution.

    “We’ve had all these health-motivated reductions of aerosol and ozone precursor emissions. What are the implications for the climate system, both locally but also around the globe? How does air quality respond to climate change? We study these two-way interactions between air pollution and the climate system,” says Fiore.

    But fundamental science is still required to understand how gases, such as ozone and nitrogen dioxide, linger and move throughout the troposphere — the lowermost layer of our atmosphere, containing the air we breathe.

    “We care about ozone in the air we’re breathing where we live at the Earth’s surface,” says Fiore. “Ozone reacts with biological tissue, and can be damaging to plants and human lungs. Even if you’re a healthy adult, if you’re out running hard during an ozone smog event, you might feel an extra weight on your lungs.”

    Telltale signs from space

    Ozone is not emitted directly, but instead forms through chemical reactions catalyzed by radiation from the sun interacting with nitrogen oxides — pollutants released in large part from burning fossil fuels — and volatile organic compounds. However, current satellite instruments cannot sense ground-level ozone.

    “We can’t retrieve surface- or even near-surface ozone from space,” says Fiore of the satellite data, “although the anticipated launch of a new instrument looks promising for new advances in retrieving lower-tropospheric ozone.” Instead, scientists can look at signatures from other gas emissions to get a sense of ozone formation. “Nitrogen dioxide and formaldehyde are a heavy focus of our research because they serve as proxies for two of the key ingredients that go on to form ozone in the atmosphere.”

    To understand ozone formation via these precursor pollutants, scientists have gathered data for more than two decades using spectrometer instruments aboard satellites. These instruments measure solar backscatter radiation: sunlight at ultraviolet and visible wavelengths that interacts with these pollutants in the Earth’s atmosphere.

    Satellites, such as NASA’s Aura, carry instruments like the Ozone Monitoring Instrument (OMI). OMI, along with European-launched satellites such as the Global Ozone Monitoring Experiment (GOME) and the Scanning Imaging Absorption spectroMeter for Atmospheric CartograpHY (SCIAMACHY), and the newest generation TROPOspheric Monitoring instrument (TROPOMI), all orbit the Earth, collecting data during daylight hours when sunlight is interacting with the atmosphere over a particular location.

    In a recent paper from Fiore’s group, former graduate student Xiaomeng Jin (now a postdoc at the University of California at Berkeley) demonstrated that she could bring together and “beat down the noise in the data,” as Fiore says, to identify trends in ozone formation chemistry over several U.S. metropolitan areas that “are consistent with our on-the-ground understanding from in situ ozone measurements.”

    “This finding implies that we can use these records to learn about changes in surface ozone chemistry in places where we lack on-the-ground monitoring,” says Fiore. Extracting these signals by stringing together satellite data — OMI, GOME, and SCIAMACHY — to produce a two-decade record required reconciling the instruments’ differing orbit days, times, and fields of view on the ground, or spatial resolutions. 

    Currently, spectrometer instruments aboard satellites retrieve data once per day. However, newer instruments, such as the Geostationary Environment Monitoring Spectrometer launched in February 2020 by the National Institute of Environmental Research in the Ministry of Environment of South Korea, will monitor a particular region continuously, providing much more data in real time.

    Over North America, the Tropospheric Emissions: Monitoring of Pollution (TEMPO) collaboration between NASA and the Smithsonian Astrophysical Observatory, led by Kelly Chance of Harvard University, will provide not only a stationary view of the atmospheric chemistry over the continent, but also a finer-resolution view — with the instrument recording pollution data from only a few square miles per pixel (with an anticipated launch in 2022).

    “What we’re very excited about is the opportunity to have continuous coverage where we get hourly measurements that allow us to follow pollution from morning rush hour through the course of the day and see how plumes of pollution are evolving in real time,” says Fiore.

    Data for the people

    Providing Earth-observing data to people in addition to scientists — namely environmental managers, city planners, and other government officials — is the goal for the NASA Health and Air Quality Applied Sciences Team (HAQAST).

    Since 2016, Fiore has been part of HAQAST, including collaborative “tiger teams” — projects that bring together scientists, nongovernment entities, and government officials — to bring data to bear on real issues.

    For example, in 2017, Fiore led a tiger team that provided guidance to state air management agencies on how satellite data can be incorporated into state implementation plans (SIPs). “Submission of a SIP is required for any state with a region in non-attainment of U.S. National Ambient Air Quality Standards to demonstrate their approach to achieving compliance with the standard,” says Fiore. “What we found is that small tweaks in, for example, the metrics we use to convey the science findings, can go a long way to making the science more usable, especially when there are detailed policy frameworks in place that must be followed.”

    Now, in 2021, Fiore is part of two tiger teams announced by HAQAST in late September. One team is looking at data to address environmental justice issues, by providing data to assess communities disproportionately affected by environmental health risks. Such information can be used to estimate the benefits of governmental investments in environmental improvements for disproportionately burdened communities. The other team is looking at urban emissions of nitrogen oxides to try to better quantify and communicate uncertainties in the estimates of anthropogenic sources of pollution.

    “For our HAQAST work, we’re looking at not just the estimate of the exposure to air pollutants, or in other words their concentrations,” says Fiore, “but how confident are we in our exposure estimates, which in turn affect our understanding of the public health burden due to exposure. We have stakeholder partners at the New York Department of Health who will pair exposure datasets with health data to help prioritize decisions around public health.

    “I enjoy working with stakeholders who have questions that require science to answer and can make a difference in their decisions,” Fiore says.

  • Seeing the plasma edge of fusion experiments in new ways with artificial intelligence

    To make fusion energy a viable resource for the world’s energy grid, researchers need to understand the turbulent motion of plasmas: a mix of ions and electrons swirling around in reactor vessels. The plasma particles, following magnetic field lines in toroidal chambers known as tokamaks, must be confined long enough for fusion devices to produce significant gains in net energy, a challenge when the hot edge of the plasma (over 1 million degrees Celsius) is just centimeters away from the much cooler solid walls of the vessel.

    Abhilash Mathews, a PhD candidate in the Department of Nuclear Science and Engineering working at MIT’s Plasma Science and Fusion Center (PSFC), believes this plasma edge to be a particularly rich source of unanswered questions. A turbulent boundary, it is central to understanding plasma confinement, fueling, and the potentially damaging heat fluxes that can strike material surfaces — factors that impact fusion reactor designs.

    To better understand edge conditions, scientists focus on modeling turbulence at this boundary using numerical simulations that will help predict the plasma’s behavior. However, “first principles” simulations of this region are among the most challenging and time-consuming computations in fusion research. Progress could be accelerated if researchers could develop “reduced” computer models that run much faster, but with quantified levels of accuracy.

    For decades, tokamak physicists have regularly used a reduced “two-fluid theory” rather than higher-fidelity models to simulate boundary plasmas in experiments, despite uncertainty about its accuracy. In a pair of recent publications, Mathews begins directly testing the accuracy of this reduced plasma turbulence model in a new way: he combines physics with machine learning.

    “A successful theory is supposed to predict what you’re going to observe,” explains Mathews, “for example, the temperature, the density, the electric potential, the flows. And it’s the relationships between these variables that fundamentally define a turbulence theory. What our work essentially examines is the dynamic relationship between two of these variables: the turbulent electric field and the electron pressure.”

    In the first paper, published in Physical Review E, Mathews employs a novel deep-learning technique that uses artificial neural networks to build representations of the equations governing the reduced fluid theory. With this framework, he demonstrates a way to compute the turbulent electric field from an electron pressure fluctuation in the plasma consistent with the reduced fluid theory. Models commonly used to relate the electric field to pressure break down when applied to turbulent plasmas, but this one is robust even to noisy pressure measurements.
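    As a loose analogy for why a smooth, model-consistent representation helps with noisy measurements, the sketch below compares naive finite differencing of a noisy synthetic "pressure" profile against differentiating a smooth fit, under an invented 1D relation E = -dp/dx. The actual work embeds the reduced two-fluid equations in physics-informed neural networks; none of this toy setup comes from the papers:

```python
import numpy as np

# Toy analogy only: recover a "field" E = -dp/dx from noisy samples of p.
# Naive differencing amplifies the noise; differentiating a smooth
# Chebyshev fit tames it. The profile and noise level are made up.

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 200)
p_true = np.sin(x)                            # synthetic pressure profile
p_noisy = p_true + 0.05 * rng.standard_normal(x.size)

# Naive finite differences of the noisy data...
E_naive = -np.gradient(p_noisy, x)

# ...versus differentiating a smooth fitted representation.
cheb = np.polynomial.chebyshev
coeffs = cheb.chebfit(x, p_noisy, deg=8)
E_fit = -cheb.chebval(x, cheb.chebder(coeffs))

E_true = -np.cos(x)
err_naive = np.sqrt(np.mean((E_naive - E_true) ** 2))
err_fit = np.sqrt(np.mean((E_fit - E_true) ** 2))
assert err_fit < err_naive        # the smooth fit recovers E far better
```

    The same intuition — constrain the answer to respect an underlying model rather than differentiating raw data — is what makes the physics-informed approach robust to noisy pressure measurements.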

    In the second paper, published in Physics of Plasmas, Mathews further investigates this connection, contrasting it against higher-fidelity turbulence simulations. This first-of-its-kind comparison of turbulence across models has previously been difficult — if not impossible — to evaluate precisely. Mathews finds that in plasmas relevant to existing fusion devices, the reduced fluid model’s predicted turbulent fields are consistent with high-fidelity calculations. In this sense, the reduced turbulence theory works. But to fully validate it, “one should check every connection between every variable,” says Mathews.

    Mathews’ advisor, Principal Research Scientist Jerry Hughes, notes that plasma turbulence is notoriously difficult to simulate, more so than the familiar turbulence seen in air and water. “This work shows that, under the right set of conditions, physics-informed machine-learning techniques can paint a very full picture of the rapidly fluctuating edge plasma, beginning from a limited set of observations. I’m excited to see how we can apply this to new experiments, in which we essentially never observe every quantity we want.”

    These physics-informed deep-learning methods pave new ways in testing old theories and expanding what can be observed from new experiments. David Hatch, a research scientist at the Institute for Fusion Studies at the University of Texas at Austin, believes these applications are the start of a promising new technique.

    “Abhi’s work is a major achievement with the potential for broad application,” he says. “For example, given limited diagnostic measurements of a specific plasma quantity, physics-informed machine learning could infer additional plasma quantities in a nearby domain, thereby augmenting the information provided by a given diagnostic. The technique also opens new strategies for model validation.”

    Mathews sees exciting research ahead.

    “Translating these techniques into fusion experiments for real edge plasmas is one goal we have in sight, and work is currently underway,” he says. “But this is just the beginning.”

    Mathews was supported in this work by the Manson Benedict Fellowship, Natural Sciences and Engineering Research Council of Canada, and U.S. Department of Energy Office of Science under the Fusion Energy Sciences program.

  • Helping to make nuclear fusion a reality

    Up until she served in the Peace Corps in Malawi, Rachel Bielajew was open to a career reboot. Having studied nuclear engineering as an undergraduate at the University of Michigan at Ann Arbor, she had graduate school on her mind. But seeing the drastic impacts of climate change play out in real time in Malawi — the lives of the country’s subsistence farmers swing wildly depending on the rains — convinced Bielajew of the importance of nuclear engineering. Bielajew was struck that her high school students in the small town of Chisenga had a shaky understanding of math, but universally understood global warming. “The concept of the changing world due to human impact was evident, and they could see it,” Bielajew says.

    Bielajew was looking to work on solutions that could positively impact global problems and feed her love of physics. Nuclear engineering, especially the study of fusion as a carbon-free energy source, checked off both boxes. Bielajew is now a fourth-year doctoral candidate in the Department of Nuclear Science and Engineering (NSE). She researches magnetic confinement fusion in the Plasma Science and Fusion Center (PSFC) with Professor Anne White.

    Researching fusion’s big challenge

    You need to confine plasma effectively in order to generate the extremely high temperatures (100 million degrees Celsius) fusion needs, without melting the walls of the tokamak, the device that hosts these reactions. Magnets can do the job, but “plasmas are weird, they behave strangely and are challenging to understand,” Bielajew says. Small instabilities in plasma can coalesce into fluctuating turbulence that can drive heat and particles out of the machine.

    In high-confinement mode, the edges of the plasma have less tolerance for such unruly behavior. “The turbulence gets damped out and sheared apart at the edge,” Bielajew says. This might seem like a good thing, but high-confinement plasmas have their own challenges. They are so tightly bound that they create edge-localized modes (ELMs), bursts of particles and energy that can be extremely damaging to the machine.

    The questions Bielajew is looking to answer: How do we get high confinement without ELMs? How do turbulence and transport play a role in plasmas? “We do not fully understand turbulence, even though we have studied it for a long time,” Bielajew says. “It is a big and important problem to solve for fusion to be a reality. I like that challenge.”

    A love of science

    Confronting such challenges head-on has been part of Bielajew’s toolkit since she was a child growing up in Ann Arbor, Michigan. Her father, Alex Bielajew, is a professor of nuclear engineering at the University of Michigan, and Bielajew’s mother also pursued graduate studies.

    Bielajew’s parents encouraged her to follow her own path and she found it led to her father’s chosen profession: nuclear engineering. Once she decided to pursue research in fusion, MIT stood out as a school she could set her sights on. “I knew that MIT had an extensive program in fusion and a lot of faculty in the field,” Bielajew says. The mechanics of the application were challenging: Chisenga had limited internet access, so Bielajew had to ride on the back of a pickup truck to meet a friend in a city a few hours away and use his phone as a hotspot to send the documents.

    A similar tenacity has surfaced in Bielajew’s approach to research during the Covid-19 pandemic. Working off a blueprint, Bielajew built the Correlation Electron Cyclotron Emission diagnostic, which measures turbulent electron temperature fluctuations. Through a collaboration, Bielajew conducts her plasma research at the ASDEX Upgrade tokamak in Germany. Ordinarily, Bielajew would ship the diagnostic to Germany, travel there to install it, and conduct the research in person. The pandemic threw a wrench in those plans, so Bielajew shipped the diagnostic and relied on team members to install it. She Zooms into the control room and trusts others to run the plasma experiments.

    DEI advocate

    Bielajew is very hands-on with another endeavor: improving diversity, equity, and inclusion (DEI) in her own backyard. Having grown up with parental encouragement and in an environment that never doubted her place as a woman in engineering, Bielajew realizes not everyone has the same opportunities. “I wish that the world was in a place where all I had to do was care about my research, but it’s not,” Bielajew says. While science can solve many problems, more fundamental ones about equity need humans to act in specific ways, she points out. “I want to see more women represented, more people of color. Everyone needs a voice in building a better world,” Bielajew says.

    To get there, Bielajew co-launched NSE’s Graduate Application Assistance Program, which connects underrepresented student applicants with NSE mentors. She has been the DEI officer with NSE’s student group, ANS, and is very involved in the department’s DEI committee.

    As for future research, Bielajew hopes to concentrate on the experiments that make her question existing paradigms about plasmas under high confinement. Bielajew has registered more head-scratching “hmm” moments than “a-ha” ones. Measurements from her experiments drive the need for more intensive study.

    Bielajew’s dogs, Dobby and Winky, keep her company through it all. They came home with her from Malawi.

  • Design’s new frontier

    In the 1960s, the advent of computer-aided design (CAD) sparked a revolution in design. For his PhD thesis in 1963, MIT Professor Ivan Sutherland developed Sketchpad, a game-changing software program that enabled users to draw, move, and resize shapes on a computer. Over the course of the next few decades, CAD software reshaped how everything from consumer products to buildings and airplanes were designed.

    “CAD was part of the first wave in computing in design. The ability of researchers and practitioners to represent and model designs using computers was a major breakthrough and still is one of the biggest outcomes of design research, in my opinion,” says Maria Yang, Gail E. Kendall Professor and director of MIT’s Ideation Lab.

    Innovations in 3D printing during the 1980s and 1990s expanded CAD’s capabilities beyond traditional injection molding and casting methods, providing designers even more flexibility. Designers could sketch, ideate, and develop prototypes or models faster and more efficiently. Meanwhile, with the push of a button, software like that developed by Professor Emeritus David Gossard of MIT’s CAD Lab could solve equations simultaneously to produce a new geometry on the fly.

    In recent years, mechanical engineers have expanded the computing tools they use to ideate, design, and prototype. More sophisticated algorithms and the explosion of machine learning and artificial intelligence technologies have sparked a second revolution in design engineering.

    Researchers and faculty at MIT’s Department of Mechanical Engineering are utilizing these technologies to re-imagine how the products, systems, and infrastructures we use are designed. These researchers are at the forefront of the new frontier in design.

    Computational design

    Faez Ahmed wants to reinvent the wheel, or at least the bicycle wheel. He and his team at MIT’s Design Computation & Digital Engineering Lab (DeCoDE) use an artificial intelligence-driven design method that can generate entirely novel and improved designs for a range of products — including the traditional bicycle. They create advanced computational methods to blend human-driven design with simulation-based design.

    “The focus of our DeCoDE lab is computational design. We are looking at how we can create machine learning and AI algorithms to help us discover new designs that are optimized based on specific performance parameters,” says Ahmed, an assistant professor of mechanical engineering at MIT.

    For their work using AI-driven design for bicycles, Ahmed and his collaborator Professor Daniel Frey wanted to make it easier to design customizable bicycles, and by extension, encourage more people to use bicycles over transportation methods that emit greenhouse gases.

    To start, the group gathered a dataset of 4,500 bicycle designs. Using this massive dataset, they tested the limits of what machine learning could do. First, they developed algorithms to group similar-looking bicycles together and explore the design space. They then created machine learning models that could successfully predict which components are key in identifying a bicycle style, such as a road bike versus a mountain bike.

    Once the algorithms were good enough at identifying bicycle designs and parts, the team proposed novel machine learning tools that could use this data to create a unique and creative design for a bicycle based on certain performance parameters and rider dimensions.

    Ahmed used a generative adversarial network — or GAN — as the basis of this model. GAN models utilize neural networks that can create new designs based on vast amounts of data. However, using GAN models alone would result in homogeneous designs that lack novelty and can’t be assessed in terms of performance. To address these issues in design problems, Ahmed has developed a new method which he calls “PaDGAN,” performance augmented diverse GAN.

    “When we apply this type of model, what we see is that we can get large improvements in the diversity, quality, as well as novelty of the designs,” Ahmed explains.
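    The quality-plus-diversity idea can be illustrated with a determinantal point process (DPP)-style score of the kind such methods build on: a set of designs is rated by the log-determinant of a quality-weighted similarity kernel, which rewards sets that are both good and spread out. The 2D "designs," kernel, and quality values below are invented for illustration and are not the PaDGAN loss itself:

```python
import numpy as np

# Illustration of a DPP-style quality-diversity score on made-up
# 2D "designs". Near-duplicate designs make the kernel nearly
# singular, so its log-determinant (the score) collapses.

def dpp_score(designs, quality, length_scale=1.0):
    """log-determinant of L, where L_ij = q_i * sim(x_i, x_j) * q_j."""
    X = np.asarray(designs, float)
    q = np.asarray(quality, float)
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    S = np.exp(-sq / (2 * length_scale ** 2))     # RBF similarity
    L = q[:, None] * S * q[None, :]
    sign, logdet = np.linalg.slogdet(L)
    return logdet if sign > 0 else -np.inf

# Three near-identical designs vs. three spread-out ones, equal quality:
clones = [(0, 0), (0.01, 0), (0, 0.01)]
spread = [(0, 0), (3, 0), (0, 3)]
q = [1.0, 1.0, 1.0]
assert dpp_score(spread, q) > dpp_score(clones, q)   # diversity wins
```

    Raising the quality values raises the score too, so an objective built on this term pushes a generator toward sets of designs that are simultaneously diverse and high-performing.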

    Using this approach, Ahmed’s team developed an open-source computational design tool for bicycles freely available on their lab website. They hope to further develop a set of generalizable tools that can be used across industries and products.

    Longer term, Ahmed has his sights set on loftier goals. He hopes the computational design tools he develops could lead to “design democratization,” putting more power in the hands of the end user.

    “With these algorithms, you can have more individualization where the algorithm assists a customer in understanding their needs and helps them create a product that satisfies their exact requirements,” he adds.

    Using algorithms to democratize the design process is a goal shared by Stefanie Mueller, an associate professor in electrical engineering and computer science and mechanical engineering.

    Personal fabrication

    Platforms like Instagram give users the freedom to instantly edit their photographs or videos using filters. In one click, users can alter the palette, tone, and brightness of their content by applying filters that range from bold colors to sepia-toned or black-and-white. Mueller, X-Window Consortium Career Development Professor, wants to bring this concept of the Instagram filter to the physical world.

    “We want to explore how digital capabilities can be applied to tangible objects. Our goal is to bring reprogrammable appearance to the physical world,” explains Mueller, director of the HCI Engineering Group based out of MIT’s Computer Science and Artificial Intelligence Laboratory.

    Mueller’s team utilizes a combination of smart materials, optics, and computation to advance personal fabrication technologies that would allow end users to alter the design and appearance of the products they own. They tested this concept in a project they dubbed “Photo-Chromeleon.”

    First, a mix of photochromic cyan, magenta, and yellow dyes is airbrushed onto an object — in this instance, a 3D sculpture of a chameleon. Using software they developed, the team sketches the exact color pattern they want to achieve on the object itself. An ultraviolet light shines on the object to activate the dyes.

    To actually create the physical pattern on the object, Mueller has developed an optimization algorithm to use alongside a normal office projector outfitted with red, green, and blue LED lights. These lights shine on specific pixels on the object for a given period of time to physically change the makeup of the photochromic pigments.

    “This fancy algorithm tells us exactly how long we have to shine the red, green, and blue light on every single pixel of an object to get the exact pattern we’ve programmed in our software,” says Mueller.
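
    A toy version of that calculation: suppose each dye starts fully saturated after the UV flash and fades exponentially under its complementary LED color. The decay rates and the single-channel model below are invented for illustration; the actual system measures the dyes' absorption properties and must account for interactions between channels:

```python
import math

# Illustrative (assumed) desaturation rates for each photochromic dye
# channel under its complementary LED color, in 1/seconds.
DECAY_RATES = {"cyan": 0.020, "magenta": 0.035, "yellow": 0.015}

def exposure_times(target_saturation):
    """Compute how long to shine each LED on one pixel.

    Assumes each dye starts fully saturated (after the UV flash) and
    fades exponentially under its complementary light:
        s(t) = exp(-k * t)  =>  t = -ln(s_target) / k
    """
    times = {}
    for channel, s_target in target_saturation.items():
        if not 0.0 < s_target <= 1.0:
            raise ValueError("saturation must be in (0, 1]")
        times[channel] = -math.log(s_target) / DECAY_RATES[channel]
    return times

# Example: a pixel that should keep most of its cyan but little yellow.
pixel = {"cyan": 0.9, "magenta": 0.5, "yellow": 0.1}
for channel, t in exposure_times(pixel).items():
    print(f"{channel}: {t:.1f} s")
```

    The real optimization runs this kind of calculation for every pixel on the object's surface at once, which is why a projector, rather than a single light, is the natural instrument.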

    Giving this freedom to the end user enables limitless possibilities. Mueller’s team has applied this technology to iPhone cases, shoes, and even cars. In the case of shoes, Mueller envisions a shoebox embedded with UV and LED light projectors. Users could put their shoes in the box overnight and the next day have a pair of shoes in a completely new pattern.

    Mueller wants to expand her personal fabrication methods to the clothes we wear. Rather than utilize the light projection technique developed in the PhotoChromeleon project, her team is exploring the possibility of weaving LEDs directly into clothing fibers, allowing people to change their shirt’s appearance as they wear it. These personal fabrication technologies could completely alter consumer habits.

    “It’s very interesting for me to think about how these computational techniques will change product design on a high level,” adds Mueller. “In the future, a consumer could buy a blank iPhone case and update the design on a weekly or daily basis.”

    Computational fluid dynamics and participatory design

    Another team of mechanical engineers, including Sili Deng, the Brit (1961) & Alex (1949) d’Arbeloff Career Development Professor, are developing a different kind of design tool that could have a large impact on individuals in low- and middle-income countries across the world.

    As Deng walked down the hallway of Building 1 on MIT’s campus, a monitor playing a video caught her eye. The video featured work done by mechanical engineers and MIT D-Lab on developing cleaner burning briquettes for cookstoves in Uganda. Deng immediately knew she wanted to get involved.

    “As a combustion scientist, I’ve always wanted to work on such a tangible real-world problem, but the field of combustion tends to focus more heavily on the academic side of things,” explains Deng.

    After reaching out to colleagues in MIT D-Lab, Deng joined a collaborative effort to develop a new cookstove design tool for the 3 billion people across the world who burn solid fuels to cook and heat their homes. These stoves often emit soot and carbon monoxide, leading not only to millions of deaths each year, but also worsening the world’s greenhouse gas emission problem.

    The team is taking a three-pronged approach to developing this solution, using a combination of participatory design, physical modeling, and experimental validation to create a tool that will lead to the production of high-performing, low-cost energy products.

    Deng and her team in the Deng Energy and Nanotechnology Group use physics-based modeling for the combustion and emission process in cookstoves.

    “My team is focused on computational fluid dynamics. We use computational and numerical studies to understand the flow field where the fuel is burned and releases heat,” says Deng.

    These flow mechanics are crucial to understanding how to minimize heat loss and make cookstoves more efficient, as well as learning how dangerous pollutants are formed and released in the process.

    Using computational methods, Deng’s team performs three-dimensional simulations of the complex chemistry and transport coupling at play in the combustion and emission processes. They then use these simulations to build a combustion model for how fuel is burned and a pollution model that predicts carbon monoxide emissions.
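
    One building block of any such combustion model is the strong temperature dependence of reaction rates, captured by the Arrhenius law. The pre-exponential factor and activation energy below are hypothetical, chosen only to show the effect, not parameters from the team's models:

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def arrhenius_rate(A, Ea, T):
    """Arrhenius reaction-rate coefficient: k = A * exp(-Ea / (R*T))."""
    return A * math.exp(-Ea / (R * T))

# Hypothetical parameters for a single-step CO-oxidation surrogate.
A = 1.3e8    # pre-exponential factor (assumed, 1/s)
Ea = 1.25e5  # activation energy, J/mol

# Hotter zones burn off CO far faster, which is why the flow field,
# mixing, and heat loss matter so much for emissions.
for T in (800.0, 1200.0, 1600.0):
    print(f"T = {T:>6.0f} K  ->  k = {arrhenius_rate(A, Ea, T):.3e}")
```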

    Deng’s models are used by a group led by Daniel Sweeney in MIT D-Lab to perform experimental validation on stove prototypes. Finally, Professor Maria Yang uses participatory design methods to integrate user feedback, ensuring the design tool can actually be used by people across the world.

    The end goal for this collaborative team is to not only provide local manufacturers with a prototype they could produce themselves, but to also provide them with a tool that can tweak the design based on local needs and available materials.

    Deng sees wide-ranging applications for the computational fluid dynamics her team is developing.

    “We see an opportunity to use physics-based modeling, augmented with a machine learning approach, to come up with chemical models for practical fuels that help us better understand combustion. Therefore, we can design new methods to minimize carbon emissions,” she adds.

    While Deng is utilizing simulations and machine learning at the molecular level to improve designs, others are taking a more macro approach.

    Designing intelligent systems

    When it comes to intelligent design, Navid Azizan thinks big. He hopes to help create future intelligent systems that are capable of making decisions autonomously by using the enormous amounts of data emerging from the physical world. From smart robots and autonomous vehicles to smart power grids and smart cities, Azizan focuses on the analysis, design, and control of intelligent systems.

    Achieving such massive feats takes a truly interdisciplinary approach that draws upon various fields such as machine learning, dynamical systems, control, optimization, statistics, and network science, among others.

    “Developing intelligent systems is a multifaceted problem, and it really requires a confluence of disciplines,” says Azizan, assistant professor of mechanical engineering with a dual appointment in MIT’s Institute for Data, Systems, and Society (IDSS). “To create such systems, we need to go beyond standard approaches to machine learning, such as those commonly used in computer vision, and devise algorithms that can enable safe, efficient, real-time decision-making for physical systems.”

    For robot control to work in the complex dynamic environments that arise in the real world, real-time adaptation is key. If, for example, an autonomous vehicle is going to drive in icy conditions or a drone is operating in windy conditions, they need to be able to adapt to their new environment quickly.

    To address this challenge, Azizan and his collaborators at MIT and Stanford University have developed a new algorithm that combines adaptive control, a powerful methodology from control theory, with meta learning, a new machine learning paradigm.

    “This ‘control-oriented’ learning approach outperforms the existing ‘regression-oriented’ methods, which are mostly focused on just fitting the data, by a wide margin,” says Azizan.
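
    The flavor of adaptive control can be seen in a one-dimensional toy problem: a system pushed by an unknown constant disturbance, such as a steady wind, whose controller estimates and cancels the disturbance online. The dynamics, gains, and adaptation law below are a minimal textbook-style sketch, not the paper's control-oriented meta-learning algorithm:

```python
def simulate(w=3.0, steps=500, dt=0.01, gain=5.0, adapt_rate=20.0):
    """Drive the state x to zero despite an unknown constant disturbance w.

    The controller never sees w directly; it maintains an estimate w_hat
    that is updated online from the tracking error, so the cancellation
    improves as the system runs.
    """
    x, w_hat = 1.0, 0.0
    for _ in range(steps):
        u = -gain * x - w_hat         # feedback plus disturbance cancellation
        x += (u + w) * dt             # true dynamics include the hidden w
        w_hat += adapt_rate * x * dt  # adaptation law driven by the error
    return x, w_hat

x_final, w_est = simulate()
print(f"final state {x_final:.4f}, disturbance estimate {w_est:.3f} (true 3.0)")
```

    After a few simulated seconds the estimate converges to the true disturbance and the state settles near zero; roughly speaking, meta-learning in this setting is about priming the learned model so that online adaptation of this kind happens fast.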

    Another critical aspect of deploying machine learning algorithms in physical systems that Azizan and his team hope to address is safety. Deep neural networks are a crucial part of autonomous systems. They are used for interpreting complex visual inputs and making data-driven predictions of future behavior in real time. However, Azizan urges caution.

    “These deep neural networks are only as good as their training data, and their predictions can often be untrustworthy in scenarios not covered by their training data,” he says. Making decisions based on such untrustworthy predictions could lead to fatal accidents in autonomous vehicles or other safety-critical systems.

    To avoid these potentially catastrophic events, Azizan proposes that it is imperative to equip neural networks with a measure of their uncertainty. When the uncertainty is high, they can then be switched to a “safe policy.”
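
    The switching logic itself is simple to sketch. The one-dimensional observation, the toy uncertainty score, and the threshold below are all hypothetical stand-ins; in practice the score would come from a learned uncertainty estimate attached to the network:

```python
# Toy setup: the network's training data clustered near observation 0.
TRAIN_MEAN, TRAIN_STD = 0.0, 1.0

def toy_uncertainty(obs):
    """Stand-in uncertainty score that grows as the input drifts away
    from the training distribution."""
    return abs(obs - TRAIN_MEAN) / TRAIN_STD

def learned_policy(obs):
    return "proceed"      # the data-driven decision

def safe_policy(obs):
    return "slow down"    # the conservative fallback

def act(obs, threshold=2.0):
    # Fall back to the safe policy whenever the prediction is too uncertain.
    if toy_uncertainty(obs) > threshold:
        return safe_policy(obs)
    return learned_policy(obs)

print(act(0.3))   # familiar input: trust the learned policy
print(act(5.0))   # far out-of-distribution: play it safe
```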

    In pursuit of this goal, Azizan and his collaborators have developed a new algorithm known as SCOD, short for Sketching Curvature for Out-of-Distribution Detection. The framework can be embedded within any deep neural network to equip it with a measure of its uncertainty.

    “This algorithm is model-agnostic and can be applied to neural networks used in various kinds of autonomous systems, whether it’s drones, vehicles, or robots,” says Azizan.

    Azizan hopes to continue working on algorithms for even larger-scale systems. He and his team are designing efficient algorithms to better control supply and demand in smart energy grids. According to Azizan, even if we create the most efficient solar panels and batteries, we can never achieve a sustainable grid powered by renewable resources without the right control mechanisms.

    Mechanical engineers like Ahmed, Mueller, Deng, and Azizan serve as the key to realizing the next revolution of computing in design.

    “MechE is in a unique position at the intersection of the computational and physical worlds,” Azizan says. “Mechanical engineers build a bridge between theoretical, algorithmic tools and real, physical world applications.”

    Sophisticated computational tools, coupled with the ground truth mechanical engineers have in the physical world, could unlock limitless possibilities for design engineering, well beyond what could have been imagined in those early days of CAD.


    Radio-frequency wave scattering improves fusion simulations

    In the quest for fusion energy, understanding how radio-frequency (RF) waves travel (or “propagate”) in the turbulent interior of a fusion furnace is crucial to maintaining an efficient, continuously operating power plant. Transmitted by an antenna in the doughnut-shaped vacuum chamber common to magnetic confinement fusion devices called tokamaks, RF waves heat the plasma fuel and drive its current around the toroidal interior. The efficiency of this process can be affected by how the wave’s trajectory is altered (or “scattered”) by conditions within the chamber.

    Researchers have tried to study these RF processes using computer simulations to match the experimental conditions. A good match would validate the computer model, and raise confidence in using it to explore new physics and design future RF antennas that perform efficiently. While the simulations can accurately calculate how much total current is driven by RF waves, they do a poor job at predicting where exactly in the plasma this current is produced.

    Now, in a paper published in the Journal of Plasma Physics, MIT researchers suggest that the models for RF wave propagation used for these simulations have not properly taken into account the way these waves are scattered as they encounter dense, turbulent filaments present in the edge of the plasma known as the “scrape-off layer” (SOL).

    Bodhi Biswas, a graduate student at the Plasma Science and Fusion Center (PSFC) working under the direction of Senior Research Scientist Paul Bonoli, School of Engineering Distinguished Professor of Engineering Anne White, and Principal Research Scientist Abhay Ram, is the paper’s lead author. Ram compares the scattering that occurs in this situation to a wave of water hitting a lily pad: “The wave crashing with the lily pad will excite a secondary, scattered wave that makes circular ripples traveling outward from the plant. The incoming wave has transferred energy to the scattered wave. Some of this energy is reflected backwards (in relation to the incoming wave), some travels forwards, and some is deflected to the side. The specifics all depend on the particular attributes of the wave, the water, and the lily pad. In our case, the lily pad is the plasma filament.”

    Until now, researchers have not properly taken these filaments and the scattering they provoke into consideration when modeling the turbulence inside a tokamak, leading to an underestimation of wave scattering. Using data from PSFC tokamak Alcator C-Mod, Biswas shows that using the new method of modeling RF-wave scattering from SOL turbulence provides results considerably different from older models, and a much better match to experiments. Notably, the “lower-hybrid” wave spectrum, crucial to driving plasma current in a steady-state tokamak, appears to scatter asymmetrically, an important effect not accounted for in previous models.

    Biswas’s advisor Paul Bonoli is well acquainted with traditional “ray-tracing” models, which evaluate a wave trajectory by dividing it into a series of rays. He has used this model, with its limitations, for decades in his own research to understand plasma behavior. Bonoli says he is pleased that “the research results in Bodhi’s doctoral thesis have refocused attention on the profound effect that edge turbulence can have on the propagation and absorption of radio-frequency power.”

    Although ray-tracing treatments of scattering do not fully capture all the wave physics, a “full-wave” model that does would be prohibitively expensive. To solve the problem economically, Biswas splits his analysis into two parts: (1) using ray tracing to model the trajectory of the wave in the tokamak assuming no turbulence, while (2) modifying this ray-trajectory with the new scattering model that accounts for the turbulent plasma filaments.

    “This scattering model is a full-wave model, but computed over a small region and in a simplified geometry so that it is very quick to do,” says Biswas. “The result is a ray-tracing model that, for the first time, accounts for full-wave scattering physics.”
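
    The structure of this two-part approach can be sketched in a few lines. The straight-line ray march, the per-step filament-encounter probability, and the Gaussian deflection below are invented placeholders; in the actual model the trajectory follows from the plasma's dispersion relation and the deflection statistics come from the small full-wave scattering calculation:

```python
import math
import random

random.seed(0)  # reproducible toy run

def trace_ray(x, y, kx, ky, steps=200, dt=0.01,
              filament_prob=0.05, kick=0.2):
    """March a ray forward; with some probability per step it crosses a
    turbulent filament and its wavevector direction receives a kick."""
    path = [(x, y)]
    for _ in range(steps):
        x += kx * dt
        y += ky * dt
        if random.random() < filament_prob:
            # Scattering event: rotate the wavevector by a random angle.
            theta = random.gauss(0.0, kick)
            kx, ky = (kx * math.cos(theta) - ky * math.sin(theta),
                      kx * math.sin(theta) + ky * math.cos(theta))
        path.append((x, y))
    return path

path = trace_ray(0.0, 0.0, 1.0, 0.0)
print(f"ray ends near ({path[-1][0]:.2f}, {path[-1][1]:.2f}) "
      f"after {len(path) - 1} steps")
```

    With the filament probability set to zero the ray travels in a straight line, which is essentially what an unmodified ray-tracing model predicts in a quiescent edge.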

    Biswas notes that this model bridges the gap between simple scattering models that fail to match experiment and full-wave models that are prohibitively expensive, providing reasonable accuracy at low cost.

    “Our results suggest scattering is an important effect, and that it must be taken into account when designing future RF antennas. The low cost of our scattering model makes this very doable.”

    “This is exciting progress,” says Syun’ichi Shiraiwa, staff research physicist at the Princeton Plasma Physics Laboratory. “I believe that Bodhi’s work provides a clear path to the end of a long tunnel we have been in. His work not only demonstrates that the wave scattering, once accurately accounted for, can explain the experimental results, but also answers a puzzling question: why previous scattering models were incomplete, and their results unsatisfying.”

    Work is now underway to apply this model to more plasmas from Alcator C-Mod and other tokamaks. Biswas believes that this new model will be particularly applicable to high-density tokamak plasmas, for which the standard ray-tracing model has been noticeably inaccurate. He is also excited that the model could be validated by DIII-D National Fusion Facility, a fusion experiment on which the PSFC collaborates.

    “The DIII-D tokamak will soon be capable of launching lower hybrid waves and measuring its electric field in the scrape-off layer. These measurements could provide direct evidence of the asymmetric scattering effect predicted by our model.”


    Study: Global cancer risk from burning organic matter comes from unregulated chemicals

    Whenever organic matter is burned, such as in a wildfire, a power plant, a car’s exhaust, or in daily cooking, the combustion releases polycyclic aromatic hydrocarbons (PAHs) — a class of pollutants that is known to cause lung cancer.

    There are more than 100 known types of PAH compounds emitted daily into the atmosphere. Regulators, however, have historically relied on measurements of a single compound, benzo(a)pyrene, to gauge a community’s risk of developing cancer from PAH exposure. Now MIT scientists have found that benzo(a)pyrene may be a poor indicator of this type of cancer risk.

    In a modeling study appearing today in the journal GeoHealth, the team reports that benzo(a)pyrene plays a small part — about 11 percent — in the global risk of developing PAH-associated cancer. Instead, 89 percent of that cancer risk comes from other PAH compounds, many of which are not directly regulated.

    Interestingly, about 17 percent of PAH-associated cancer risk comes from “degradation products” — chemicals that are formed when emitted PAHs react in the atmosphere. Many of these degradation products can in fact be more toxic than the emitted PAH from which they formed.

    The team hopes the results will encourage scientists and regulators to look beyond benzo(a)pyrene, to consider a broader class of PAHs when assessing a community’s cancer risk.

    “Most of the regulatory science and standards for PAHs are based on benzo(a)pyrene levels. But that is a big blind spot that could lead you down a very wrong path in terms of assessing whether cancer risk is improving or not, and whether it’s relatively worse in one place than another,” says study author Noelle Selin, a professor in MIT’s Institute for Data, Systems, and Society, and the Department of Earth, Atmospheric and Planetary Sciences.

    Selin’s MIT co-authors include Jesse Kroll, Amy Hrdina, Ishwar Kohale, Forest White, and Bevin Engelward, and Jamie Kelly (who is now at University College London). Peter Ivatt and Mathew Evans at the University of York are also co-authors.

    Chemical pixels

    Benzo(a)pyrene has historically been the poster chemical for PAH exposure. The compound’s indicator status is largely based on early toxicology studies. But recent research suggests the chemical may not be the PAH representative that regulators have long relied upon.   

    “There has been a bit of evidence suggesting benzo(a)pyrene may not be very important, but this was from just a few field studies,” says Kelly, a former postdoc in Selin’s group and the study’s lead author.

    Kelly and his colleagues instead took a systematic approach to evaluate benzo(a)pyrene’s suitability as a PAH indicator. The team began by using GEOS-Chem, a global, three-dimensional chemical transport model that breaks the world into individual grid boxes and simulates within each box the reactions and concentrations of chemicals in the atmosphere.

    They extended this model to include chemical descriptions of how various PAH compounds, including benzo(a)pyrene, would react in the atmosphere. The team then plugged in recent data from emissions inventories and meteorological observations, and ran the model forward to simulate the concentrations of various PAH chemicals around the world over time.

    Risky reactions

    In their simulations, the researchers started with 16 relatively well-studied PAH chemicals, including benzo(a)pyrene, and traced the concentrations of these chemicals, plus the concentration of their degradation products over two generations, or chemical transformations. In total, the team evaluated 48 PAH species.

    They then compared these concentrations with actual concentrations of the same chemicals, recorded by monitoring stations around the world. This comparison was close enough to show that the model’s concentration predictions were realistic.

    Then within each model’s grid box, the researchers related the concentration of each PAH chemical to its associated cancer risk; to do this, they had to develop a new method based on previous studies in the literature to avoid double-counting risk from the different chemicals. Finally, they overlaid population density maps to predict the number of cancer cases globally, based on the concentration and toxicity of a specific PAH chemical in each location.

    Dividing the cancer cases by population produced the cancer risk associated with that chemical. In this way, the team calculated the cancer risk for each of the 48 compounds, then determined each chemical’s individual contribution to the total risk.
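
    A toy version of this bookkeeping shows how an indicator compound can mislead. The species, concentrations, and potencies below are hypothetical, and risk is modeled as a bare concentration-times-potency product rather than the study's method, which avoids double-counting:

```python
# Hypothetical concentrations (ng/m^3) and relative cancer potencies for
# a few PAH species; the real study evaluated 48 species with modeled
# concentrations from GEOS-Chem.
compounds = {
    "benzo(a)pyrene":      {"conc": 1.00, "potency": 1.0},
    "other_parent_pah":    {"conc": 6.00, "potency": 0.8},
    "degradation_product": {"conc": 0.05, "potency": 40.0},
}

def risk_shares(compounds):
    """Each compound's fractional contribution to the total PAH cancer
    risk, modeling risk as concentration x potency (a simplification)."""
    risks = {name: c["conc"] * c["potency"] for name, c in compounds.items()}
    total = sum(risks.values())
    return {name: r / total for name, r in risks.items()}

for name, share in risk_shares(compounds).items():
    print(f"{name:>20}: {share:5.1%}")
```

    In this invented mix, a degradation product present at one-twentieth the concentration of benzo(a)pyrene contributes roughly twice its share of the total risk, which is the qualitative pattern the study reports.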

    This analysis revealed that benzo(a)pyrene had a surprisingly small contribution, of about 11 percent, to the overall risk of developing cancer from PAH exposure globally. Eighty-nine percent of cancer risk came from other chemicals. And 17 percent of this risk arose from degradation products.

    “We see places where you can find concentrations of benzo(a)pyrene are lower, but the risk is higher because of these degradation products,” Selin says. “These products can be orders of magnitude more toxic, so the fact that they’re at tiny concentrations doesn’t mean you can write them off.”

    When the researchers compared calculated PAH-associated cancer risks around the world, they found significant differences depending on whether that risk calculation was based solely on concentrations of benzo(a)pyrene or on a region’s broader mix of PAH compounds.

    “If you use the old method, you would find the lifetime cancer risk is 3.5 times higher in Hong Kong versus southern India, but taking into account the differences in PAH mixtures, you get a difference of 12 times,” Kelly says. “So, there’s a big difference in the relative cancer risk between the two places. And we think it’s important to expand the group of compounds that regulators are thinking about, beyond just a single chemical.”

    The team’s study “provides an excellent contribution to better understanding these ubiquitous pollutants,” says Elisabeth Galarneau, an air quality expert and PhD research scientist in Canada’s Department of the Environment. “It will be interesting to see how these results compare to work being done elsewhere … to pin down which (compounds) need to be tracked and considered for the protection of human and environmental health.”

    This research was conducted in MIT’s Superfund Research Center and is supported in part by the National Institute of Environmental Health Sciences Superfund Basic Research Program, and the National Institutes of Health.


    Climate and sustainability classes expand at MIT

    In fall 2019, a new class, 6.S898/12.S992 (Climate Change Seminar), arrived at MIT. It was, at the time, the only course in the Department of Electrical Engineering and Computer Science (EECS) to tackle the science of climate change. The class covered climate models and simulations alongside atmospheric science, policy, and economics.

    Ron Rivest, MIT Institute Professor of Computer Science, was one of the class’s three instructors, with Alan Edelman of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and John Fernández of the Department of Urban Studies and Planning. “Computer scientists have much to contribute to climate science,” Rivest says. “In particular, the modeling and simulation of climate can benefit from advances in computer science.”

    Rivest is one of many MIT faculty members who have been working in recent years to bring topics in climate, sustainability, and the environment to students in a growing variety of fields. And students have said they want this trend to continue.

    “Sustainability is something that touches all disciplines,” says Megan Xu, a rising senior in biological engineering and advisory chair of the Undergraduate Association Sustainability Committee. “As students who have grown up knowing that climate change is real and witnessed climate disaster after disaster, we know this is a huge problem that needs to be addressed by our generation.”

    Expanding the course catalog

    As education program manager at the MIT Environmental Solutions Initiative, Sarah Meyers has repeatedly had a hand in launching new sustainability classes. She has steered grant money to faculty, brought together instructors, and helped design syllabi — all in the service of giving MIT students the same world-class education in climate and sustainability that they get in science and engineering.

    Her work has given Meyers a bird’s-eye view of MIT’s course offerings in this area. By her count, there are now over 120 undergraduate classes, across 23 academic departments, that teach climate, environment, and sustainability principles.

    “Educating the next generation is the most important way that MIT can have an impact on the world’s environmental challenges,” she says. “MIT students are going to be leaders in their fields, whatever they may be. If they really understand sustainable design practices, if they can balance the needs of all stakeholders to make ethical decisions, then that actually changes the way our world operates and can move humanity towards a more sustainable future.”

    Some sustainability classes are established institutions at MIT. Success stories include 2.00A (Fundamentals of Engineering Design: Explore Space, Sea and Earth), a hands-on engineering class popular with first-year students; and 21W.775 (Writing About Nature and Environmental Issues), which has helped undergraduates fulfill their HASS-H (humanities distribution subject) and CI-H (Communication Intensive subject in the Humanities, Arts, and Social Sciences) graduation requirements for 15 years.

    Expanding this list of classes is an institutional priority. In the recently released Climate Action Plan for the Decade, MIT pledged to recruit at least 20 additional faculty members who will teach climate-related classes.

    “I think it’s easy to find classes if you’re looking for sustainability classes to take,” says Naomi Lutz, a senior in mechanical engineering who helped advise the MIT administration on education measures in the Climate Action Plan. “I usually scroll through the titles of the classes in courses 1, 2, 11, and 12 to see if any are of interest. I also have used the Environment & Sustainability Minor class list to look for sustainability-related classes to take.

    “The coming years are critical for the future of our planet, so it’s important that we all learn about sustainability and think about how to address it,” she adds.

    Working with students’ schedules

    Still, despite all this activity, climate and sustainability are not yet mainstream parts of an MIT education. Last year, a survey of over 800 MIT undergraduates, taken by the Undergraduate Association Sustainability Committee, found that only one in four had ever taken a class related to sustainability. But it doesn’t seem to be from lack of interest in the topic. More than half of those surveyed said that sustainability is a factor in their career planning, and almost 80 percent try to practice sustainability in their daily lives.

    “I’ve often had conversations with students who were surprised to learn there are so many classes available,” says Meyers. “We do need to do a better job communicating about them, and making it as easy as possible to enroll.”

    A recurring challenge is helping students fit sustainability into their plans for graduation, which are often tightly mapped-out.

    “We each only have four years — around 32 to 40 classes — to absorb all that we can from this amazing place,” says Xu. “Many of these classes are mandated to be GIRs [General Institute Requirements] and major requirements. Many students recognize that sustainability is important, but might not have the time to devote an entire class to the topic if it would not count toward their requirements.”

    This was a central focus for the students who were involved in forming education recommendations for the Climate Action Plan. “We propose that more sustainability-related courses or tracks are offered in the most common majors, especially in Course 6 [EECS],” says Lutz. “If students can fulfill major requirements while taking courses that address environmental problems, we believe more students will pursue research and careers related to sustainability.”

    She also recommends that students look into the dozens of climate and sustainability classes that fulfill GIRs. “It’s really easy to take sustainability-related courses that fulfill HASS [Humanities, Arts, and Social Sciences] requirements,” she says. For example, students can meet their HASS-S (social sciences distribution subject) requirement by taking 21H.185 (Environment and History), or fulfill their HASS-A requirement with CMS.374 (Transmedia Art, Extraction and Environmental Justice).

    Classes with impact

    For those students who do seek out sustainability classes early in their MIT careers, the experience can shape their whole education.

    “My first semester at MIT, I took Environment and History, co-taught by professors Susan Solomon and Harriet Ritvo,” says Xu. “It taught me that there is so much more involved than just science and hard facts to solving problems in sustainability and climate. I learned to look at problems with more of a focus on people, which has informed much of the extracurricular work that I’ve gone on to do at MIT.”

    And the faculty, too, sometimes find that teaching in this area opens new doors for them. Rivest, who taught the climate change seminar in Course 6, is now working to build a simplified climate model with his co-instructor Alan Edelman, their teaching assistant Henri Drake, and Professor John Deutch of the Department of Chemistry, who joined the class as a guest lecturer. “I very much enjoyed meeting new colleagues from all around MIT,” Rivest says. “Teaching a class like this fosters connections between computer scientists and climate scientists.”

    Which is why Meyers will continue helping to get these classes off the ground. “We know students think climate is a huge issue for their futures. We know faculty agree with them,” she says. “Everybody wants this to be part of an MIT education. The next step is to really reach out to students and departments to fill the classrooms. That’s the start of a virtuous cycle where enrollment drives more sustainability instruction in every part of MIT.”