More stories

  • These neurons have food on the brain

    A gooey slice of pizza. A pile of crispy French fries. Ice cream dripping down a cone on a hot summer day. When you look at any of these foods, a specialized part of your visual cortex lights up, according to a new study from MIT neuroscientists.

    This newly discovered population of food-responsive neurons is located in the ventral visual stream, alongside populations that respond specifically to faces, bodies, places, and words. The unexpected finding may reflect the special significance of food in human culture, the researchers say. 

    “Food is central to human social interactions and cultural practices. It’s not just sustenance,” says Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience and a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds, and Machines. “Food is core to so many elements of our cultural identity, religious practice, and social interactions, and many other things that humans do.”

    The findings, based on an analysis of a large public database of human brain responses to a set of 10,000 images, raise many additional questions about how and why this neural population develops. In future studies, the researchers hope to explore how people’s responses to certain foods might differ depending on their likes and dislikes, or their familiarity with certain types of food.

    MIT postdoc Meenakshi Khosla is the lead author of the paper, along with MIT research scientist N. Apurva Ratan Murty. The study appears today in the journal Current Biology.

    Visual categories

    More than 20 years ago, while studying the ventral visual stream, the part of the brain that recognizes objects, Kanwisher discovered cortical regions that respond selectively to faces. Later, she and other scientists discovered other regions that respond selectively to places, bodies, or words. Most of those areas were discovered when researchers specifically set out to look for them. However, that hypothesis-driven approach can limit what you end up finding, Kanwisher says.

    “There could be other things that we might not think to look for,” she says. “And even when we find something, how do we know that that’s actually part of the basic dominant structure of that pathway, and not something we found just because we were looking for it?”

    To try to uncover the fundamental structure of the ventral visual stream, Kanwisher and Khosla decided to analyze a large, publicly available dataset of full-brain functional magnetic resonance imaging (fMRI) responses from eight human subjects as they viewed thousands of images.

    “We wanted to see when we apply a data-driven, hypothesis-free strategy, what kinds of selectivities pop up, and whether those are consistent with what had been discovered before. A second goal was to see if we could discover novel selectivities that either haven’t been hypothesized before, or that have remained hidden due to the lower spatial resolution of fMRI data,” Khosla says.

    To do that, the researchers applied a mathematical method that allows them to discover neural populations that can’t be identified from traditional fMRI data. An fMRI image is made up of many voxels — three-dimensional units that represent a cube of brain tissue. Each voxel contains hundreds of thousands of neurons, and if some of those neurons belong to smaller populations that respond to one type of visual input, their responses may be drowned out by other populations within the same voxel.

    The new analytical method, which Kanwisher’s lab has previously used on fMRI data from the auditory cortex, can tease out responses of neural populations within each voxel of fMRI data.
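    The flavor of this voxel decomposition can be illustrated with a short, hedged sketch. The study's actual algorithm is not reproduced here, but a generic matrix factorization such as non-negative matrix factorization (NMF) captures the idea of recovering a handful of response components whose mixtures make up each voxel. The array sizes and scikit-learn usage below are illustrative assumptions.

    ```python
    # Illustrative sketch only: decompose a voxel-by-image response matrix into a
    # small number of components, in the spirit of the data-driven analysis
    # described above (the paper's actual method differs in its details).
    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(0)
    n_voxels, n_images, n_components = 1000, 2000, 5

    # Placeholder non-negative data standing in for ventral-stream fMRI responses.
    responses = rng.random((n_voxels, n_images))

    model = NMF(n_components=n_components, init="nndsvda", max_iter=300)
    voxel_weights = model.fit_transform(responses)   # (n_voxels, 5): how much each voxel expresses each component
    image_profiles = model.components_               # (5, n_images): each component's response to every image

    # A component is interpreted by inspecting the images that drive it most
    # strongly; for one component in the study, those images turned out to be food.
    top_images = np.argsort(image_profiles, axis=1)[:, ::-1][:, :20]
    ```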

    Using this approach, the researchers found four populations that corresponded to previously identified clusters that respond to faces, places, bodies, and words. “That tells us that this method works, and it tells us that the things that we found before are not just obscure properties of that pathway, but major, dominant properties,” Kanwisher says.

    Intriguingly, a fifth population also emerged, and this one appeared to be selective for images of food.

    “We were first quite puzzled by this because food is not a visually homogenous category,” Khosla says. “Things like apples and corn and pasta all look so unlike each other, yet we found a single population that responds similarly to all these diverse food items.”

    The food-specific population, which the researchers call the ventral food component (VFC), appears to be spread across two clusters of neurons, located on either side of the fusiform face area (FFA), the region that responds selectively to faces. The fact that the food-specific populations are spread out between other category-specific populations may help explain why they have not been seen before, the researchers say.

    “We think that food selectivity had been harder to characterize before because the populations that are selective for food are intermingled with other nearby populations that have distinct responses to other stimulus attributes. The low spatial resolution of fMRI prevents us from seeing this selectivity because the responses of different neural populations get mixed in a voxel,” Khosla says.

    “The technique which the researchers used to identify category-sensitive cells or areas is impressive, and it recovered known category-sensitive systems, making the food category findings most impressive,” says Paul Rozin, a professor of psychology at the University of Pennsylvania, who was not involved in the study. “I can’t imagine a way for the brain to reliably identify the diversity of foods based on sensory features. That makes this all the more fascinating, and likely to clue us in about something really new.”

    Food vs. non-food

    The researchers also used the data to train a computational model of the VFC, based on previous models Murty had developed for the brain’s face and place recognition areas. This allowed the researchers to run additional experiments and predict the responses of the VFC. In one experiment, they fed the model matched images of food and non-food items that looked very similar — for example, a banana and a yellow crescent moon.

    “Those matched stimuli have very similar visual properties, but the main attribute in which they differ is edible versus inedible,” Khosla says. “We could feed those arbitrary stimuli through the predictive model and see whether it would still respond more to food than non-food, without having to collect the fMRI data.”
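    A hedged sketch of that "predict without collecting new fMRI data" step is shown below. The backbone network, preprocessing, and ridge regression are illustrative assumptions standing in for the paper's image-computable model; only the overall recipe (image features in, predicted component response out) follows the description above.

    ```python
    # Illustrative sketch: fit a linear readout from image features to the measured
    # food-component (VFC) response, then query it with new matched images such as
    # a banana versus a yellow crescent moon. The specific backbone and regression
    # are assumptions, not the authors' exact model.
    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from sklearn.linear_model import Ridge

    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    backbone.fc = torch.nn.Identity()   # keep penultimate-layer features
    backbone.eval()

    preprocess = T.Compose([
        T.Resize(256), T.CenterCrop(224), T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    def image_features(pil_images):
        """Return a (n_images, 512) feature matrix for a list of PIL images."""
        with torch.no_grad():
            batch = torch.stack([preprocess(im) for im in pil_images])
            return backbone(batch).numpy()

    # training_images / measured_vfc: the images subjects saw and the component's
    # measured response to each (placeholders; not provided here).
    # readout = Ridge(alpha=1.0).fit(image_features(training_images), measured_vfc)
    # predictions = readout.predict(image_features([banana_img, crescent_moon_img]))
    ```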

    They could also use the computational model to analyze much larger datasets, consisting of millions of images. Those simulations helped to confirm that the VFC is highly selective for images of food.

    From their analysis of the human fMRI data, the researchers found that in some subjects, the VFC responded slightly more to processed foods such as pizza than to unprocessed foods such as apples. In the future, they hope to explore how factors such as familiarity and like or dislike of a particular food might affect individuals’ responses to that food.

    They also hope to study when and how this region becomes specialized during early childhood, and what other parts of the brain it communicates with. Another question is whether this food-selective population will be seen in other animals such as monkeys, who do not attach the cultural significance to food that humans do.

    The research was funded by the National Institutes of Health, the National Eye Institute, and the National Science Foundation through the MIT Center for Brains, Minds, and Machines.

  • Using seismology for groundwater management

    As climate change increases the number of extreme weather events, such as megadroughts, groundwater management is key for sustaining water supply. But current groundwater monitoring tools are either costly or insufficient for deeper aquifers, limiting our ability to monitor and practice sustainable management in populated areas.

    Now, a new paper published in Nature Communications bridges seismology and hydrology with a pilot application that uses seismometers as a cost-effective way to monitor and map groundwater fluctuations.

    “Our measurements are independent from and complementary to traditional observations,” says Shujuan Mao PhD ’21, lead author on the paper. “It provides a new way to dictate groundwater management and evaluate the impact of human activity on shaping underground hydrologic systems.”

    Mao, currently a Thompson Postdoctoral Fellow in the Geophysics department at Stanford University, conducted most of the research during her PhD in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). Other contributors to the paper include EAPS department chair and Schlumberger Professor of Earth and Planetary Sciences Robert van der Hilst, as well as Michel Campillo and Albanne Lecointre from the Institut des Sciences de la Terre in France.

    While there are a few different methods currently used for measuring groundwater, they all come with notable drawbacks. Hydraulic head measurements, taken from wells drilled through the ground and into the aquifers, are expensive and only give information at the specific locations where the wells are placed. Noninvasive techniques based on satellite or airborne sensing lack the sensitivity and resolution needed to observe deeper aquifers.

    Mao proposes using seismometers, which are instruments used to measure ground vibrations such as the waves produced by earthquakes. They can measure seismic velocity, the propagation speed of seismic waves. Seismic velocity is sensitive to the mechanical state of rocks (the ways rocks respond to their physical environment) and can therefore tell us a lot about them.

    The idea of using seismic velocity to characterize property changes in rocks has long been used in laboratory-scale analysis, but only recently have scientists been able to measure it continuously in realistic-scale geological settings. For aquifer monitoring, Mao and her team associate the seismic velocity with the hydraulic property, or the water content, in the rocks.

    Seismic velocity measurements make use of ambient seismic fields, or background noise, recorded by seismometers. “The Earth’s surface is always vibrating, whether due to ocean waves, winds, or human activities,” she explains. “Most of the time those vibrations are really small and are considered ‘noise’ by traditional seismologists. But in recent years scientists have shown that the continuous noise records in fact contain a wealth of information about the properties and structures of the Earth’s interior.”

    To extract useful information from the noise records, Mao and her team used a technique called seismic interferometry, which analyzes wave interference to calculate the seismic velocity of the medium the waves pass through. For their pilot application, Mao and her team applied this analysis to basins in the Metropolitan Los Angeles region, an area suffering from worsening drought and a growing population.
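    A minimal sketch of the two ingredients, ambient-noise cross-correlation and a "stretching" estimate of relative velocity change (dv/v), is shown below. The sampling rate, window length, and synthetic data are illustrative assumptions; the study's actual processing is more involved.

    ```python
    # Illustrative sketch of seismic noise interferometry: cross-correlate noise
    # recorded at two stations, then estimate the relative velocity change (dv/v)
    # between a reference and a later correlation using the stretching method.
    import numpy as np
    from scipy.signal import correlate

    def noise_cross_correlation(trace_a, trace_b):
        """Cross-correlation of two continuous noise records; stacking many such
        windows approximates the impulse response between the two stations."""
        a = trace_a - trace_a.mean()
        b = trace_b - trace_b.mean()
        cc = correlate(a, b, mode="full")
        return cc / np.max(np.abs(cc))

    def dv_over_v(reference_cc, current_cc, max_stretch=0.02, n_steps=201):
        """Stretching method: find the time-axis dilation that best maps the
        current correlation onto the reference (using dv/v ~ -dt/t)."""
        t = np.arange(len(reference_cc), dtype=float)
        best_eps, best_corr = 0.0, -np.inf
        for eps in np.linspace(-max_stretch, max_stretch, n_steps):
            stretched = np.interp(t, t * (1.0 + eps), current_cc)
            corr = np.corrcoef(reference_cc, stretched)[0, 1]
            if corr > best_corr:
                best_eps, best_corr = eps, corr
        return -best_eps

    # Synthetic noise standing in for one hour of records at two stations (20 Hz).
    rng = np.random.default_rng(1)
    fs = 20.0
    station_a = rng.standard_normal(int(3600 * fs))
    station_b = rng.standard_normal(int(3600 * fs))
    reference = noise_cross_correlation(station_a, station_b)
    ```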

    By doing this, Mao and her team were able to see how the aquifers changed physically over time at high resolution. Their seismic velocity measurements agreed with measurements taken from hydraulic heads over the last 20 years, and the images matched the satellite data very well. They could also see differences in how groundwater storage changed between counties that use different water pumping practices, which is important for developing water management protocols.

    Mao also calls using the seismometers a “buy-one get-one free” deal, since seismometers are already in use for earthquake and tectonic studies not just across California, but worldwide, and could help “avoid the expensive cost of drilling and maintaining dedicated groundwater monitoring wells,” she says.

    Mao emphasizes that this study is just the beginning of exploring possible applications of seismic noise interferometry in this way. It can be used to monitor other near-surface systems, such as geothermal or volcanic systems, and Mao is currently applying it to oil and gas fields. But in places like California, which is currently experiencing a megadrought and relies on groundwater for a large portion of its water needs, this kind of information is key for sustainable water management.

    “It’s really important, especially now, to characterize these changes in groundwater storage so that we can promote data-informed policymaking to help them thrive under increasing water stress,” she says.

    This study was funded, in part, by the European Research Council, with additional support from the Thompson Fellowship at Stanford University.

  • Fusion’s newest ambassador

    When high school senior Tuba Balta emailed MIT Plasma Science and Fusion Center (PSFC) Director Dennis Whyte in February, she was not certain she would get a response. As part of her final semester at BASIS Charter School, in Washington, she had been searching unsuccessfully for someone to sponsor an internship in fusion energy, a topic that had recently begun to fascinate her because “it’s not figured out yet.” Time was running out if she was to include the internship as part of her senior project.

    “I never say ‘no’ to a student,” says Whyte, who felt she could provide a youthful perspective on communicating the science of fusion to the general public.

    Posters explaining the basics of fusion science were being considered for the walls of a PSFC lounge area, a space used to welcome visitors who might not know much about the center’s focus: What is fusion? What is plasma? What is magnetic confinement fusion? What is a tokamak?

    Why couldn’t Balta be tasked with coming up with text for these posters, written specifically to be understandable, even intriguing, to her peers?

    Meeting the team

    Although most of the internship would be virtual, Balta visited MIT to meet Whyte and others who would guide her progress. A tour of the center showed her the past and future of the PSFC, one lab area revealing on her left the remains of the decades-long Alcator C-Mod tokamak experiment and on her right the testing area for new superconducting magnets crucial to SPARC, designed in collaboration with MIT spinoff Commonwealth Fusion Systems.

    With Whyte, graduate student Rachel Bielajew, and Outreach Coordinator Paul Rivenberg guiding her content and style, Balta focused on one of eight posters each week. Her school also required her to keep a weekly blog of her progress, detailing what she was learning in the process of creating the posters.

    Finding her voice

    Balta admits that she was not looking forward to this part of the school assignment. But she decided to have fun with it, adopting an enthusiastic and conversational tone, as if she were sitting with friends around a lunch table. Each week, she was able to work out what she was composing for her posters and her final project by trying it out on her friends in the blog.

    Her posts won praise from her schoolmates for their clarity, as when in Week 3 she explained the concept of turbulence as it relates to fusion research, sending her readers to their kitchen faucets to experiment with the pressure and velocity of running tap water.

    The voice she found through her blog served her well during her final presentation about fusion at a school expo for classmates, parents, and the general public.

    “Most people are intimidated by the topic, which they shouldn’t be,” says Balta. “And it just made me happy to help other people understand it.”

    Her favorite part of the internship? “Getting to talk to people whose papers I was reading and ask them questions. Because when it comes to fusion, you can’t just look it up on Google.”

    Awaiting her first year at the University of Chicago, Balta reflects on the team spirit she experienced in communicating with researchers at the PSFC.

    “I think that was one of my big takeaways,” she says, “that you have to work together. And you should, because you’re always going to be missing some piece of information; but there’s always going to be somebody else who has that piece, and we can all help each other out.”

  • Four researchers with MIT ties earn Schmidt Science Fellowships

    Four researchers with MIT ties — Juncal Arbelaiz, Xiangkun (Elvis) Cao, Sandya Subramanian, and Hannah Zlotnick ’17 — have been honored with competitive Schmidt Science Fellowships.

    Created in 2017, the fellows program aims to bring together the world’s brightest minds “to solve society’s toughest challenges.”

    The four MIT-affiliated researchers are among 29 Schmidt Science Fellows from around the world who will receive postdoctoral support for either one or two years with an annual stipend of $100,000, along with individualized mentoring and participation in the program’s Global Meeting Series. The fellows will also have opportunities to engage with thought-leaders from science, business, policy, and society. According to the award announcement, the fellows are expected to pursue research that shifts from the focus of their PhDs, to help expand and enhance their futures as scientific leaders.

    Juncal Arbelaiz is a PhD candidate in applied mathematics at MIT, who is completing her doctorate this summer. Her doctoral research at MIT is advised by Ali Jadbabaie, the JR East Professor of Engineering and head of the Department of Civil and Environmental Engineering; Anette Hosoi, the Neil and Jane Pappalardo Professor of Mechanical Engineering and associate dean of the School of Engineering; and Bassam Bamieh, professor of mechanical engineering and associate director of the Center for Control, Dynamical Systems, and Computation at the University of California at Santa Barbara. Arbelaiz’s research revolves around the design of optimal decentralized intelligence for spatially-distributed dynamical systems.

    “I cannot think of a better way to start my independent scientific career. I feel very excited and grateful for this opportunity,” says Arbelaiz. With her fellowship, she will enlist systems biology to explore how the nervous system encodes and processes sensory information to address future safety-critical artificial intelligence applications. “The Schmidt Science Fellowship will provide me with a unique opportunity to work at the intersection of biological and machine intelligence for two years and will be a steppingstone towards my longer-term objective of becoming a researcher in bio-inspired machine intelligence,” she says.

    Xiangkun (Elvis) Cao is currently a postdoc in the lab of T. Alan Hatton, the Ralph Landau Professor in Chemical Engineering, and an Impact Fellow at the MIT Climate and Sustainability Consortium. Cao received his PhD in mechanical engineering from Cornell University in 2021, during which he focused on microscopic precision in the simultaneous delivery of light and fluids by optofluidics, with advances relevant to health and sustainability applications. As a Schmidt Science Fellow, he plans to be co-advised by Hatton on carbon capture, and Ted Sargent, professor of chemistry at Northwestern University, on carbon utilization. Cao is passionate about integrated carbon capture and utilization (CCU) from molecular to process levels, machine learning to inspire smart CCU, and the nexus of technology, business, and policy for CCU.

    “The Schmidt Science Fellowship provides the perfect opportunity for me to work across disciplines to study integrated carbon capture and utilization from molecular to process levels,” Cao explains. “My vision is that by integrating carbon capture and utilization, we can concurrently make scientific discoveries and unlock economic opportunities while mitigating global climate change. This way, we can turn our carbon liability into an asset.”

    Sandya Subramanian, a 2021 PhD graduate of the Harvard-MIT Program in Health Sciences and Technology (HST) in the area of medical engineering and medical physics, is currently a postdoc at Stanford Data Science. She is focused on the topics of biomedical engineering, statistics, machine learning, neuroscience, and health care. Her research is on developing new technologies and methods to study the interactions between the brain, the autonomic nervous system, and the gut. “I’m extremely honored to receive the Schmidt Science Fellowship and to join the Schmidt community of leaders and scholars,” says Subramanian. “I’ve heard so much about the fellowship and the fact that it can open doors and give people confidence to pursue challenging or unique paths.”

    According to Subramanian, the autonomic nervous system and its interactions with other body systems are poorly understood but thought to be involved in several disorders, such as functional gastrointestinal disorders, Parkinson’s disease, diabetes, migraines, and eating disorders. The goal of her research is to improve our ability to monitor and quantify these physiologic processes. “I’m really interested in understanding how we can use physiological monitoring technologies to inform clinical decision-making, especially around the autonomic nervous system, and I look forward to continuing the work that I’ve recently started at Stanford as Schmidt Science Fellow,” she says. “A huge thank you to all of the mentors, colleagues, friends, and leaders I had the pleasure of meeting and working with at HST and MIT; I couldn’t have done this without everything I learned there.”

    Hannah Zlotnick ’17 attended MIT for her undergraduate studies, majoring in biological engineering with a minor in mechanical engineering. At MIT, Zlotnick was a student-athlete on the women’s varsity soccer team, a UROP student in Alan Grodzinsky’s laboratory, and a member of Pi Beta Phi. For her PhD, Zlotnick attended the University of Pennsylvania, and worked in Robert Mauck’s laboratory within the departments of Bioengineering and Orthopaedic Surgery.

    Zlotnick’s PhD research focused on harnessing remote forces, such as magnetism or gravity, to enhance engineered cartilage and osteochondral repair both in vitro and in large animal models. Zlotnick now plans to pivot to the field of biofabrication to create tissue models of the knee joint to assess potential therapeutics for osteoarthritis. “I am humbled to be a part of the Schmidt Science Fellows community, and excited to venture into the field of biofabrication,” Zlotnick says. “Hopefully this work uncovers new therapies for patients with inflammatory joint diseases.”

  • Kerry Emanuel: A climate scientist and meteorologist in the eye of the storm

    Kerry Emanuel once joked that whenever he retired, he would start a “hurricane safari” so other people could experience what it’s like to fly into the eye of a hurricane.

    “All of a sudden, the turbulence stops, the sun comes out, bright sunshine, and it’s amazingly calm. And you’re in this grand stadium [of clouds miles high],” he says. “It’s quite an experience.”

    While the hurricane safari is unlikely to come to fruition — “You can’t just conjure up a hurricane,” he explains — Emanuel, a world-leading expert on links between hurricanes and climate change, is retiring from teaching in the Department of Earth, Atmospheric and Planetary Sciences (EAPS) at MIT after a more than 40-year career.

    Best known for his foundational contributions to the science of tropical cyclones, climate, and links between them, Emanuel has also been a prominent voice in public debates on climate change, and what we should do about it.

    “Kerry has had an enormous effect on the world through the students and junior scientists he has trained,” says William Boos PhD ’08, an atmospheric scientist at the University of California at Berkeley. “He’s a brilliant enough scientist and theoretician that he didn’t need any of us to accomplish what he has, but he genuinely cares about educating new generations of scientists and helping to launch their careers.”

    In recognition of Emanuel’s teaching career and contributions to science, a symposium was held in his honor at MIT on June 21 and 22, organized by several of his former students and collaborators, including Boos. Research presented at the symposium focused on the many fields influenced by Emanuel’s more than 200 published research papers — on everything from forecasting the risks posed by tropical cyclones to understanding how rainfall is produced by continent-sized patterns of atmospheric circulation.

    Emanuel’s career observing perturbations of Earth’s atmosphere started earlier than he can remember. “According to my older brother, from the age of 2, I would crawl to the window whenever there was a thunderstorm,” he says. At first, those were the rolling thunderheads of the Midwest where he grew up, then it was the edges of hurricanes during a few teenage years in Florida. Eventually, he would find himself watching from the very eye of the storm, both physically and mathematically.

    Emanuel attended MIT both as an undergraduate studying Earth and planetary sciences, and for his PhD in meteorology, writing a dissertation on thunderstorms that form ahead of cold fronts. Within the department, he worked with some of the central figures of modern meteorology such as Jule Charney, Fred Sanders, and Edward Lorenz — the founder of chaos theory.

    After receiving his PhD in 1978, Emanuel joined the faculty of the University of California at Los Angeles. During this period, he also took a semester sabbatical to film the wind speeds of tornadoes in Texas and Oklahoma. After three years, he returned to MIT and joined the Department of Meteorology in 1981. Two years later, the department merged with Earth and Planetary Sciences to form EAPS as it is known today, and where Emanuel has remained ever since.

    At MIT, he shifted scales. The thunderstorms and tornadoes that had been the focus of Emanuel’s research up to then were local atmospheric phenomena, or “mesoscale” in the language of meteorologists. The larger “synoptic scale” storms that are hurricanes blew into Emanuel’s research when as a young faculty member he was asked to teach a class in tropical meteorology; in prepping for the class, Emanuel found his notes on hurricanes from graduate school no longer made sense.

    “I realized I didn’t understand them because they couldn’t have been correct,” he says. “And so I set out to try to find a much better theoretical formulation for hurricanes.”

    He soon made two important contributions. In 1986, his paper “An Air-Sea Interaction Theory for Tropical Cyclones. Part 1: Steady-State Maintenance” developed a new theory for upper limits of hurricane intensity given atmospheric conditions. This work in turn led to even larger-scale questions to address. “That upper bound had to be dependent on climate, and it was likely to go up if we were to warm the climate,” Emanuel says — a phenomenon he explored in another paper, “The Dependence of Hurricane Intensity on Climate,” which showed how warming sea surface temperatures and changing atmospheric conditions from a warming climate would make hurricanes more destructive.

    “In my view, this is among the most remarkable achievements in theoretical geophysics,” says Adam Sobel PhD ’98, an atmospheric scientist at Columbia University who got to know Emanuel after he graduated and became interested in tropical meteorology. “From first principles, using only pencil-and-paper analysis and physical reasoning, he derives a quantitative bound on hurricane intensity that has held up well over decades of comparison to observations” and underpins current methods of predicting hurricane intensity and how it changes with climate.
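    For readers who want the shape of that bound, a commonly quoted form of the potential-intensity result from the tropical cyclone literature (the symbols below are standard there and are not spelled out in this article) is:

    $$ V_p^2 = \frac{C_k}{C_D}\,\frac{T_s - T_o}{T_o}\,\left(k_s^* - k_b\right) $$

    where $T_s$ is the sea surface temperature, $T_o$ the outflow temperature aloft, $C_k/C_D$ the ratio of surface exchange coefficients for enthalpy and momentum, and $k_s^* - k_b$ the thermodynamic disequilibrium between the saturation enthalpy of the sea surface and the enthalpy of the overlying boundary-layer air. Warmer oceans raise both $T_s$ and $k_s^*$, which is why the bound, and with it hurricane destructive potential, rises as the climate warms.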

    This and diverse subsequent work led to numerous honors, including membership to the American Philosophical Society, the National Academy of Sciences, and the American Academy of Arts and Sciences.

    Emanuel’s research was never confined to academic circles, however; when politicians and industry leaders voiced loud opposition to the idea that human-caused climate change posed a threat, he spoke up.

    “I felt kind of a duty to try to counter that,” says Emanuel. “I thought it was an interesting challenge to see if you could go out and convince what some people call climate deniers, skeptics, that this was a serious risk and we had to treat it as such.”

    In addition to many public lectures and media appearances discussing climate change, Emanuel penned a book for general audiences titled “What We Know About Climate Change,” as well as a widely read primer on climate change and risk assessment designed to influence business leaders.

    “Kerry has an unmatched physical understanding of tropical climate phenomena,” says Emanuel’s colleague, Susan Solomon, the Lee and Geraldine Martin Professor of Environmental Studies at EAPS. “But he’s also a great communicator and has generously given his time to public outreach. His book ‘What We Know About Climate Change’ is a beautiful piece of work that is readily understandable and has captivated many a non-expert reader.”

    Along with a number of other prominent climate scientists, Emanuel also began advocating for expanding nuclear power as the most rapid path to decarbonizing the world’s energy systems.

    “I think the impediment to nuclear is largely irrational in the United States,” he says. “So, I’ve been trying to fight that just like I’ve been trying to fight climate denial.”

    One lesson Emanuel has taken from his public work on climate change is that skeptical audiences often respond better to issues framed in positive terms than to doom and gloom; he’s found emphasizing the potential benefits rather than the sacrifices involved in the energy transition can engage otherwise wary audiences.

    “It’s really not opposition to science, per se,” he says. “It’s fear of the societal changes they think are required to do something about it.”

    He has also worked to raise awareness about how insurance companies significantly underestimate climate risks in their policies, in particular by basing hurricane risk on unreliable historical data. One recent practical result has been a project by the First Street Foundation to assess the true flood risk of every property in the United States using hurricane models Emanuel developed.

    “I think it’s transformative,” Emanuel says of the project with First Street. “That may prove to be the most substantive research I’ve done.”

    Though Emanuel is retiring from teaching, he has no plans to stop working. “When I say ‘retire’ it’s in quotes,” he says. In 2011, Emanuel and Professor of Geophysics Daniel Rothman founded the Lorenz Center, a climate research center at MIT in honor of Emanuel’s mentor and friend Edward Lorenz. Emanuel will continue to participate in work at the center, which aims to counter what Emanuel describes as a trend away from “curiosity-driven” work in climate science.

    “Even if there were no such thing as global warming, [climate science] would still be a really, really exciting field,” says Emanuel. “There’s so much to understand about climate, about the climates of the past, about the climates of other planets.”

    In addition to work with the Lorenz Center, he’s become interested once again in tornadoes and severe local storms, and understanding whether climate also controls such local phenomena. He’s also involved in two of MIT’s Climate Grand Challenges projects focused on translating climate hazards to explicit financial and health risks — what will bring the dangers of climate change home to people, he says, is for the public to understand more concrete risks, like agricultural failure, water shortages, electricity shortages, and severe weather events. Capturing that will drive the next few years of his work.

    “I’m going to be stepping up research in some respects,” he says, now living full-time at his home in Maine.

    Of course, “retiring” does mean a bit more free time for new pursuits, like learning a language or an instrument, and “rediscovering the art of sailing,” says Emanuel. He’s looking forward to those days on the water, whatever storms are to come.

  • Evan Leppink: Seeking a way to better stabilize the fusion environment

    “Fusion energy was always one of those kind-of sci-fi technologies that you read about,” says nuclear science and engineering PhD candidate Evan Leppink. He’s recalling the time before fusion became a part of his daily hands-on experience at MIT’s Plasma Science and Fusion Center, where he is studying a unique way to drive current in a tokamak plasma using radiofrequency (RF) waves. 

    Now, an award from the U.S. Department of Energy’s (DOE) Office of Science Graduate Student Research (SCGSR) Program will support his work with a 12-month residency at the DIII-D National Fusion Facility in San Diego, California.

    Like all tokamaks, DIII-D generates hot plasma inside a doughnut-shaped vacuum chamber wrapped with magnets. Because plasma will follow magnetic field lines, tokamaks are able to contain the turbulent plasma fuel as it gets hotter and denser, keeping it away from the edges of the chamber where it could damage the wall materials. A key part of the tokamak concept is that part of the magnetic field is created by electrical currents in the plasma itself, which helps to confine and stabilize the configuration. Researchers often launch high-power RF waves into tokamaks to drive that current.

    Leppink will be contributing to research, led by his MIT advisor Steve Wukitch, that pursues launching RF waves in DIII-D using a unique compact antenna placed on the tokamak center column. Typically, antennas are placed inside the tokamak on the outer edge of the doughnut, farthest from the central hole (or column), primarily because access and installation are easier there. This is known as the “low-field side,” because the magnetic field is lower there than at the central column, the “high-field side.” This MIT-led experiment, for the first time, will mount an antenna on the high-field side. There is some theoretical evidence that placing the wave launcher there could improve power penetration and current drive efficiency. And because the plasma environment is less harsh on this side, the antenna will survive longer, a factor important for any future power-producing tokamak.

    Leppink’s work on DIII-D focuses specifically on measuring the density of plasmas generated in the tokamak, for which he developed a “reflectometer.” This small antenna launches microwaves into the plasma, which reflect back to the antenna to be measured. The time that it takes for these microwaves to traverse the plasma provides information about the plasma density, allowing researchers to build up detailed density profiles, data critical for injecting RF power into the plasma.
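    The link between probe frequency and density can be sketched in a few lines of code. For an ordinary-mode (O-mode) wave, reflection occurs where the probing frequency equals the local plasma frequency; the O-mode assumption and the example frequencies below are illustrative, not specifics of Leppink's instrument.

    ```python
    # Illustrative sketch: map reflectometer probe frequencies to the electron
    # density at which an O-mode wave is reflected (probe frequency = plasma
    # frequency). Sweeping the frequency and timing each reflection is what lets
    # a reflectometer build up a density profile.
    import numpy as np

    EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
    M_E  = 9.1093837015e-31   # electron mass, kg
    Q_E  = 1.602176634e-19    # elementary charge, C

    def cutoff_density(freq_hz):
        """Electron density (m^-3) whose plasma frequency equals freq_hz."""
        return (2.0 * np.pi * freq_hz) ** 2 * EPS0 * M_E / Q_E ** 2

    for f in [30e9, 60e9, 90e9]:   # example probe frequencies in Hz
        print(f"{f / 1e9:5.1f} GHz  ->  cutoff density {cutoff_density(f):.2e} m^-3")
    ```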

    “Research shows that when we try to inject these waves into the plasma to drive the current, they can lose power as they travel through the edge region of the tokamak, and can even have problems entering the core of the plasma, where we would most like to direct them,” says Leppink. “My diagnostic will measure that edge region on the high-field side near the launcher in great detail, which provides us a way to directly verify calculations or compare actual results with simulation results.”

    Although focused on his own research, Leppink has excelled at priming other students for success in their studies and research. In 2021 he received the NSE Outstanding Teaching Assistant and Mentorship Award.

    “The highlights of TA’ing for me were the times when I could watch students go from struggling with a difficult topic to fully understanding it, often with just a nudge in the right direction and then allowing them to follow their own intuition the rest of the way,” he says.

    The right direction for Leppink points toward San Diego and RF current drive experiments on DIII-D. He is grateful for the support from the SCGSR, a program created to prepare graduate students like him for science, technology, engineering, or mathematics careers important to the DOE Office of Science mission. It provides graduate thesis research opportunities through extended residency at DOE national laboratories. He has already made several trips to DIII-D, in part to install his reflectometer, and has been impressed with the size of the operation.

    “It takes a little while to kind of compartmentalize everything and say, ‘OK, well, here’s my part of the machine. This is what I’m doing.’ It can definitely be overwhelming at times. But I’m blessed to be able to work on what has been the workhorse tokamak of the United States for the past few decades.”

  • Study finds natural sources of air pollution exceed air quality guidelines in many regions

    Alongside climate change, air pollution is one of the biggest environmental threats to human health. Tiny particles known as particulate matter or PM2.5 (named for their diameter of just 2.5 micrometers or less) are a particularly hazardous type of pollutant. These particles are produced from a variety of sources, including wildfires and the burning of fossil fuels, and can enter our bloodstream, travel deep into our lungs, and cause respiratory and cardiovascular damage. Exposure to particulate matter is responsible for millions of premature deaths globally every year.

    In response to the increasing body of evidence on the detrimental effects of PM2.5, the World Health Organization (WHO) recently updated its air quality guidelines, lowering its recommended annual PM2.5 exposure guideline by 50 percent, from 10 micrograms per cubic meter (μg/m³) to 5 μg/m³. These updated guidelines signify an aggressive attempt to promote the regulation and reduction of anthropogenic emissions in order to improve global air quality.

    A new study by researchers in the MIT Department of Civil and Environmental Engineering explores whether the updated air quality guideline of 5 μg/m³ is realistically attainable across different regions of the world, particularly if anthropogenic emissions are aggressively reduced.

    The first question the researchers wanted to investigate was to what degree moving to a no-fossil-fuel future would help different regions meet this new air quality guideline.

    “The answer we found is that eliminating fossil-fuel emissions would improve air quality around the world, but while this would help some regions come into compliance with the WHO guidelines, for many other regions high contributions from natural sources would impede their ability to meet that target,” says senior author Colette Heald, the Germeshausen Professor in the MIT departments of Civil and Environmental Engineering, and Earth, Atmospheric and Planetary Sciences. 

    The study by Heald, Professor Jesse Kroll, and graduate students Sidhant Pai and Therese Carter, published June 6 in the journal Environmental Science and Technology Letters, finds that over 90 percent of the global population is currently exposed to average annual concentrations that are higher than the recommended guideline. The authors go on to demonstrate that over 50 percent of the world’s population would still be exposed to PM2.5 concentrations that exceed the new air quality guidelines, even in the absence of all anthropogenic emissions.

    This is due to the large natural sources of particulate matter — dust, sea salt, and organics from vegetation — that still exist in the atmosphere when anthropogenic emissions are removed from the air. 

    “If you live in parts of India or northern Africa that are exposed to large amounts of fine dust, it can be challenging to reduce PM2.5 exposures below the new guideline,” says Sidhant Pai, co-lead author and graduate student. “This study challenges us to rethink the value of different emissions abatement controls across different regions and suggests the need for a new generation of air quality metrics that can enable targeted decision-making.”

    The researchers conducted a series of model simulations to explore the viability of achieving the updated PM2.5 guidelines worldwide under different emissions reduction scenarios, using 2019 as a representative baseline year. 

    Their model simulations used a suite of different anthropogenic sources that could be turned on and off to study the contribution of a particular source. For instance, the researchers conducted a simulation that turned off all human-based emissions in order to determine the amount of PM2.5 pollution that could be attributed to natural and fire sources. By analyzing the chemical composition of the PM2.5 aerosol in the atmosphere (e.g., dust, sulfate, and black carbon), the researchers were also able to get a more accurate understanding of the most important PM2.5 sources in a particular region. For example, elevated PM2.5 concentrations in the Amazon were shown to predominantly consist of carbon-containing aerosols from sources like deforestation fires. Conversely, nitrogen-containing aerosols were prominent in Northern Europe, with large contributions from vehicles and fertilizer usage. The two regions would thus require very different policies and methods to improve their air quality. 
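    The attribution logic, taking the difference between a run with all sources on and a run with a source switched off, can be illustrated with made-up numbers. The concentrations below are placeholders for one hypothetical region, not model output from the study.

    ```python
    # Illustrative sketch of source attribution by differencing model runs.
    # All concentrations are annual means in micrograms per cubic meter and are
    # invented for illustration only.
    WHO_GUIDELINE = 5.0

    baseline_2019    = 22.0   # all emission sources on
    no_anthropogenic = 12.0   # same model with human-caused emissions switched off

    anthropogenic_part = baseline_2019 - no_anthropogenic  # attributed to human activity
    natural_and_fires  = no_anthropogenic                  # dust, sea salt, vegetation, fires

    print(f"Anthropogenic contribution: {anthropogenic_part:.1f} ug/m3")
    print(f"Meets the 5 ug/m3 guideline even with zero human emissions? "
          f"{natural_and_fires <= WHO_GUIDELINE}")
    ```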

    “Analyzing particulate pollution across individual chemical species allows for mitigation and adaptation decisions that are specific to the region, as opposed to a one-size-fits-all approach, which can be challenging to execute without an understanding of the underlying importance of different sources,” says Pai. 

    When the WHO air quality guidelines were last updated in 2005, they had a significant impact on environmental policies. Scientists could look at an area that was not in compliance and suggest high-level solutions to improve the region’s air quality. But as the guidelines have tightened, globally-applicable solutions to manage and improve air quality are no longer as evident. 

    “Another benefit of speciating is that some of the particles have different toxicity properties that are correlated to health outcomes,” says Therese Carter, co-lead author and graduate student. “It’s an important area of research that this work can help motivate. Being able to separate out that piece of the puzzle can provide epidemiologists with more insights on the different toxicity levels and the impact of specific particles on human health.”

    The authors view these new findings as an opportunity to expand and iterate on the current guidelines.  

    “Routine and global measurements of the chemical composition of PM2.5 would give policymakers information on what interventions would most effectively improve air quality in any given location,” says Jesse Kroll, a professor in the MIT departments of Civil and Environmental Engineering and Chemical Engineering. “But it would also provide us with new insights into how different chemical species in PM2.5 affect human health.”

    “I hope that as we learn more about the health impacts of these different particles, our work and that of the broader atmospheric chemistry community can help inform strategies to reduce the pollutants that are most harmful to human health,” adds Heald.

  • How the universe got its magnetic field

    When we look out into space, all of the astrophysical objects that we see are embedded in magnetic fields. This is true not only in the neighborhood of stars and planets, but also in the deep space between galaxies and galactic clusters. These fields are weak — typically much weaker than those of a refrigerator magnet — but they are dynamically significant in the sense that they have profound effects on the dynamics of the universe. Despite decades of intense interest and research, the origin of these cosmic magnetic fields remains one of the most profound mysteries in cosmology.

    In previous research, scientists came to understand how turbulence, the churning motion common to fluids of all types, could amplify preexisting magnetic fields through the so-called dynamo process. But this remarkable discovery just pushed the mystery one step deeper. If a turbulent dynamo could only amplify an existing field, where did the “seed” magnetic field come from in the first place?

    We wouldn’t have a complete and self-consistent answer to the origin of astrophysical magnetic fields until we understood how the seed fields arose. New work carried out by MIT graduate student Muni Zhou, her advisor Nuno Loureiro, a professor of nuclear science and engineering at MIT, and colleagues at Princeton University and the University of Colorado at Boulder provides an answer that shows the basic processes that generate a field from a completely unmagnetized state to the point where it is strong enough for the dynamo mechanism to take over and amplify the field to the magnitudes that we observe.

    Magnetic fields are everywhere

    Naturally occurring magnetic fields are seen everywhere in the universe. They were first observed on Earth thousands of years ago, through their interaction with magnetized minerals like lodestone, and used for navigation long before people had any understanding of their nature or origin. Magnetism on the sun was discovered at the beginning of the 20th century by its effects on the spectrum of light that the sun emitted. Since then, more powerful telescopes looking deep into space found that the fields were ubiquitous.

    And while scientists had long learned how to make and use permanent magnets and electromagnets, which had all sorts of practical applications, the natural origins of magnetic fields in the universe remained a mystery. Recent work has provided part of the answer, but many aspects of this question are still under debate.

    Amplifying magnetic fields — the dynamo effect

    Scientists started thinking about this problem by considering the way that electric and magnetic fields were produced in the laboratory. When conductors, like copper wire, move in magnetic fields, electric fields are created. These fields, or voltages, can then drive electrical currents. This is how the electricity that we use every day is produced. Through this process of induction, large generators or “dynamos” convert mechanical energy into the electromagnetic energy that powers our homes and offices. A key feature of dynamos is that they need magnetic fields in order to work.

    But out in the universe, there are no obvious wires or big steel structures, so how do the fields arise? Progress on this problem began about a century ago as scientists pondered the source of the Earth’s magnetic field. By then, studies of the propagation of seismic waves showed that much of the Earth, below the cooler surface layers of the mantle, was liquid, and that there was a core composed of molten nickel and iron. Researchers theorized that the convective motion of this hot, electrically conductive liquid and the rotation of the Earth combined in some way to generate the Earth’s field.

    Eventually, models emerged that showed how the convective motion could amplify an existing field. This is an example of “self-organization” — a feature often seen in complex dynamical systems — where large-scale structures grow spontaneously from small-scale dynamics. But just like in a power station, you needed a magnetic field to make a magnetic field.
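    The "you need a field to make a field" point can be stated compactly with the standard magnetohydrodynamic induction equation (not written out in the article):

    $$ \frac{\partial \mathbf{B}}{\partial t} = \nabla \times \left(\mathbf{v} \times \mathbf{B}\right) + \eta \nabla^2 \mathbf{B} $$

    Both terms on the right are proportional to the magnetic field $\mathbf{B}$ itself, so if $\mathbf{B}$ is exactly zero everywhere it stays zero, no matter how vigorous the fluid motion $\mathbf{v}$ is. A dynamo can only amplify a field that already exists, which is why the seed-field question taken up below matters.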

    A similar process is at work all over the universe. However, in stars and galaxies and in the space between them, the electrically conducting fluid is not molten metal, but plasma — a state of matter that exists at extremely high temperatures where the electrons are ripped away from their atoms. On Earth, plasmas can be seen in lightning or neon lights. In such a medium, the dynamo effect can amplify an existing magnetic field, provided it starts at some minimal level.

    Making the first magnetic fields

    Where does this seed field come from? That’s where the recent work of Zhou and her colleagues, published May 5 in PNAS, comes in. Zhou developed the underlying theory and performed numerical simulations on powerful supercomputers that show how the seed field can be produced and what fundamental processes are at work. An important aspect of the plasma that exists between stars and galaxies is that it is extraordinarily diffuse — typically about one particle per cubic meter. That is a very different situation from the interior of stars, where the particle density is about 30 orders of magnitude higher. The low densities mean that the particles in cosmological plasmas never collide, which has important effects on their behavior that had to be included in the model that these researchers were developing.   

    Calculations performed by the MIT researchers followed the dynamics in these plasmas, which developed from well-ordered waves but became turbulent as the amplitude grew and the interactions became strongly nonlinear. By including detailed effects of the plasma dynamics at small scales on macroscopic astrophysical processes, they demonstrated that the first magnetic fields can be spontaneously produced through generic large-scale motions as simple as sheared flows. Just like the terrestrial examples, mechanical energy was converted into magnetic energy.

    An important output of their computation was the amplitude of the expected spontaneously generated magnetic field. What this showed was that the field amplitude could rise from zero to a level where the plasma is “magnetized” — that is, where the plasma dynamics are strongly affected by the presence of the field. At this point, the traditional dynamo mechanism can take over and raise the fields to the levels that are observed. Thus, their work represents a self-consistent model for the generation of magnetic fields at cosmological scale.

    Professor Ellen Zweibel of the University of Wisconsin at Madison notes that “despite decades of remarkable progress in cosmology, the origin of magnetic fields in the universe remains unknown. It is wonderful to see state-of-the-art plasma physics theory and numerical simulation brought to bear on this fundamental problem.”

    Zhou and co-workers will continue to refine their model and study the handoff from the generation of the seed field to the amplification phase of the dynamo. An important part of their future research will be to determine if the process can work on a time scale consistent with astronomical observations. To quote the researchers, “This work provides the first step in the building of a new paradigm for understanding magnetogenesis in the universe.”

    This work was funded by a National Science Foundation CAREER Award and a Future Investigators in NASA Earth and Space Science and Technology (FINESST) grant.