More stories

  • New tool predicts flood risk from hurricanes in a warming climate

    With climate change, coastal cities and communities will face more frequent major hurricanes in the coming years. To help prepare coastal cities against future storms, MIT scientists have developed a method to predict how much flooding a coastal community is likely to experience as hurricanes evolve over the next decades.

    When hurricanes make landfall, strong winds whip up salty ocean waters that generate storm surge in coastal regions. As the storms move over land, torrential rainfall can induce further flooding inland. When multiple flood sources such as storm surge and rainfall interact, they can compound a hurricane’s hazards, leading to significantly more flooding than would result from any one source alone. The new study introduces a physics-based method for predicting how the risk of such complex, compound flooding may evolve under a warming climate in coastal cities.

    One example of compound flooding’s impact is the aftermath from Hurricane Sandy in 2012. The storm made landfall on the East Coast of the United States as heavy winds whipped up a towering storm surge that combined with rainfall-driven flooding in some areas to cause historic and devastating floods across New York and New Jersey.

    In their study, the MIT team applied the new compound flood-modeling method to New York City to predict how climate change may influence the risk of compound flooding from Sandy-like hurricanes over the next decades.  

    They found that, in today’s climate, a Sandy-level compound flooding event will likely hit New York City every 150 years. By midcentury, a warmer climate will drive up the frequency of such flooding, to every 60 years. At the end of the century, destructive Sandy-like floods will deluge the city every 30 years — a fivefold increase compared to the present climate.
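The return periods above can also be read as annual exceedance probabilities (p = 1/T, so a 1-in-150-year event has roughly a 0.7 percent chance of striking in any given year). A quick sketch of the arithmetic behind the fivefold figure, with the article's numbers hard-coded for illustration:

```python
# Convert a return period (years) into the annual chance of such an event,
# and compare the article's three climate scenarios.
def annual_probability(return_period_years: float) -> float:
    return 1.0 / return_period_years

scenarios = {"today": 150, "midcentury": 60, "end of century": 30}

for name, period in scenarios.items():
    p = annual_probability(period)
    print(f"{name}: 1-in-{period} years, about {p:.2%} chance per year")

# The "fivefold increase" is simply the ratio of today's return period
# to the end-of-century one:
print("frequency increase factor:", 150 / 30)  # → 5.0
```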

    “Long-term average damages from weather hazards are usually dominated by the rare, intense events like Hurricane Sandy,” says study co-author Kerry Emanuel, professor emeritus of atmospheric science at MIT. “It is important to get these right.”

    While these are sobering projections, the researchers hope the flood forecasts can help city planners prepare and protect against future disasters. “Our methodology equips coastal city authorities and policymakers with essential tools to conduct compound flooding risk assessments from hurricanes in coastal cities at a detailed, granular level, extending to each street or building, in both current and future decades,” says study author Ali Sarhadi, a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences.

    The team’s open-access study appears online today in the Bulletin of the American Meteorological Society. Co-authors include Raphaël Rousseau-Rizzi at MIT’s Lorenz Center, Kyle Mandli at Columbia University, Jeffrey Neal at the University of Bristol, Michael Wiper at the Charles III University of Madrid, and Monika Feldmann at the Swiss Federal Institute of Technology Lausanne.

    The seeds of floods

    To forecast a region’s flood risk, weather modelers typically look to the past. Historical records contain measurements of previous hurricanes’ wind speeds, rainfall, and spatial extent, which scientists use to predict where and how much flooding may occur with coming storms. But Sarhadi believes such historical records are too limited and too brief to predict future hurricanes’ risks.

    “Even if we had lengthy historical records, they wouldn’t be a good guide for future risks because of climate change,” he says. “Climate change is changing the structural characteristics, frequency, intensity, and movement of hurricanes, and we cannot rely on the past.”

    Sarhadi and his colleagues instead looked to predict a region’s risk of hurricane flooding in a changing climate using a physics-based risk assessment methodology. They first paired simulations of hurricane activity with coupled ocean and atmospheric models over time. With the hurricane simulations, developed originally by Emanuel, the researchers virtually scatter tens of thousands of “seeds” of hurricanes into a simulated climate. Most seeds dissipate, while a few grow into hurricane-strength storms, depending on the conditions of the ocean and atmosphere.
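The seeding idea can be caricatured as a Monte Carlo experiment. The sketch below is deliberately simplified: the real downscaling technique uses coupled ocean-atmosphere physics to decide which seeds survive, whereas here survival is reduced to a single invented "favorability" number and a toy probability scaling.

```python
import random

def simulate_storm_count(n_seeds: int, favorability: float, seed: int = 0) -> int:
    """Scatter n_seeds proto-storms; each survives to hurricane strength
    with a probability tied to an (invented) environmental favorability."""
    rng = random.Random(seed)
    survival_probability = 0.002 * favorability  # toy scaling, not physical
    return sum(rng.random() < survival_probability for _ in range(n_seeds))

# Most seeds dissipate; a more favorable simulated climate lets more survive.
print(simulate_storm_count(50_000, favorability=1.0))
print(simulate_storm_count(50_000, favorability=1.5))
```

Running the same experiment under present-day and end-of-century model climates is what lets the researchers compare storm frequency and intensity across eras.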

    When the team drives these hurricane simulations with climate models of ocean and atmospheric conditions under certain global temperature projections, they can see how hurricanes change, for instance in terms of intensity, frequency, and size, under past, current, and future climate conditions.

    The team then sought to precisely predict the level and degree of compound flooding from future hurricanes in coastal cities. The researchers first used rainfall models to simulate rain intensity for a large number of simulated hurricanes, then applied numerical models to hydraulically translate that rainfall intensity into flooding on the ground as the hurricanes make landfall, given a region’s surface and topography characteristics. They also simulated the same hurricanes’ storm surges, using hydrodynamic models to translate each hurricane’s maximum wind speed and sea level pressure into surge height in coastal areas. The simulation further assessed the propagation of ocean waters into coastal areas, causing coastal flooding.

    Then, the team developed a numerical hydrodynamic model to predict how the two sources of hurricane-induced flooding, storm surge and rain-driven flooding, would simultaneously interact through time and space as simulated hurricanes make landfall in coastal regions such as New York City, in both current and future climates.

    “There’s a complex, nonlinear hydrodynamic interaction between saltwater surge-driven flooding and freshwater rainfall-driven flooding that forms compound flooding, which a lot of existing methods ignore,” Sarhadi says. “As a result, they underestimate the risk of compound flooding.”

    Amplified risk

    With their flood-forecasting method in place, the team applied it to a specific test case: New York City. They used the multipronged method to predict the city’s risk of compound flooding from hurricanes, and more specifically from Sandy-like hurricanes, in present and future climates. Their simulations showed that the city’s odds of experiencing Sandy-like flooding will increase significantly over the next decades as the climate warms, from once every 150 years in the current climate, to every 60 years by 2050, and every 30 years by 2099.

    Interestingly, they found that much of this increase in risk has less to do with how hurricanes themselves will change with a warming climate than with how sea levels will rise around the world.

    “In future decades, we will experience sea level rise in coastal areas, and we also incorporated that effect into our models to see how much that would increase the risk of compound flooding,” Sarhadi explains. “And in fact, we see sea level rise is playing a major role in amplifying the risk of compound flooding from hurricanes in New York City.”
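The outsized role of sea level rise has a simple statistical intuition: surge heights have a rapidly decaying tail, so raising the baseline by even half a meter multiplies the chance of exceeding any fixed flood threshold. A toy illustration with an exponential tail follows; the scale parameter, threshold, and rise amount are invented numbers for illustration, not values from the study.

```python
import math

def exceedance_probability(threshold_m: float, scale_m: float) -> float:
    """Toy exponential tail: P(surge > threshold) = exp(-threshold / scale)."""
    return math.exp(-threshold_m / scale_m)

scale = 0.5        # assumed tail scale of surge heights, in meters
threshold = 3.0    # assumed flood threshold above today's sea level
rise = 0.5         # assumed sea level rise, in meters

p_today = exceedance_probability(threshold, scale)
p_future = exceedance_probability(threshold - rise, scale)  # higher baseline

# For an exponential tail the amplification is exp(rise / scale),
# independent of the threshold itself.
print(f"amplification: {p_future / p_today:.2f}x")
```

With these made-up numbers, half a meter of rise nearly triples the exceedance probability, which is why a modest baseline shift can dominate changes in the storms themselves.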

    The team’s methodology can be applied to any coastal city to assess the risk of compound flooding from hurricanes and extratropical storms. With this approach, Sarhadi hopes decision-makers can make informed decisions regarding the implementation of adaptive measures, such as reinforcing coastal defenses to enhance infrastructure and community resilience.

    “Another aspect highlighting the urgency of our research is the projected 25 percent increase in coastal populations by midcentury, leading to heightened exposure to damaging storms,” Sarhadi says. “Additionally, we have trillions of dollars in assets situated in coastal flood-prone areas, necessitating proactive strategies to reduce damages from compound flooding from hurricanes under a warming climate.”

    This research was supported, in part, by Homesite Insurance.

  • The science and art of complex systems

    As a high school student, Gosha Geogdzhayev attended Saturday science classes at Columbia University, including one called The Physics of Climate Change. “They showed us a satellite image of the Earth’s atmosphere, and I thought, ‘Wow, this is so beautiful,’” he recalls. Since then, climate science has been one of his driving interests.

    With the MIT Department of Earth, Atmospheric and Planetary Sciences and the BC3 Climate Grand Challenges project, Geogdzhayev is creating climate model “emulators” in order to localize the large-scale data provided by global climate models (GCMs). As he explains, GCMs can make broad predictions about climate change, but they are not proficient at analyzing impacts in localized areas. However, simpler “emulator” models can learn from GCMs and other data sources to answer specialized questions. The model Geogdzhayev is currently working on will project the frequency of extreme heat events in Nigeria.
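The emulator idea can be sketched with a toy statistical model: learn a cheap relationship between a coarse GCM predictor (say, regional mean summer temperature) and a local quantity of interest (days above some heat threshold), then query the fitted model instead of rerunning the GCM. Everything below, including the synthetic data, the linear form, and the numbers, is invented for illustration and is not Geogdzhayev's actual model.

```python
import random

# Synthetic stand-in for GCM output: (regional mean temp °C, local extreme-heat days).
rng = random.Random(42)
training = [(t, 4.0 * (t - 25.0) + rng.gauss(0, 2)) for t in range(26, 41)]

# Fit a one-variable least-squares line: this cheap fit is the "emulator".
n = len(training)
mean_x = sum(t for t, _ in training) / n
mean_y = sum(d for _, d in training) / n
slope = sum((t - mean_x) * (d - mean_y) for t, d in training) / sum(
    (t - mean_x) ** 2 for t, _ in training
)
intercept = mean_y - slope * mean_x

def emulate_heat_days(regional_temp_c: float) -> float:
    """Cheap local prediction learned from (synthetic) GCM output."""
    return slope * regional_temp_c + intercept

print(round(emulate_heat_days(35.0), 1))
```

Real emulators use far richer statistics, but the division of labor is the same: the expensive model supplies training data once, and the emulator answers localized questions quickly thereafter.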

    A senior majoring in physics, Geogdzhayev hopes that his current and future research will help reshape the scientific approach to studying climate trends. More accurate predictions of climate conditions could have benefits far beyond scientific analysis, and affect the decisions of policymakers, businesspeople, and truly anyone concerned about climate change.

    “I have this fascination with complex systems, and reducing that complexity and picking it apart,” Geogdzhayev says.

    His pursuit of discovery has led him from Berlin, Germany, to Princeton, New Jersey, with stops in between. He has worked with Transsolar KlimaEngineering, NASA, NOAA, FU Berlin, and MIT, including through the MIT Climate Stability Consortium’s Climate Scholars Program, in research positions that explore climate science in different ways. His projects have involved applications such as severe weather alerts, predictions of late seasonal freezes, and eco-friendly building design. 

    The written word

    Originating even earlier than his passion for climate science is Geogdzhayev’s love of writing. He recently discovered original poetry dating back all the way to middle school. In this poetry he found a coincidental throughline to his current life: “There was one poem about climate, actually. It was so bad,” he says, laughing. “But it was cool to see.”

    As a scientist, Geogdzhayev finds that poetry helps quiet his often busy mind. Writing provides a vehicle to understand himself, and therefore to communicate more effectively with others, which he sees as necessary for success in his field.

    “A lot of good work comes from being able to communicate with other people. And poetry is a way for me to flex those muscles. If I can communicate with myself, and if I can communicate myself to others, that is transferable to science,” he says.

    Since last spring Geogdzhayev has attended poetry workshop classes at Harvard University, which he enjoys partly because they nudge him to explore spaces outside of MIT.

    He has contributed prolifically to platforms on campus as well. Since his first year, he has written as a staff blogger for MIT Admissions, creating posts about his life at MIT for prospective students. He has also written for the yearly fashion publication “Infinite Magazine.”

    Merging science and writing, a peer-reviewed article by Geogdzhayev will soon appear in the journal “Physica D: Nonlinear Phenomena.” The piece explores the validity of climate statistics under climate change through an abstract mathematical system.

    Leading with heart

    Geogdzhayev enjoys being a collaborator, but also excels in leadership positions. When he first arrived at MIT, his dorm, Burton Conner, was closed for renovation, and he could not access that living community directly. Once his sophomore year arrived, however, he was quick to volunteer to streamline the process of getting new students involved, and he eventually became floor chair for his living community, Burton 1.

    Following the social stagnation caused by the Covid-19 pandemic and the dorm renovation, he helped rebuild a sense of community in his dorm by planning social events and organizing floor government. He now regards the members of Burton 1 as his closest friends and partners in “general tomfoolery.”

    This sense of leadership is coupled with an affinity for teaching. Geogdzhayev is a peer mentor in the Physics Mentorship Program and taught climate modeling classes to local high school students as a part of SPLASH. He describes these experiences as “very fun” and can imagine himself as a university professor dedicated to both teaching and research.

    Following graduation, Geogdzhayev intends to pursue a PhD in climate science or applied math. “I can see myself working on research for the rest of my life,” he says.

  • Celebrating five years of MIT.nano

    There is vast opportunity for nanoscale innovation to transform the world in positive ways, MIT.nano Director Vladimir Bulović told attendees at the start of the inaugural Nano Summit, posing two questions: “Where are we heading? And what is the next big thing we can develop?”

    “The answer to that puts into perspective our main purpose — and that is to change the world,” Bulović, the Fariborz Maseeh Professor of Emerging Technologies, told an audience of more than 325 in-person and 150 virtual participants gathered for an exploration of nano-related research at MIT and a celebration of MIT.nano’s fifth anniversary.

    Over a decade ago, MIT embarked on a massive project for the ultra-small — building an advanced facility to support research at the nanoscale. Construction of MIT.nano in the heart of MIT’s campus, a process compared to assembling a ship in a bottle, began in 2015, and the facility launched in October 2018.

    Fast forward five years: MIT.nano now contains nearly 170 tools and instruments serving more than 1,200 trained researchers. These individuals come from over 300 principal investigator labs, representing more than 50 MIT departments, labs, and centers. The facility also serves external users from industry, other academic institutions, and over 130 startup and multinational companies.

    A cross section of these faculty and researchers joined industry partners and MIT community members to kick off the first Nano Summit, which is expected to become an annual flagship event for MIT.nano and its industry consortium. Held on Oct. 24, the inaugural conference was co-hosted by the MIT Industrial Liaison Program.

    Six topical sessions highlighted recent developments in quantum science and engineering, materials, advanced electronics, energy, biology, and immersive data technology. The Nano Summit also featured startup ventures and an art exhibition.


    Seeing and manipulating at the nanoscale — and beyond

    “We need to develop new ways of building the next generation of materials,” said Frances Ross, the TDK Professor in Materials Science and Engineering (DMSE). “We need to use electron microscopy to help us understand not only what the structure is after it’s built, but how it came to be. I think the next few years in this piece of the nano realm are going to be really amazing.”

    Speakers in the session “The Next Materials Revolution,” chaired by MIT.nano co-director for Characterization.nano and associate professor in DMSE James LeBeau, highlighted areas in which cutting-edge microscopy provides insights into the behavior of functional materials at the nanoscale, from anti-ferroelectrics to thin-film photovoltaics and 2D materials. They shared images and videos collected using the instruments in MIT.nano’s characterization suites, which were specifically designed and constructed to minimize mechanical vibration and electromagnetic interference.

    Later, in the “Biology and Human Health” session chaired by Boris Magasanik Professor of Biology Thomas Schwartz, biologists echoed the materials scientists, stressing the importance of the ultra-quiet, low-vibration environment in Characterization.nano to obtain high-resolution images of biological structures.

    “Why is MIT.nano important for us?” asked Schwartz. “An important element of biology is to understand the structure of biological macromolecules. We want to get to an atomic resolution of these structures. CryoEM (cryo-electron microscopy) is an excellent method for this. In order to enable the resolution revolution, we had to get these instruments to MIT. For that, MIT.nano was fantastic.”

    Seychelle Vos, the Robert A. Swanson (1969) Career Development Professor of Life Sciences, shared CryoEM images from her lab’s work, followed by biology Associate Professor Joey Davis, who spoke about image processing. When asked about the next stage for CryoEM, Davis said he’s most excited about in-situ tomography, noting that new instruments being designed will improve the current labor-intensive process.

    To chart the future of energy, chemistry associate professor Yogi Surendranath is also using MIT.nano to see what is happening at the nanoscale in his research to use renewable electricity to change carbon dioxide into fuel.

    “MIT.nano has played an immense role, not only in facilitating our ability to make nanostructures, but also to understand nanostructures through advanced imaging capabilities,” said Surendranath. “I see a lot of the future of MIT.nano around the question of how nanostructures evolve and change under the conditions that are relevant to their function. The tools at MIT.nano can help us sort that out.”

    Tech transfer and quantum computing

    The “Advanced Electronics” session chaired by Jesús del Alamo, the Donner Professor of Science in the Department of Electrical Engineering and Computer Science (EECS), brought together industry partners and MIT faculty for a panel discussion on the future of semiconductors and microelectronics. “Excellence in innovation is not enough; we also need to be excellent in transferring these to the marketplace,” said del Alamo. On this point, panelists spoke about strengthening the industry-university connection, as well as the importance of collaborative research environments and of access to advanced facilities, such as MIT.nano, for these environments to thrive.

    The session came on the heels of a startup exhibit in which eleven START.nano companies presented their technologies in health, energy, climate, and virtual reality, among other topics. START.nano, MIT.nano’s hard-tech accelerator, provides participants use of MIT.nano’s facilities at a discounted rate and access to MIT’s startup ecosystem. The program aims to ease hard-tech startups’ transition from the lab to the marketplace, surviving common “valleys of death” as they move from idea to prototype to scaling up.

    When asked about the state of quantum computing in the “Quantum Science and Engineering” session, physics professor Aram Harrow related his response to these startup challenges. “There are quite a few valleys to cross — there are the technical valleys, and then also the commercial valleys.” He spoke about scaling superconducting qubits and trapped-ion qubits, and about the need for more scalable architectures; we have the ingredients, he said, but putting everything together is quite challenging.

    Throughout the session, William Oliver, professor of physics and the Henry Ellis Warren (1894) Professor of Electrical Engineering and Computer Science, asked the panelists how MIT.nano can address challenges in assembly and scalability in quantum science.

    “To harness the power of students to innovate, you really need to allow them to get their hands dirty, try new things, try all their crazy ideas, before this goes into a foundry-level process,” responded Kevin O’Brien, associate professor in EECS. “That’s what my group has been working on at MIT.nano, building these superconducting quantum processors using the state-of-the art fabrication techniques in MIT.nano.”

    Connecting the digital to the physical

    In his reflections on the semiconductor industry, Douglas Carlson, senior vice president for technology at MACOM, stressed connecting the digital world to real-world application. Later, in the “Immersive Data Technology” session, MIT.nano associate director Brian Anthony explained how, at the MIT.nano Immersion Lab, researchers are doing just that.

    “We think about and facilitate work that has the human immersed between hardware, data, and experience,” said Anthony, principal research scientist in mechanical engineering. He spoke about using the capabilities of the Immersion Lab to apply immersive technologies to different areas — health, sports, performance, manufacturing, and education, among others. Speakers in this session gave specific examples in hardware, pediatric health, and opera.

    Anthony connected this third pillar of MIT.nano to the fab and characterization facilities, highlighting how the Immersion Lab supports work conducted in other parts of the building. The Immersion Lab’s strength, he said, is taking novel work being developed inside MIT.nano and bringing it up to the human scale to think about applications and uses.

    Artworks that are scientifically inspired

    The Nano Summit closed with a reception at MIT.nano where guests could explore the facility and gaze through the cleanroom windows, where users were actively conducting research. Attendees were encouraged to visit an exhibition on MIT.nano’s first- and second-floor galleries featuring work by students from the MIT Program in Art, Culture, and Technology (ACT) who were invited to utilize MIT.nano’s tool sets and environments as inspiration for art.

    In his closing remarks, Bulović reflected on the community of people who keep MIT.nano running and who are using the tools to advance their research. “Today we are celebrating the facility and all the work that has been done over the last five years to bring it to where it is today. It is there to function not just as a space, but as an essential part of MIT’s mission in research, innovation, and education. I hope that all of us here today take away a deep appreciation and admiration for those who are leading the journey into the nano age.”

  • Uncovering how biomes respond to climate change

    Before Leila Mirzagholi arrived at MIT’s Department of Civil and Environmental Engineering (CEE) to begin her postdoc appointment, she had spent most of her time in academia building cosmological models to detect properties of gravitational waves in the cosmos.

    But as a member of Assistant Professor César Terrer’s lab in CEE, Mirzagholi uses her physics and mathematical background to improve our understanding of the different factors that influence how much carbon land ecosystems can store under climate change.

    “What was always important to me was thinking about how to solve a problem and putting all the pieces together and building something from scratch,” Mirzagholi says, adding this was one of the reasons that it was possible for her to switch fields — and what drives her today as a climate scientist.

    Growing up in Iran, Mirzagholi knew she wanted to be a scientist from an early age. As a kid, she became captivated by physics, spending most of her free time in a local cultural center that hosted science events. “I remember in that center there was an observatory that held observational tours, and it drew me into science,” says Mirzagholi. She also remembers watching the science fiction film “Contact” as a kid, which features a female scientist who finds evidence of extraterrestrial life and builds a spaceship to make first contact: “After that movie my mind was set on pursuing astrophysics.”

    With the encouragement of her parents to develop a strong mathematical background before pursuing physics, she earned a bachelor’s degree in mathematics from Tehran University. She then completed a one-year master class in mathematics at Utrecht University before completing her PhD in theoretical physics at the Max Planck Institute for Astrophysics in Munich. There, Mirzagholi’s thesis focused on developing cosmological models, with an emphasis on phenomenological aspects such as the propagation of gravitational waves and their imprint on the cosmic microwave background.

    Midway through her PhD, Mirzagholi became discouraged with building models to explain the dynamics of the early universe, because there is little new data. “It starts to get personal and becomes a game of: ‘Is it my model or your model?’” she explains. She grew frustrated not knowing when the models she’d built would ever be tested.

    It was at this time that Mirzagholi started reading more about climate change and climate science. “I was really motivated by the problems and the nature of the problems, especially to make global terrestrial ecology more quantitative,” she says. She also liked the idea of contributing to a global problem that we are all facing. She started to think, “maybe I can do my part, I can work on research beneficial for society and the planet.”

    She made the switch following her PhD and started as a postdoc in the Crowther Lab at ETH Zurich, working on understanding the effects of environmental changes on global vegetation activity. After a stint at ETH, where her colleagues collaborated on projects with the Terrer Lab, she relocated to Cambridge, Massachusetts, to join the lab and CEE.

    Her latest article in Science, which was published in July and co-authored by researchers from ETH, shows how global warming affects the timing of autumn leaf senescence. “It’s important to understand the length of the growing season, and how much the forest or other biomes will have the capacity to take in carbon from the atmosphere.” Using remote sensing data, she was able to understand when the growing season will end under a warming climate. “We distinguish two dates — when autumn is onsetting and the leaves are starting to turn yellow, versus when the leaves are 50 percent yellow — to represent the progression of leaf senescence,” she says.

    In the context of rising temperatures, when the warming happens plays a crucial role. If warming occurs before the summer solstice, it triggers trees to begin their seasonal cycles faster, leading to reduced photosynthesis and an earlier autumn. If the warming happens after the summer solstice, it delays the discoloration process, making autumn last longer. “For every degree Celsius of pre-solstice warming, the onset of leaf senescence advances by 1.9 days, while each degree Celsius of post-solstice warming delays the senescence process by 2.6 days,” she explains. Understanding the timing of autumn leaf senescence is essential for predicting carbon storage capacity when modeling global carbon cycles.
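The two quoted rates combine into a simple linear sketch of the net shift in senescence onset (negative means an earlier autumn). The function below merely encodes the article's per-degree figures; treating the two effects as additive is an assumption for illustration.

```python
# Per-degree rates quoted in the article.
PRE_SOLSTICE_DAYS_PER_C = -1.9   # onset advances 1.9 days per °C of pre-solstice warming
POST_SOLSTICE_DAYS_PER_C = 2.6   # onset is delayed 2.6 days per °C of post-solstice warming

def senescence_shift_days(pre_solstice_warming_c: float,
                          post_solstice_warming_c: float) -> float:
    """Net shift in leaf-senescence onset, in days (+ = later, - = earlier)."""
    return (PRE_SOLSTICE_DAYS_PER_C * pre_solstice_warming_c
            + POST_SOLSTICE_DAYS_PER_C * post_solstice_warming_c)

# One degree of warming concentrated before vs. after the solstice:
print(senescence_shift_days(1.0, 0.0))  # → -1.9 (earlier autumn)
print(senescence_shift_days(0.0, 1.0))  # → 2.6 (later autumn)
```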

    Another problem she’s working on in the Terrer Lab is discovering how deforestation is changing our local climate. How much is it cooling or warming the temperature, and how is the hydrological cycle changing because of deforestation? Investigating these questions will give insight into how much we can depend on natural solutions for carbon uptake to help mitigate climate change. “Quantitatively, we want to put a number to the amount of carbon uptake from various natural solutions, as opposed to other solutions,” she says.

    With a year and a half left in her postdoc appointment, Mirzagholi has begun considering her next career steps. She likes the idea of applying to climate scientist jobs in industry or national labs, as well as tenure-track faculty positions. Whether she pursues a career in academia or industry, Mirzagholi aims to continue conducting fundamental climate science research. Her multidisciplinary background in physics, mathematics, and climate science has given her a multifaceted perspective, which she applies to every research problem.

    “Looking back, I’m grateful for all my educational experiences from spending time in the cultural center as a kid, my background in physics, the support from colleagues at the Crowther lab at ETH who facilitated my transition from physics to ecology, and now working at MIT alongside Professor Terrer, because it’s shaped my career path and the researcher I am today.”

  • Fast-tracking fusion energy’s arrival with AI and accessibility

    As the impacts of climate change continue to grow, so does interest in fusion’s potential as a clean energy source. While fusion reactions have been studied in laboratories since the 1930s, there are still many critical questions scientists must answer to make fusion power a reality, and time is of the essence. As part of its strategy to accelerate fusion energy’s arrival and reach carbon neutrality by 2050, the U.S. Department of Energy (DoE) has announced new funding for a project led by researchers at MIT’s Plasma Science and Fusion Center (PSFC) and four collaborating institutions.

    Cristina Rea, a research scientist and group leader at the PSFC, will serve as the primary investigator for the newly funded three-year collaboration to pilot the integration of fusion data into a system that can be read by AI-powered tools. The PSFC, together with scientists from the College of William and Mary, the University of Wisconsin at Madison, Auburn University, and the nonprofit HDF Group, plan to create a holistic fusion data platform, the elements of which could offer unprecedented access for researchers, especially underrepresented students. The project aims to encourage diverse participation in fusion and data science, both in academia and the workforce, through outreach programs led by the group’s co-investigators, of whom four out of five are women. 

    The DoE’s award, part of a $29 million funding package for seven projects across 19 institutions, will support the group’s efforts to distribute data produced by fusion devices like the PSFC’s Alcator C-Mod, a donut-shaped “tokamak” that utilized powerful magnets to control and confine fusion reactions. Alcator C-Mod operated from 1991 to 2016 and its data are still being studied, thanks in part to the PSFC’s commitment to the free exchange of knowledge.

    Currently, there are nearly 50 public experimental magnetic confinement-type fusion devices; however, both historical and current data from these devices can be difficult to access. Some fusion databases require signing user agreements, and not all data are catalogued and organized the same way. Moreover, it can be difficult to leverage machine learning, a class of AI tools, for data analysis and to enable scientific discovery without time-consuming data reorganization. The result is fewer scientists working on fusion, greater barriers to discovery, and a bottleneck in harnessing AI to accelerate progress.

    The project’s proposed data platform addresses technical barriers by being FAIR — Findable, Accessible, Interoperable, Reusable — and by adhering to UNESCO’s Open Science (OS) recommendations to improve the transparency and inclusivity of science; all of the researchers’ deliverables will adhere to FAIR and OS principles, as required by the DoE. The platform’s databases will be built using MDSplusML, an upgraded version of the MDSplus open-source software developed by PSFC researchers in the 1980s to catalogue the results of Alcator C-Mod’s experiments. Today, nearly 40 fusion research institutes use MDSplus to store and provide external access to their fusion data. The release of MDSplusML aims to continue that legacy of open collaboration.
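A minimal sketch of what "findable" means in practice: each dataset record carries a persistent identifier and standardized, machine-readable metadata, so a script can locate relevant experiments instead of a human navigating ad-hoc file layouts. The record fields, the fake identifiers, and the tiny in-memory catalog below are invented for illustration; this is not the MDSplusML schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ShotRecord:
    persistent_id: str      # stable identifier: the "F" (findable) in FAIR
    device: str
    year: int
    diagnostics: tuple      # standardized metadata enables programmatic search

# Invented example records; the IDs are placeholders, not real DOIs.
catalog = [
    ShotRecord("doi:10.0000/fake-cmod-0001", "Alcator C-Mod", 2016,
               ("magnetics", "interferometry")),
    ShotRecord("doi:10.0000/fake-cmod-0002", "Alcator C-Mod", 2012,
               ("magnetics",)),
]

def find_shots(device: str, diagnostic: str) -> list:
    """Filter by shared metadata fields rather than per-lab file conventions."""
    return [r.persistent_id for r in catalog
            if r.device == device and diagnostic in r.diagnostics]

print(find_shots("Alcator C-Mod", "interferometry"))
```

The same filtering, run against a federated catalog rather than a local list, is what makes machine-learning pipelines across many devices practical.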

    The researchers intend to address barriers to participation for women and disadvantaged groups not only by improving general access to fusion data, but also through a subsidized summer school that will focus on topics at the intersection of fusion and machine learning, which will be held at William and Mary for the next three years.

    Of the importance of their research, Rea says, “This project is about responding to the fusion community’s needs and setting ourselves up for success. Scientific advancements in fusion are enabled via multidisciplinary collaboration and cross-pollination, so accessibility is absolutely essential. I think we all understand now that diverse communities have more diverse ideas, and they allow faster problem-solving.”

    The collaboration’s work also aligns with vital areas of research identified in the International Atomic Energy Agency’s “AI for Fusion” Coordinated Research Project (CRP). Rea was selected as the technical coordinator for the IAEA’s CRP emphasizing community engagement and knowledge access to accelerate fusion research and development. In a letter of support written for the group’s proposed project, the IAEA stated that, “the work [the researchers] will carry out […] will be beneficial not only to our CRP but also to the international fusion community in large.”

    PSFC Director and Hitachi America Professor of Engineering Dennis Whyte adds, “I am thrilled to see PSFC and our collaborators be at the forefront of applying new AI tools while simultaneously encouraging and enabling extraction of critical data from our experiments.”

“Having the opportunity to lead such an important project is extremely meaningful, and I feel a responsibility to show that women are leaders in STEM,” says Rea. “We have an incredible team, strongly motivated to improve our fusion ecosystem and to contribute to making fusion energy a reality.”

  • in

    Simple superconducting device could dramatically cut energy use in computing, other applications

MIT scientists and their colleagues have created a simple superconducting device that could transfer current through electronic devices much more efficiently than is possible today. As a result, the new diode, a kind of switch, could dramatically cut the amount of energy used in high-power computing systems, a growing problem that is projected to worsen. Even though it is in the early stages of development, the diode is more than twice as efficient as similar ones reported by others. It could even be integral to emerging quantum computing technologies.

    The work, which is reported in the July 13 online issue of Physical Review Letters, is also the subject of a news story in Physics Magazine.

    “This paper showcases that the superconducting diode is an entirely solved problem from an engineering perspective,” says Philip Moll, director of the Max Planck Institute for the Structure and Dynamics of Matter in Germany. Moll was not involved in the work. “The beauty of [this] work is that [Moodera and colleagues] obtained record efficiencies without even trying [and] their structures are far from optimized yet.”

    “Our engineering of a superconducting diode effect that is robust and can operate over a wide temperature range in simple systems can potentially open the door for novel technologies,” says Jagadeesh Moodera, leader of the current work and a senior research scientist in MIT’s Department of Physics. Moodera is also affiliated with the Materials Research Laboratory, the Francis Bitter Magnet Laboratory, and the Plasma Science and Fusion Center (PSFC).

    The nanoscopic rectangular diode — about 1,000 times thinner than the diameter of a human hair — is easily scalable. Millions could be produced on a single silicon wafer.

    Toward a superconducting switch

    Diodes, devices that allow current to travel easily in one direction but not in the reverse, are ubiquitous in computing systems. Modern semiconductor computer chips contain billions of diode-like devices known as transistors. However, these devices can get very hot due to electrical resistance, requiring vast amounts of energy to cool the high-power systems in the data centers behind myriad modern technologies, including cloud computing. According to a 2018 news feature in Nature, these systems could use nearly 20 percent of the world’s power in 10 years.

    As a result, work toward creating diodes made of superconductors has been a hot topic in condensed matter physics. That’s because superconductors transmit current with no resistance at all below a certain low temperature (the critical temperature), and are therefore much more efficient than their semiconducting cousins, which have noticeable energy loss in the form of heat.

    Until now, however, other approaches to the problem have involved much more complicated physics. “The effect we found is due [in part] to a ubiquitous property of superconductors that can be realized in a very simple, straightforward manner. It just stares you in the face,” says Moodera.

    Says Moll of the Max Planck Institute, “The work is an important counterpoint to the current fashion to associate superconducting diodes [with] exotic physics, such as finite-momentum pairing states. While in reality, a superconducting diode is a common and widespread phenomenon present in classical materials, as a result of certain broken symmetries.”

    A somewhat serendipitous discovery

    In 2020 Moodera and colleagues observed evidence of an exotic particle pair known as Majorana fermions. These particle pairs could lead to a new family of topological qubits, the building blocks of quantum computers. While pondering approaches to creating superconducting diodes, the team realized that the material platform they developed for the Majorana work might also be applied to the diode problem.

    They were right. Using that general platform, they developed different iterations of superconducting diodes, each more efficient than the last. The first, for example, consisted of a nanoscopically thin layer of vanadium, a superconductor, which was patterned into a structure common to electronics (the Hall bar). When they applied a tiny magnetic field comparable to the Earth’s magnetic field, they saw the diode effect — a giant polarity dependence for current flow.

    They then created another diode, this time layering a superconductor with a ferromagnet (a ferromagnetic insulator in their case), a material that produces its own tiny magnetic field. After applying a tiny magnetic field to magnetize the ferromagnet so that it produces its own field, they found an even bigger diode effect that was stable even after the original magnetic field was turned off.

    Ubiquitous properties

    The team went on to figure out what was happening.

In addition to transmitting current with no resistance, superconductors also have other, less well-known but equally ubiquitous properties. For example, they don’t like magnetic fields getting inside. When exposed to a small magnetic field, superconductors produce an internal supercurrent whose magnetic flux cancels the external field, thereby maintaining their superconducting state. This phenomenon, known as the Meissner screening effect, can be thought of as akin to our bodies’ immune system releasing antibodies to fight infection by bacteria and other pathogens. It works, however, only up to some limit; just as an immune response can be overwhelmed, superconductors cannot entirely keep out large magnetic fields.
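The strength of this screening can be made quantitative with the standard London description, in which a weak applied field decays exponentially inside the superconductor — a textbook result included here for orientation, not a formula from the paper:

```latex
% London screening: a weak applied field B_0 decays exponentially
% with depth x inside the superconductor.
\nabla^2 \mathbf{B} = \frac{\mathbf{B}}{\lambda_L^2}
\quad\Longrightarrow\quad
B(x) = B_0\, e^{-x/\lambda_L},
\qquad
\lambda_L = \sqrt{\frac{m}{\mu_0\, n_s\, e^2}}
```

Here $\lambda_L$ is the London penetration depth, set by the electron mass $m$, charge $e$, and the density $n_s$ of superconducting carriers; fields much larger than the material can screen over this depth destroy superconductivity.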

    The diodes the team created make use of this universal Meissner screening effect. The tiny magnetic field they applied — either directly, or through the adjacent ferromagnetic layer — activates the material’s screening current mechanism for expelling the external magnetic field and maintaining superconductivity.

    The team also found that another key factor in optimizing these superconductor diodes is tiny differences between the two sides, or edges, of the diode devices. These differences “create some sort of asymmetry in the way the magnetic field enters the superconductor,” Moodera says.

By engineering their own form of edges on the diodes to optimize these differences — for example, giving one edge sawtooth features while leaving the other edge unaltered — the team found that they could increase the efficiency from 20 percent to more than 50 percent. This discovery opens the door for devices whose edges could be “tuned” for even higher efficiencies, Moodera says.
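For a sense of what those efficiency figures measure: a superconducting diode's efficiency is commonly quantified by the asymmetry between the critical currents in the two flow directions. A minimal sketch of that standard definition (the numbers below are illustrative, not values from the paper):

```python
def diode_efficiency(ic_forward: float, ic_reverse: float) -> float:
    """Superconducting diode efficiency, the commonly used metric:
    eta = (Ic+ - |Ic-|) / (Ic+ + |Ic-|), where Ic+ and Ic- are the
    critical currents in the forward and reverse directions."""
    ic_rev = abs(ic_reverse)
    return (ic_forward - ic_rev) / (ic_forward + ic_rev)

# A symmetric device is not a diode at all (eta = 0); a 3:1 asymmetry
# between the two critical currents corresponds to 50 percent efficiency.
print(diode_efficiency(1.0, 1.0))   # 0.0
print(diode_efficiency(3.0, 1.0))   # 0.5
```

In these units an ideal diode (supercurrent in one direction only) would have an efficiency of 100 percent.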

    In sum, the team discovered that the edge asymmetries within superconducting diodes, the ubiquitous Meissner screening effect found in all superconductors, and a third property of superconductors known as vortex pinning all came together to produce the diode effect.

    “It is fascinating to see how inconspicuous yet ubiquitous factors can create a significant effect in observing the diode effect,” says Yasen Hou, first author of the paper and a postdoc at the Francis Bitter Magnet Laboratory and the PSFC. “What’s more exciting is that [this work] provides a straightforward approach with huge potential to further improve the efficiency.”

    Christoph Strunk is a professor at the University of Regensburg in Germany. Says Strunk, who was not involved in the research, “the present work demonstrates that the supercurrent in simple superconducting strips can become nonreciprocal. Moreover, when combined with a ferromagnetic insulator, the diode effect can even be maintained in the absence of an external magnetic field. The rectification direction can be programmed by the remnant magnetization of the magnetic layer, which may have high potential for future applications. The work is important and appealing both from the basic research and from the applications point of view.”

    Teenage contributors

    Moodera noted that the two researchers who created the engineered edges did so while still in high school during a summer at Moodera’s lab. They are Ourania Glezakou-Elbert of Richland, Washington, who will be going to Princeton University this fall, and Amith Varambally of Vestavia Hills, Alabama, who will be entering Caltech.

    Says Varambally, “I didn’t know what to expect when I set foot in Boston last summer, and certainly never expected to [be] a coauthor in a Physical Review Letters paper.

    “Every day was exciting, whether I was reading dozens of papers to better understand the diode phenomena, or operating machinery to fabricate new diodes for study, or engaging in conversations with Ourania, Dr. Hou, and Dr. Moodera about our research.

    “I am profoundly grateful to Dr. Moodera and Dr. Hou for providing me with the opportunity to work on such a fascinating project, and to Ourania for being a great research partner and friend.”

    In addition to Moodera and Hou, corresponding authors of the paper are professors Patrick A. Lee of the MIT Department of Physics and Akashdeep Kamra of Autonomous University of Madrid. Other authors from MIT are Liang Fu and Margarita Davydova of the Department of Physics, and Hang Chi, Alessandro Lodesani, and Yingying Wu, all of the Francis Bitter Magnet Laboratory and the Plasma Science and Fusion Center. Chi is also affiliated with the U.S. Army CCDC Research Laboratory.

Authors also include Fabrizio Nichele, Markus F. Ritter, and Daniel Z. Haxwell of IBM Research Europe; Stefan Ilić of the Materials Physics Center (CFM-MPC); and F. Sebastian Bergeret of CFM-MPC and the Donostia International Physics Center.

This work was supported by the Air Force Office of Scientific Research, the Office of Naval Research, the National Science Foundation, and the Army Research Office. Additional funders are the European Research Council, the European Union’s Horizon 2020 Research and Innovation Framework Programme, the Spanish Ministry of Science and Innovation, the A. v. Humboldt Foundation, and the Department of Energy’s Office of Basic Sciences.

  • in

    Study: The ocean’s color is changing as a consequence of climate change

    The ocean’s color has changed significantly over the last 20 years, and the global trend is likely a consequence of human-induced climate change, report scientists at MIT, the National Oceanography Center in the U.K., and elsewhere.  

    In a study appearing today in Nature, the team writes that they have detected changes in ocean color over the past two decades that cannot be explained by natural, year-to-year variability alone. These color shifts, though subtle to the human eye, have occurred over 56 percent of the world’s oceans — an expanse that is larger than the total land area on Earth.

    In particular, the researchers found that tropical ocean regions near the equator have become steadily greener over time. The shift in ocean color indicates that ecosystems within the surface ocean must also be changing, as the color of the ocean is a literal reflection of the organisms and materials in its waters.

    At this point, the researchers cannot say how exactly marine ecosystems are changing to reflect the shifting color. But they are pretty sure of one thing: Human-induced climate change is likely the driver.

    “I’ve been running simulations that have been telling me for years that these changes in ocean color are going to happen,” says study co-author Stephanie Dutkiewicz, senior research scientist in MIT’s Department of Earth, Atmospheric and Planetary Sciences and the Center for Global Change Science. “To actually see it happening for real is not surprising, but frightening. And these changes are consistent with man-induced changes to our climate.”

    “This gives additional evidence of how human activities are affecting life on Earth over a huge spatial extent,” adds lead author B. B. Cael PhD ’19 of the National Oceanography Center in Southampton, U.K. “It’s another way that humans are affecting the biosphere.”

    The study’s co-authors also include Stephanie Henson of the National Oceanography Center, Kelsey Bisson at Oregon State University, and Emmanuel Boss of the University of Maine.

    Above the noise

The ocean’s color is a visual product of whatever lies within its upper layers. Generally, waters that are deep blue reflect very little life, whereas greener waters indicate the presence of ecosystems, mainly phytoplankton — plant-like microbes that are abundant in the upper ocean and that contain the green pigment chlorophyll. The pigment helps plankton harvest sunlight, which they use to capture carbon dioxide from the atmosphere and convert it into sugars.

Phytoplankton are the foundation of the marine food web that sustains progressively more complex organisms, on up to krill, fish, seabirds, and marine mammals. Phytoplankton are also a powerful muscle in the ocean’s ability to capture and store carbon dioxide. Scientists are therefore keen to monitor phytoplankton across the surface oceans and to see how these essential communities might respond to climate change. To do so, scientists have tracked changes in chlorophyll, based on the ratio of how much blue versus green light is reflected from the ocean surface, which can be monitored from space.
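The blue-to-green ratio method works roughly as follows: operational algorithms (for example, NASA's OCx band-ratio family) fit a polynomial in the logarithm of the ratio of blue to green reflectance, so that bluer water maps to low chlorophyll and greener water to high chlorophyll. A sketch of the idea — the coefficients below are illustrative placeholders, not the calibrated operational values:

```python
import math

def chlorophyll_from_ratio(blue_refl: float, green_refl: float,
                           coeffs=(0.3, -2.9, 1.7, -0.6, -1.4)) -> float:
    """Illustrative band-ratio chlorophyll estimate (mg/m^3).

    Follows the general OCx form: a polynomial in the log10 of the
    blue/green reflectance ratio, exponentiated back to concentration.
    The default coefficients are placeholders for illustration only.
    """
    x = math.log10(blue_refl / green_refl)
    log_chl = sum(a * x**i for i, a in enumerate(coeffs))
    return 10 ** log_chl

# Bluer water (high blue/green ratio) yields a lower chlorophyll
# estimate than greener water (ratio near or below 1).
print(chlorophyll_from_ratio(0.010, 0.002) < chlorophyll_from_ratio(0.004, 0.004))  # True
```

The study described below moves beyond this two-band ratio to use the full set of measured colors.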

But around a decade ago, Henson, who is a co-author of the current study, published a paper with colleagues showing that, if scientists were tracking chlorophyll alone, it would take at least 30 years of continuous monitoring to detect any trend driven specifically by climate change. The reason, the team argued, was that the large, natural variations in chlorophyll from year to year would overwhelm any anthropogenic influence on chlorophyll concentrations. It would therefore take several decades to pick out a meaningful, climate-change-driven signal amid the normal noise.

    In 2019, Dutkiewicz and her colleagues published a separate paper, showing through a new model that the natural variation in other ocean colors is much smaller compared to that of chlorophyll. Therefore, any signal of climate-change-driven changes should be easier to detect over the smaller, normal variations of other ocean colors. They predicted that such changes should be apparent within 20, rather than 30 years of monitoring.

    “So I thought, doesn’t it make sense to look for a trend in all these other colors, rather than in chlorophyll alone?” Cael says. “It’s worth looking at the whole spectrum, rather than just trying to estimate one number from bits of the spectrum.”

The power of seven

    In the current study, Cael and the team analyzed measurements of ocean color taken by the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard the Aqua satellite, which has been monitoring ocean color for 21 years. MODIS takes measurements in seven visible wavelengths, including the two colors researchers traditionally use to estimate chlorophyll.

    The differences in color that the satellite picks up are too subtle for human eyes to differentiate. Much of the ocean appears blue to our eye, whereas the true color may contain a mix of subtler wavelengths, from blue to green and even red.

    Cael carried out a statistical analysis using all seven ocean colors measured by the satellite from 2002 to 2022 together. He first looked at how much the seven colors changed from region to region during a given year, which gave him an idea of their natural variations. He then zoomed out to see how these annual variations in ocean color changed over a longer stretch of two decades. This analysis turned up a clear trend, above the normal year-to-year variability.
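The core of such an analysis is asking whether a long-term trend stands out above the year-to-year scatter. A simplified sketch of that idea — an illustration of the general approach, not the study's actual statistical method:

```python
import statistics

def trend_signal_to_noise(yearly_values: list[float]) -> float:
    """Least-squares slope of a yearly series, divided by the standard
    deviation of the detrended residuals (the interannual 'noise').
    A ratio well above 1 suggests a trend beyond natural variability."""
    n = len(yearly_values)
    years = range(n)
    mean_t = sum(years) / n
    mean_y = sum(yearly_values) / n
    slope = (sum((t - mean_t) * (y - mean_y)
                 for t, y in zip(years, yearly_values))
             / sum((t - mean_t) ** 2 for t in years))
    intercept = mean_y - slope * mean_t
    residuals = [y - (intercept + slope * t)
                 for t, y in zip(years, yearly_values)]
    return slope / statistics.stdev(residuals)

# A steady drift of 0.1/year buried in +/-0.02 alternating "noise"
# over a 21-year record stands out clearly above the variability.
series = [0.1 * t + (0.02 if t % 2 else -0.02) for t in range(21)]
print(trend_signal_to_noise(series) > 1.0)  # True
```

Variables with small natural variability (like the additional ocean colors) reach a detectable signal-to-noise ratio in fewer years than noisy ones (like chlorophyll), which is the crux of the 20-versus-30-year argument above.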

    To see whether this trend is related to climate change, he then looked to Dutkiewicz’s model from 2019. This model simulated the Earth’s oceans under two scenarios: one with the addition of greenhouse gases, and the other without it. The greenhouse-gas model predicted that a significant trend should show up within 20 years and that this trend should cause changes to ocean color in about 50 percent of the world’s surface oceans — almost exactly what Cael found in his analysis of real-world satellite data.

    “This suggests that the trends we observe are not a random variation in the Earth system,” Cael says. “This is consistent with anthropogenic climate change.”

    The team’s results show that monitoring ocean colors beyond chlorophyll could give scientists a clearer, faster way to detect climate-change-driven changes to marine ecosystems.

“The color of the oceans has changed,” Dutkiewicz says. “And we can’t say how. But we can say that changes in color reflect changes in plankton communities that will impact everything that feeds on plankton. It will also change how much the ocean will take up carbon, because different types of plankton have different abilities to do that. So, we hope people take this seriously. It’s not only models that are predicting these changes will happen. We can now see it happening, and the ocean is changing.”

This research was supported, in part, by NASA.

  • in

    A new mathematical “blueprint” is accelerating fusion device development

    Developing commercial fusion energy requires scientists to understand sustained processes that have never before existed on Earth. But with so many unknowns, how do we make sure we’re designing a device that can successfully harness fusion power?

    We can fill gaps in our understanding using computational tools like algorithms and data simulations to knit together experimental data and theory, which allows us to optimize fusion device designs before they’re built, saving much time and resources.

Currently, classical supercomputers are used to run simulations of plasma physics and fusion energy scenarios, but to address the many design and operating challenges that remain, more powerful computers are a necessity, and they are of great interest to plasma researchers and physicists.

    Quantum computers’ exponentially faster computing speeds have offered plasma and fusion scientists the tantalizing possibility of vastly accelerated fusion device development. Quantum computers could reconcile a fusion device’s many design parameters — for example, vessel shape, magnet spacing, and component placement — at a greater level of detail, while also completing the tasks faster. However, upgrading to a quantum computer is no simple task.

In a paper, “Dyson maps and unitary evolution for Maxwell equations in tensor dielectric media,” recently published in Physical Review A, Abhay K. Ram, a research scientist at the MIT Plasma Science and Fusion Center (PSFC), and his co-authors Efstratios Koukoutsis, Kyriakos Hizanidis, and George Vahala present a framework that would facilitate the use of quantum computers to study electromagnetic waves in plasma and its manipulation in magnetic confinement fusion devices.

Quantum computers excel at simulating quantum physics phenomena, but many topics in plasma physics are predicated on the classical physics model. A plasma (the “dielectric media” referenced in the paper’s title) consists of many particles — electrons and ions — whose collective behaviors are effectively described using classical statistical physics. In contrast, the quantum effects that dominate at atomic and subatomic scales are averaged out in classical plasma physics.

    Furthermore, the descriptive limitations of quantum mechanics aren’t suited to plasma. In a fusion device, plasmas are heated and manipulated using electromagnetic waves, which are one of the most important and ubiquitous occurrences in the universe. The behaviors of electromagnetic waves, including how waves are formed and interact with their surroundings, are described by Maxwell’s equations — a foundational component of classical plasma physics, and of general physics as well. The standard form of Maxwell’s equations is not expressed in “quantum terms,” however, so implementing the equations on a quantum computer is like fitting a square peg in a round hole: it doesn’t work.

    Consequently, for plasma physicists to take advantage of quantum computing’s power for solving problems, classical physics must be translated into the language of quantum mechanics. The researchers tackled this translational challenge, and in their paper, they reveal that a Dyson map can bridge the translational divide between classical physics and quantum mechanics. Maps are mathematical functions that demonstrate how to take an input from one kind of space and transform it to an output that is meaningful in a different kind of space. In the case of Maxwell’s equations, a Dyson map allows classical electromagnetic waves to be studied in the space utilized by quantum computers. In essence, it reconfigures the square peg so it will fit into the round hole without compromising any physics.
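To see why such a map helps, consider the vacuum case, where Maxwell's equations can already be cast in Schrödinger form — a standard textbook construction using the Riemann–Silberstein vector, shown here only for orientation; the paper's contribution is extending unitary evolution to dielectric (plasma) media, which this sketch does not capture:

```latex
% Riemann–Silberstein combination of the fields (units with c = 1):
\mathbf{F} = \mathbf{E} + i\,\mathbf{B}
\quad\Longrightarrow\quad
i\,\partial_t\, \mathbf{F} = \nabla \times \mathbf{F}
```

This has the Schrödinger form $i\,\partial_t \psi = \hat{H}\psi$ with a Hermitian operator $\hat{H} = \nabla\times$, and therefore unitary time evolution — exactly the kind of evolution a quantum computer natively implements. In a dielectric medium the analogous operator is no longer Hermitian as written, and the Dyson map supplies the transformation that restores a unitary, Schrödinger-like form.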

    The work also gives a blueprint of a quantum circuit encoded with equations expressed in quantum bits (“qubits”) rather than classical bits so the equations may be used on quantum computers. Most importantly, these blueprints can be coded and tested on classical computers.

    “For years we have been studying wave phenomena in plasma physics and fusion energy science using classical techniques. Quantum computing and quantum information science is challenging us to step out of our comfort zone, thereby ensuring that I have not ‘become comfortably numb,’” says Ram, quoting a Pink Floyd song.

The paper’s Dyson map and circuits have put quantum computing power within reach, fast-tracking an improved understanding of plasmas and electromagnetic waves, and putting us that much closer to the ideal fusion device design.