More stories

  • Q&A: Options for the Diablo Canyon nuclear plant

    The Diablo Canyon nuclear plant in California, the only one still operating in the state, is set to close in 2025. A team of researchers at MIT’s Center for Advanced Nuclear Energy Systems, Abdul Latif Jameel Water and Food Systems Lab, and Center for Energy and Environmental Policy Research; Stanford’s Precourt Energy Institute; and energy analysis firm LucidCatalyst LLC has analyzed the potential benefits the plant could provide if its operation were extended to 2030 or 2045.

    They found that this nuclear plant could simultaneously help to stabilize the state’s electric grid, provide desalinated water to supplement the state’s chronic water shortages, and provide carbon-free hydrogen fuel for transportation. MIT News asked report co-authors Jacopo Buongiorno, the TEPCO Professor of Nuclear Science and Engineering, and John Lienhard, the Jameel Professor of Water and Food, to discuss the group’s findings.

    Q: Your report suggests co-locating a major desalination plant alongside the existing Diablo Canyon power plant. What would be the potential benefits from operating a desalination plant in conjunction with the power plant?

    Lienhard: The cost of desalinated water produced at Diablo Canyon would be lower than for a stand-alone plant because the cost of electricity would be significantly lower and you could take advantage of the existing infrastructure for the intake of seawater and the outfall of brine. Electricity would be cheaper because the location takes advantage of Diablo Canyon’s unique capability to provide low cost, zero-carbon baseload power.

    Depending on the scale at which the desalination plant is built, you could make a very significant impact on the water shortfalls of state and federal projects in the area. In fact, one of the numbers that came out of this study was that an intermediate-sized desalination plant there would produce more fresh water than the highest estimate of the net yield from the proposed Delta Conveyance Project on the Sacramento River. You could get that amount of water at Diablo Canyon for an investment cost less than half as large, and without the associated impacts that would come with the Delta Conveyance Project.

    And the technology envisioned for desalination here, reverse osmosis, is available off the shelf. You can buy this equipment today. In fact, it’s already in use in California and thousands of other places around the world.

    Q: You discuss in the report three potential products from the Diablo Canyon plant: desalinated water, power for the grid, and clean hydrogen. How well can the plant accommodate all of those efforts, and are there advantages to combining them as opposed to doing any one of them separately?

    Buongiorno: California, like many other regions in the world, is facing multiple challenges as it seeks to reduce carbon emissions on a grand scale. First, the wide deployment of intermittent energy sources such as solar and wind creates a great deal of variability on the grid that can be balanced by dispatchable firm power generators like Diablo. So, the first mission for Diablo is to continue to provide reliable, clean electricity to the grid.

    The second challenge is the prolonged drought and water scarcity for the state in general. And one way to address that is water desalination co-located with the nuclear plant at the Diablo site, as John explained.

    The third challenge is related to decarbonizing the transportation sector. A possible approach is replacing conventional cars and trucks with vehicles powered by fuel cells that consume hydrogen. Hydrogen has to be produced from a primary energy source. Nuclear power, through a process called electrolysis, can do that quite efficiently and in a manner that is carbon-free.

    Our economic analysis took into account the expected revenue from selling these multiple products — electricity for the grid, hydrogen for the transportation sector, water for farmers or other local users — as well as the costs associated with deploying the new facilities needed to produce desalinated water and hydrogen. We found that, if Diablo’s operating license was extended until 2035, it would cut carbon emissions by an average of 7 million metric tons a year — a more than 11 percent reduction from 2017 levels — and save ratepayers $2.6 billion in power system costs.

    Further delaying the retirement of Diablo to 2045 would spare 90,000 acres of land that would need to be dedicated to renewable energy production to replace the facility’s capacity, and it would save ratepayers up to $21 billion in power system costs.

    Finally, if Diablo was operated as a polygeneration facility that provides electricity, desalinated water, and hydrogen simultaneously, its value, quantified in terms of dollars per unit electricity generated, could increase by 50 percent.

    Lienhard: Most of the desalination scenarios that we considered did not consume the full electrical output of that plant, meaning that under most scenarios you would continue to make electricity and do something with it, beyond just desalination. I think it’s also important to remember that this power plant produces 15 percent of California’s carbon-free electricity today and is responsible for 8 percent of the state’s total electrical production. In other words, Diablo Canyon is a very large factor in California’s decarbonization. When or if this plant goes offline, the near-term outcome is likely to be increased reliance on natural gas to produce electricity, meaning a rise in California’s carbon emissions.

    Q: This plant in particular has been highly controversial since its inception. What’s your assessment of the plant’s safety beyond its scheduled shutdown, and how do you see this report as contributing to the decision-making about that shutdown?

    Buongiorno: The Diablo Canyon Nuclear Power Plant has a very strong safety record. The potential safety concern for Diablo is related to its proximity to several fault lines. Being located in California, the plant was designed to withstand large earthquakes to begin with. Following the Fukushima accident in 2011, the Nuclear Regulatory Commission reviewed the plant’s ability to withstand external events (e.g., earthquakes, tsunamis, floods, tornadoes, wildfires, hurricanes) of exceptionally rare and severe magnitude. After nine years of assessment the NRC’s conclusion is that “existing seismic capacity or effective flood protection [at Diablo Canyon] will address the unbounded reevaluated hazards.” That is, Diablo was designed and built to withstand even the rarest and strongest earthquakes that are physically possible at this site.

    As an additional level of protection, the plant has been retrofitted with special equipment and procedures meant to ensure reliable cooling of the reactor core and spent fuel pool under a hypothetical scenario in which all design-basis safety systems have been disabled by a severe external event.

    Lienhard: As for the potential impact of this report, PG&E [the California utility] has already made the decision to shut down the plant, and we and others hope that decision will be revisited and reversed. We believe that this report gives the relevant stakeholders and policymakers a lot of information about options and value associated with keeping the plant running, and about how California could benefit from clean water and clean power generated at Diablo Canyon. It’s not up to us to make the decision, of course — that is a decision that must be made by the people of California. All we can do is provide information.

    Q: What are the biggest challenges or obstacles to seeing these ideas implemented?

    Lienhard: California has very strict environmental protection regulations, and it’s good that they do. One of the areas of great concern to California is the health of the ocean and protection of the coastal ecosystem. As a result, very strict rules are in place about the intake and outfall of both power plants and desalination plants, to protect marine life. Our analysis suggests that this combined plant can be implemented within the parameters prescribed by the California Ocean Plan and that it can meet the regulatory requirements.

    We believe that deeper analysis would be needed before you could proceed. You would need to do site studies and really get out into the water and look in detail at what’s there. But the preliminary analysis is positive. A second challenge is that the discourse in California around nuclear power has generally not been very supportive, and similarly some groups in California oppose desalination. We expect that both of those points of view would be part of the conversation about whether or not to proceed with this project.

    Q: How particular is this analysis to the specifics of this location? Are there aspects of it that apply to other nuclear plants, domestically or globally?

    Lienhard: Hundreds of nuclear plants around the world are situated along the coast, and many are in water-stressed regions. Although our analysis focused on Diablo Canyon, we believe that the general findings are applicable to many other seaside nuclear plants, so that this approach and these conclusions could potentially be applied at hundreds of sites worldwide.

  • MIT Energy Initiative awards seven Seed Fund grants for early-stage energy research

    The MIT Energy Initiative (MITEI) has awarded seven Seed Fund grants to support novel, early-stage energy research by faculty and researchers at MIT. The awardees hail from a range of disciplines, but all strive to bring their backgrounds and expertise to address the global climate crisis by improving the efficiency, scalability, and adoption of clean energy technologies.

    “Solving climate change is truly an interdisciplinary challenge,” says MITEI Director Robert C. Armstrong. “The Seed Fund grants foster collaboration and innovation from across all five of MIT’s schools and one college, encouraging an ‘all hands on deck approach’ to developing the energy solutions that will prove critical in combatting this global crisis.”

    This year, MITEI’s Seed Fund grant program received 70 proposals from 86 different principal investigators (PIs) across 25 departments, labs, and centers. Of these proposals, 31 involved collaborations between two or more PIs, including 24 that involved multiple departments.

    The winning projects reflect this collaborative nature with topics addressing the optimization of low-energy thermal cooling in buildings; the design of safe, robust, and resilient distributed power systems; and how to design and site wind farms with consideration of wind resource uncertainty due to climate change.

    Increasing public support for low-carbon technologies

    One winning team aims to leverage work done in the behavioral sciences to motivate sustainable behaviors and promote the adoption of clean energy technologies.

    “Objections to scalable low-carbon technologies such as nuclear energy and carbon sequestration have made it difficult to adopt these technologies and reduce greenhouse gas emissions,” says Howard Herzog, a senior research scientist at MITEI and co-PI. “These objections tend to neglect the sheer scale of energy generation required and the inability to meet this demand solely with other renewable energy technologies.”

    This interdisciplinary team — which includes researchers from MITEI, the Department of Nuclear Science and Engineering, and the MIT Sloan School of Management — plans to convene industry professionals and academics, as well as behavioral scientists, to identify common objections, design messaging to overcome them, and prove that these messaging campaigns have long-lasting impacts on attitudes toward scalable low-carbon technologies.

    “Our aim is to provide a foundation for shifting the public and policymakers’ views about these low-carbon technologies from something they, at best, tolerate, to something they actually welcome,” says co-PI David Rand, the Erwin H. Schell Professor and professor of management science and brain and cognitive sciences at MIT Sloan School of Management.

    Siting and designing wind farms

    Michael Howland, an assistant professor of civil and environmental engineering, will use his Seed Fund grant to develop a foundational methodology for wind farm siting and design that accounts for the uncertainty of wind resources resulting from climate change.

    “The optimal wind farm design and its resulting cost of energy is inherently dependent on the wind resource at the location of the farm,” says Howland. “But wind farms are currently sited and designed based on short-term climate records that do not account for the future effects of climate change on wind patterns.”

    Wind farms are capital-intensive infrastructure that cannot be relocated and often have lifespans exceeding 20 years — all of which make it especially important that developers choose the right locations and designs based not only on wind patterns in the historical climate record, but also based on future predictions. The new siting and design methodology has the potential to replace current industry standards to enable a more accurate risk analysis of wind farm development and energy grid expansion under climate change-driven energy resource uncertainty.

    Membraneless electrolyzers for hydrogen production

    Producing hydrogen from renewable energy-powered water electrolyzers is central to realizing a sustainable and low-carbon hydrogen economy, says Kripa Varanasi, a professor of mechanical engineering and a Seed Fund award recipient. The idea of using hydrogen as a fuel has existed for decades, but it has yet to be widely realized at a considerable scale. Varanasi hopes to change that with his Seed Fund grant.

    “The critical economic hurdle for successful electrolyzers to overcome is the minimization of the capital costs associated with their deployment,” says Varanasi. “So, an immediate task at hand to enable electrochemical hydrogen production at scale will be to maximize the effectiveness of the most mature, least complex, and least expensive water electrolyzer technologies.”

    To do this, he aims to combine the advantages of existing low-temperature alkaline electrolyzer designs with a novel membraneless electrolyzer technology that harnesses a gas management system architecture to minimize complexity and costs, while also improving efficiency. Varanasi hopes his project will demonstrate scalable concepts for cost-effective electrolyzer technology design to help realize a decarbonized hydrogen economy.

    Since its establishment in 2008, the MITEI Seed Fund Program has supported 194 energy-focused seed projects through grants totaling more than $26 million. This funding comes primarily from MITEI’s founding and sustaining members, supplemented by gifts from generous donors.

    Recipients of the 2021 MITEI Seed Fund grants are:

    “Design automation of safe, robust, and resilient distributed power systems” — Chuchu Fan of the Department of Aeronautics and Astronautics
    “Advanced MHD topping cycles: For fission, fusion, solar power plants” — Jeffrey Freidberg of the Department of Nuclear Science and Engineering and Dennis Whyte of the Plasma Science and Fusion Center
    “Robust wind farm siting and design under climate-change‐driven wind resource uncertainty” — Michael Howland of the Department of Civil and Environmental Engineering
    “Low-energy thermal comfort for buildings in the Global South: Optimal design of integrated structural-thermal systems” — Leslie Norford of the Department of Architecture and Caitlin Mueller of the departments of Architecture and Civil and Environmental Engineering
    “New low-cost, high energy-density boron-based redox electrolytes for nonaqueous flow batteries” — Alexander Radosevich of the Department of Chemistry
    “Increasing public support for scalable low-carbon energy technologies using behavioral science insights” — David Rand of the MIT Sloan School of Management, Koroush Shirvan of the Department of Nuclear Science and Engineering, Howard Herzog of the MIT Energy Initiative, and Jacopo Buongiorno of the Department of Nuclear Science and Engineering
    “Membraneless electrolyzers for efficient hydrogen production using nanoengineered 3D gas capture electrode architectures” — Kripa Varanasi of the Department of Mechanical Engineering

  • Crossing disciplines, adding fresh eyes to nuclear engineering

    Sometimes patterns repeat in nature. Spirals appear in sunflowers and hurricanes. Branches occur in veins and lightning. Limiao Zhang, a doctoral student in MIT’s Department of Nuclear Science and Engineering, has found another similarity: between street traffic and boiling water, with implications for preventing nuclear meltdowns.

    Growing up in China, Zhang enjoyed watching her father repair things around the house. He couldn’t fulfill his dream of becoming an engineer, instead joining the police force, but Zhang did have that opportunity and studied mechanical engineering at Three Gorges University. Being one of four girls among about 50 boys in the major didn’t discourage her. “My father always told me girls can do anything,” she says. She graduated at the top of her class.

    In college, she and a team of classmates won a national engineering competition. They designed and built a model of a carousel powered by solar, hydroelectric, and pedal power. One judge asked how long the system could operate safely. “I didn’t have a perfect answer,” she recalls. She realized that engineering means designing products that not only function, but are resilient. So for her master’s degree, at Beihang University, she turned to industrial engineering and analyzed the reliability of critical infrastructure, in particular traffic networks.

    “Among all the critical infrastructures, nuclear power plants are quite special,” Zhang says. “Although one can provide very enormous carbon-free energy, once it fails, it can cause catastrophic results.” So she decided to switch fields again and study nuclear engineering. At the time she had no nuclear background, and hadn’t studied in the United States, but “I tried to step out of my comfort zone,” she says. “I just applied and MIT welcomed me.” Her supervisor, Matteo Bucci, and her classmates explained the basics of fission reactions as she adjusted to the new material, language, and environment. She doubted herself — “my friend told me, ‘I saw clouds above your head’” — but she passed her first-year courses and published her first paper soon afterward.

    Much of the work in Bucci’s lab deals with what’s called the boiling crisis. In many applications, such as nuclear plants and powerful computers, water cools things. When a hot surface boils water, bubbles cling to the surface before rising, but if too many form, they merge into a layer of vapor that insulates the surface. The heat has nowhere to go — a boiling crisis.

    Bucci invited Zhang into his lab in part because she saw a connection between traffic and heat transfer. The data plots of both phenomena look surprisingly similar. “The mathematical tools she had developed for the study of traffic jams were a completely different way of looking into our problem,” Bucci says, “by using something which is intuitively not connected.”

    One can view bubbles as cars. The more there are, the more they interfere with each other. People studying boiling had focused on the physics of individual bubbles. Zhang instead uses statistical physics to analyze collective patterns of behavior. “She brings a different set of skills, a different set of knowledge, to our research,” says Guanyu Su, a postdoc in the lab. “That’s very refreshing.”

    In her first paper on the boiling crisis, published in Physical Review Letters, Zhang used theory and simulations to identify scale-free behavior in boiling: just as in traffic, the same patterns appear whether zoomed in or out, in terms of space or time. Both small and large bubbles matter. Using this insight, the team found certain physical parameters that could predict a boiling crisis. Zhang’s mathematical tools both explain experimental data and suggest new experiments to try. For a second paper, the team collected more data and found ways to predict the boiling crisis in a wider variety of conditions.
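
    In statistical physics, “scale-free” typically means that the relevant distributions follow power laws. As a hedged aside (a general textbook definition, not notation taken from Zhang’s paper), if a quantity such as bubble size s is distributed as

        P(s) \propto s^{-\alpha},

    then rescaling changes only the prefactor, since P(\lambda s) = \lambda^{-\alpha} P(s): zooming in or out leaves the shape of the distribution unchanged, which is why bubbles of all sizes contribute to the behavior.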

    Zhang’s thesis and third paper, both in progress, propose a universal law for explaining the crisis. “She translated the mechanism into a physical law, like F=ma or E=mc²,” Bucci says. “She came up with an equally simple equation.” Zhang says she’s learned a lot from colleagues in the department who are pioneering new nuclear reactors or other technologies, “but for my own work, I try to get down to the very basics of a phenomenon.”

    Bucci describes Zhang as determined, open-minded, and commendably self-critical. Su says she’s careful, optimistic, and courageous. “If I imagine going from heat transfer to city planning, that would be almost impossible for me,” he says. “She has a strong mind.” Last year, Zhang gave birth to a boy, whom she’s raising on her own as she does her research. (Her husband is stuck in China during the pandemic.) “This, to me,” Bucci says, “is almost superhuman.”

    Zhang will graduate at the end of the year, and has started looking for jobs back in China. She wants to continue in the energy field, though maybe not nuclear. “I will use my interdisciplinary knowledge,” she says. “I hope I can design safer and more efficient and more reliable systems to provide energy for our society.”

  • MIT-designed project achieves major advance toward fusion energy

    It was a moment three years in the making, based on intensive research and design work: On Sept. 5, for the first time, a large high-temperature superconducting electromagnet was ramped up to a field strength of 20 tesla, the most powerful magnetic field of its kind ever created on Earth. That successful demonstration helps resolve the greatest uncertainty in the quest to build the world’s first fusion power plant that can produce more power than it consumes, according to the project’s leaders at MIT and startup company Commonwealth Fusion Systems (CFS).

    That advance paves the way, they say, for the long-sought creation of practical, inexpensive, carbon-free power plants that could make a major contribution to limiting the effects of global climate change.

    “Fusion in a lot of ways is the ultimate clean energy source,” says Maria Zuber, MIT’s vice president for research and E. A. Griswold Professor of Geophysics. “The amount of power that is available is really game-changing.” The fuel used to create fusion energy comes from water, and “the Earth is full of water — it’s a nearly unlimited resource. We just have to figure out how to utilize it.”

    Developing the new magnet is seen as the greatest technological hurdle to making that happen; its successful operation now opens the door to demonstrating fusion in a lab on Earth, which has been pursued for decades with limited progress. With the magnet technology now successfully demonstrated, the MIT-CFS collaboration is on track to build the world’s first fusion device that can create and confine a plasma that produces more energy than it consumes. That demonstration device, called SPARC, is targeted for completion in 2025.

    “The challenges of making fusion happen are both technical and scientific,” says Dennis Whyte, director of MIT’s Plasma Science and Fusion Center, which is working with CFS to develop SPARC. But once the technology is proven, he says, “it’s an inexhaustible, carbon-free source of energy that you can deploy anywhere and at any time. It’s really a fundamentally new energy source.”

    Whyte, who is the Hitachi America Professor of Engineering, says this week’s demonstration represents a major milestone, addressing the biggest questions remaining about the feasibility of the SPARC design. “It’s really a watershed moment, I believe, in fusion science and technology,” he says.

    The sun in a bottle

    Fusion is the process that powers the sun: the merger of two small atoms to make a larger one, releasing prodigious amounts of energy. But the process requires temperatures far beyond what any solid material could withstand. To capture the sun’s power source here on Earth, what’s needed is a way of capturing and containing something that hot — 100,000,000 degrees or more — by suspending it in a way that prevents it from coming into contact with anything solid.

    That’s done through intense magnetic fields, which form a kind of invisible bottle to contain the hot swirling soup of protons and electrons, called a plasma. Because the particles have an electric charge, they are strongly controlled by the magnetic fields, and the most widely used configuration for containing them is a donut-shaped device called a tokamak. Most of these devices have produced their magnetic fields using conventional electromagnets made of copper, but the latest and largest version under construction in France, called ITER, uses what are known as low-temperature superconductors.

    The major innovation in the MIT-CFS fusion design is the use of high-temperature superconductors, which enable a much stronger magnetic field in a smaller space. This design was made possible by a new kind of superconducting material that became commercially available a few years ago. The idea initially arose as a class project in a nuclear engineering class taught by Whyte. The idea seemed so promising that it continued to be developed over the next few iterations of that class, leading to the ARC power plant design concept in early 2015. SPARC, designed to be about half the size of ARC, is a testbed to prove the concept before construction of the full-size, power-producing plant.

    Until now, the only way to achieve the colossally powerful magnetic fields needed to create a magnetic “bottle” capable of containing plasma heated up to hundreds of millions of degrees was to make them larger and larger. But the new high-temperature superconductor material, made in the form of a flat, ribbon-like tape, makes it possible to achieve a higher magnetic field in a smaller device, equaling the performance that would be achieved in an apparatus 40 times larger in volume using conventional low-temperature superconducting magnets. That leap in power versus size is the key element in ARC’s revolutionary design.
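
    A rough way to see where a factor like 40 can come from is the commonly quoted tokamak scaling in which fusion power density rises steeply with magnetic field strength. As a hedged, back-of-the-envelope estimate (a standard textbook relation, not a figure taken from the MIT-CFS design papers), at a fixed plasma pressure ratio \beta the fusion power scales roughly as

        P_{\mathrm{fus}} \;\propto\; \beta^{2} B^{4} V
        \qquad\Longrightarrow\qquad
        \frac{V_{\mathrm{HTS}}}{V_{\mathrm{LTS}}} \;\approx\; \left(\frac{B_{\mathrm{LTS}}}{B_{\mathrm{HTS}}}\right)^{4},

    so a field about 2.5 times stronger would, by this estimate, deliver comparable fusion performance in a device roughly 2.5^4 ≈ 40 times smaller in volume.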

    The use of the new high-temperature superconducting magnets makes it possible to apply decades of experimental knowledge gained from the operation of tokamak experiments, including MIT’s own Alcator series. The new approach, led by Zach Hartwig, the MIT principal investigator and the Robert N. Noyce Career Development Assistant Professor of Nuclear Science and Engineering, uses a well-known design but scales everything down to about half the linear size and still achieves the same operational conditions because of the higher magnetic field.

    A series of scientific papers published last year outlined the physical basis and, by simulation, confirmed the viability of the new fusion device. The papers showed that, if the magnets worked as expected, the whole fusion system should indeed produce net power output, for the first time in decades of fusion research.

    Martin Greenwald, deputy director and senior research scientist at the PSFC, says unlike some other designs for fusion experiments, “the niche that we were filling was to use conventional plasma physics, and conventional tokamak designs and engineering, but bring to it this new magnet technology. So, we weren’t requiring innovation in a half-dozen different areas. We would just innovate on the magnet, and then apply the knowledge base of what’s been learned over the last decades.”

    That combination of scientifically established design principles and game-changing magnetic field strength is what makes it possible to achieve a plant that could be economically viable and developed on a fast track. “It’s a big moment,” says Bob Mumgaard, CEO of CFS. “We now have a platform that is both scientifically very well-advanced, because of the decades of research on these machines, and also commercially very interesting. What it does is allow us to build devices faster, smaller, and at less cost,” he says of the successful magnet demonstration. 

    Proof of the concept

    Bringing that new magnet concept to reality required three years of intensive work on design, establishing supply chains, and working out manufacturing methods for magnets that may eventually need to be produced by the thousands.

    “We built a first-of-a-kind, superconducting magnet. It required a lot of work to create unique manufacturing processes and equipment. As a result, we are now well-prepared to ramp-up for SPARC production,” says Joy Dunn, head of operations at CFS. “We started with a physics model and a CAD design, and worked through lots of development and prototypes to turn a design on paper into this actual physical magnet.” That entailed building manufacturing capabilities and testing facilities, including an iterative process with multiple suppliers of the superconducting tape, to help them reach the ability to produce material that met the needed specifications — and for which CFS is now overwhelmingly the world’s biggest user.

    They worked with two possible magnet designs in parallel, both of which ended up meeting the design requirements, she says. “It really came down to which one would revolutionize the way that we make superconducting magnets, and which one was easier to build.” The design they adopted clearly stood out in that regard, she says.

    In this test, the new magnet was gradually powered up in a series of steps until reaching the goal of a 20 tesla magnetic field — the highest field strength ever for a high-temperature superconducting fusion magnet. The magnet is composed of 16 plates stacked together, each one of which by itself would be the most powerful high-temperature superconducting magnet in the world.

    “Three years ago we announced a plan,” says Mumgaard, “to build a 20-tesla magnet, which is what we will need for future fusion machines.” That goal has now been achieved, right on schedule, even with the pandemic, he says.

    Citing the series of physics papers published last year, Brandon Sorbom, the chief science officer at CFS, says “basically the papers conclude that if we build the magnet, all of the physics will work in SPARC. So, this demonstration answers the question: Can they build the magnet? It’s a very exciting time! It’s a huge milestone.”

    The next step will be building SPARC, a smaller-scale version of the planned ARC power plant. The successful operation of SPARC will demonstrate that a full-scale commercial fusion power plant is practical, clearing the way for the rapid design and construction of that pioneering device to proceed at full speed.

    Zuber says that “I now am genuinely optimistic that SPARC can achieve net positive energy, based on the demonstrated performance of the magnets. The next step is to scale up, to build an actual power plant. There are still many challenges ahead, not the least of which is developing a design that allows for reliable, sustained operation. And realizing that the goal here is commercialization, another major challenge will be economic. How do you design these power plants so it will be cost effective to build and deploy them?”

    Someday in a hoped-for future, when there may be thousands of fusion plants powering clean electric grids around the world, Zuber says, “I think we’re going to look back and think about how we got there, and I think the demonstration of the magnet technology, for me, is the time when I believed that, wow, we can really do this.”

    The successful creation of a power-producing fusion device would be a tremendous scientific achievement, Zuber notes. But that’s not the main point. “None of us are trying to win trophies at this point. We’re trying to keep the planet livable.”

  • The boiling crisis — and how to avoid it

    It’s rare for a pre-teen to become enamored with thermodynamics, but those consumed by such a passion may consider themselves lucky to end up at a place like MIT. Madhumitha Ravichandran certainly does. A PhD student in Nuclear Science and Engineering (NSE), Ravichandran first encountered the laws of thermodynamics as a middle school student in Chennai, India. “They made complete sense to me,” she says. “While looking at the refrigerator at home, I wondered if I might someday build energy systems that utilized these same principles. That’s how it started, and I’ve sustained that interest ever since.”

    She’s now drawing on her knowledge of thermodynamics in research carried out in the laboratory of NSE Assistant Professor Matteo Bucci, her doctoral supervisor. Ravichandran and Bucci are gaining key insights into the “boiling crisis” — a problem that has long plagued the energy industry.

    Ravichandran was well prepared for this work by the time she arrived at MIT in 2017. As an undergraduate at India’s Sastra University, she pursued research on “two-phase flows,” examining the transitions water undergoes between its liquid and gaseous forms. She continued to study droplet evaporation and related phenomena during an internship in early 2017 in the Bucci Lab. That was an eye-opening experience, Ravichandran explains. “Back at my university in India, only 2 to 3 percent of the mechanical engineering students were women, and there were no women on the faculty. It was the first time I had faced social inequities because of my gender, and I went through some struggles, to say the least.”

    MIT offered a welcome contrast. “The amount of freedom I was given made me extremely happy,” she says. “I was always encouraged to explore my ideas, and I always felt included.” She was doubly happy because, midway through the internship, she learned that she’d been accepted to MIT’s graduate program.

    As a PhD student, her research has followed a similar path. She continues to study boiling and heat transfer, but Bucci gave this work some added urgency. They’re now investigating the aforementioned boiling crisis, which affects nuclear reactors and other kinds of power plants that rely on steam generation to drive turbines. In a light water nuclear reactor, water is heated by fuel rods in which nuclear fission has occurred. Heat removal is most efficient when the water circulating past the rods boils. However, if too many bubbles form on the surface, enveloping the fuel rods in a layer of vapor, heat transfer is greatly reduced. That not only diminishes power generation, it can also be dangerous because the fuel rods must be continuously cooled to avoid a dreaded meltdown accident.

    Nuclear plants operate at low power ratings to provide an ample safety margin and thereby prevent such a scenario from occurring. Ravichandran believes these standards may be overly cautious, owing to the fact that people aren’t yet sure of the conditions that bring about the boiling crisis. This hurts the economic viability of nuclear power, she says, at a time when we desperately need carbon-free power sources. But Ravichandran and other researchers in the Bucci Lab are starting to fill some major gaps in our understanding.

    They initially ran experiments to determine how quickly bubbles form when water hits a hot surface, how big the bubbles get, how long they grow, and how the surface temperature changes. “A typical experiment lasted two minutes, but it took more than three weeks to pick out every bubble that formed and track its growth and evolution,” Ravichandran explains.

    To streamline this process, she and Bucci are implementing a machine learning approach, based on neural network technology. Neural networks are good at recognizing patterns, including those associated with bubble nucleation. “These networks are data hungry,” Ravichandran says. “The more data they’re fed, the better they perform.” The networks were trained on experimental results pertaining to bubble formation on different surfaces; the networks were then tested on surfaces for which the NSE researchers had no data and didn’t know what to expect.
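
    As a loose illustration of what such a pattern-recognition step can look like in code, the sketch below trains a tiny feed-forward neural network on synthetic bubble statistics. Everything here is an assumption made for illustration: the feature names, the toy labeling rule, and the network size are invented and are not the Bucci Lab’s published model, data, or training procedure.

        # Minimal sketch: a small neural network that maps per-experiment bubble
        # statistics to a "near boiling crisis" label. Features, data, and the
        # labeling rule are hypothetical.
        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical features: bubble site density (1/mm^2), mean bubble
        # footprint diameter (mm), and nucleation frequency (1/s).
        X = rng.uniform(low=[10.0, 0.05, 50.0], high=[200.0, 0.5, 500.0], size=(400, 3))
        # Toy label: "crisis" when a crude coverage proxy exceeds a threshold.
        y = (X[:, 0] * X[:, 1] ** 2 * X[:, 2] > 300.0).astype(float)

        # Standardize features, then train a one-hidden-layer network by gradient descent.
        X = (X - X.mean(axis=0)) / X.std(axis=0)
        W1 = rng.normal(scale=0.5, size=(3, 8)); b1 = np.zeros(8)
        W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

        def forward(X):
            h = np.tanh(X @ W1 + b1)                   # hidden layer
            p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid: crisis probability
            return h, p.ravel()

        lr = 0.1
        for _ in range(2000):
            h, p = forward(X)
            g_out = (p - y)[:, None] / len(y)          # gradient of cross-entropy loss
            g_hid = (g_out @ W2.T) * (1.0 - h ** 2)    # backpropagate through tanh
            W2 -= lr * (h.T @ g_out); b2 -= lr * g_out.sum(axis=0)
            W1 -= lr * (X.T @ g_hid); b1 -= lr * g_hid.sum(axis=0)

        _, p = forward(X)
        print("training accuracy:", float(((p > 0.5) == y).mean()))

    In a real pipeline the inputs would be the measured bubble nucleation and growth statistics from the high-speed experiments, and the model would be validated on surfaces it was never trained on, as described above.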

    After gaining experimental validation of the output from the machine learning models, the team is now trying to get these models to make reliable predictions as to when the boiling crisis itself will occur. The ultimate goal is to have a fully autonomous system that can not only predict the boiling crisis, but also show why it happens and automatically shut down experiments before things go too far and lab equipment starts melting.

    In the meantime, Ravichandran and Bucci have made some important theoretical advances, which they report in a recently published paper in Applied Physics Letters. There had been a debate in the nuclear engineering community as to whether the boiling crisis is caused by bubbles covering the fuel rod surface or by bubbles growing on top of each other, extending outward from the surface. Ravichandran and Bucci determined that it is a surface-level phenomenon. In addition, they’ve identified the three main factors that trigger the boiling crisis. First, there’s the number of bubbles that form over a given surface area and, second, the average bubble size. The third factor is the product of the bubble frequency (the number of bubbles forming within a second at a given site) and the time it takes for a bubble to reach its full size.
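
    One hedged way to see how these three quantities combine (a back-of-the-envelope illustration, not the exact criterion from the Applied Physics Letters paper) is as an estimate of the fraction of the heated surface covered by bubble footprints at any instant:

        \Phi \;\approx\; N'' \,\times\, \frac{\pi}{4}\,\bar{D}^{2} \,\times\, f\,t_{g},

    where N'' is the number of bubble sites per unit area, \bar{D} the average bubble footprint diameter, f the bubble frequency at a site, and t_{g} the time a bubble takes to reach full size (so f\,t_{g} is roughly the fraction of time a site is occupied by a growing bubble). When \Phi approaches 1, bubbles crowd and merge into the insulating vapor layer that defines the boiling crisis.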

    Ravichandran is happy to have shed some new light on this issue but acknowledges that there’s still much work to be done. Although her research agenda is ambitious and nearly all consuming, she never forgets where she came from and the sense of isolation she felt while studying engineering as an undergraduate. She has, on her own initiative, been mentoring female engineering students in India, providing both research guidance and career advice.

    “I sometimes feel there was a reason I went through those early hardships,” Ravichandran says. “That’s what made me decide that I want to be an educator.” She’s also grateful for the opportunities that have opened up for her since coming to MIT. A recipient of a 2021-22 MathWorks Engineering Fellowship, she says, “now it feels like the only limits on me are those that I’ve placed on myself.”

  • A peculiar state of matter in layers of semiconductors

    Scientists around the world are developing new hardware for quantum computers, a new type of device that could accelerate drug design, financial modeling, and weather prediction. These computers rely on qubits, bits of matter that can represent some combination of 1 and 0 simultaneously. The problem is that qubits are fickle, degrading into regular bits when interactions with surrounding matter interfere. But new research at MIT suggests a way to protect their states, using a phenomenon called many-body localization (MBL).

    MBL is a peculiar phase of matter, proposed decades ago, that is unlike solid or liquid. Typically, matter comes to thermal equilibrium with its environment. That’s why soup cools and ice cubes melt. But in MBL, an object consisting of many strongly interacting bodies, such as atoms, never reaches such equilibrium. Heat, like sound, consists of collective atomic vibrations and can travel in waves; an object always has such heat waves internally. But when there’s enough disorder and enough interaction in the way its atoms are arranged, the waves can become trapped, thus preventing the object from reaching equilibrium.

    MBL had been demonstrated in “optical lattices,” arrangements of atoms at very cold temperatures held in place using lasers. But such setups are impractical. MBL had also arguably been shown in solid systems, but only with very slow temporal dynamics, in which the phase’s existence is hard to prove because equilibrium might be reached if researchers could wait long enough. The MIT research found signatures of MBL in a “solid-state” system — one made of semiconductors — that would otherwise have reached equilibrium in the time it was watched.

    “It could open a new chapter in the study of quantum dynamics,” says Rahul Nandkishore, a physicist at the University of Colorado at Boulder, who was not involved in the work.

    Mingda Li, the Norman C. Rasmussen Assistant Professor of Nuclear Science and Engineering at MIT, led the new study, published in a recent issue of Nano Letters. The researchers built a system containing alternating semiconductor layers, creating a microscopic lasagna — aluminum arsenide, followed by gallium arsenide, and so on, for 600 layers, each 3 nanometers (millionths of a millimeter) thick. Between the layers they dispersed “nanodots,” 2-nanometer particles of erbium arsenide, to create disorder. The lasagna, or “superlattice,” came in three recipes: one with no nanodots, one in which nanodots covered 8 percent of each layer’s area, and one in which they covered 25 percent.

    According to Li, the team used layers of material, instead of a bulk material, to simplify the system so dissipation of heat across the planes was essentially one-dimensional. And they used nanodots, instead of mere chemical impurities, to crank up the disorder.

    To determine whether these disordered systems remain out of equilibrium, the researchers probed them with X-rays. Using the Advanced Photon Source at Argonne National Lab, they shot beams of radiation at an energy of more than 20,000 electron volts and resolved the energy difference between the incoming X-rays and those reflected off the sample’s surface with an energy resolution of less than one one-thousandth of an electron volt. To avoid penetrating the superlattice and hitting the underlying substrate, they shot the beam at an angle of just half a degree from parallel.

    Just as light can be measured as waves or particles, so too can heat. The quantized, heat-carrying unit of collective atomic vibration is called a phonon. X-rays interact with these phonons, and by measuring how X-rays reflect off the sample, the experimenters can determine whether it is in equilibrium.
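
    One hedged way to see how a scattering measurement can reveal whether phonons are thermalized (a general textbook relation, not necessarily the specific analysis in the Nano Letters paper): in equilibrium at temperature T, the occupation of a phonon mode of frequency \omega follows the Bose-Einstein distribution,

        n(\omega) = \frac{1}{e^{\hbar\omega/k_{B}T} - 1},

    and detailed balance fixes the ratio of energy-gain to energy-loss scattering intensities at e^{-\hbar\omega/k_{B}T}. Phonon populations whose measured intensities depart from these equilibrium ratios are, by definition, not thermalized.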

    The researchers found that when the superlattice was cold — 30 kelvin, about -400 degrees Fahrenheit — and it contained nanodots, its phonons at certain frequencies remained out of equilibrium.

    More work remains to prove conclusively that MBL has been achieved, but “this new quantum phase can open up a whole new platform to explore quantum phenomena,” Li says, “with many potential applications, from thermal storage to quantum computing.”

    To create qubits, some quantum computers employ specks of matter called quantum dots. Li says quantum dots similar to his nanodots could act as qubits. Magnets could read or write their quantum states, while the many-body localization would keep them insulated from heat and other environmental factors.

    In terms of thermal storage, such a superlattice might switch in and out of an MBL phase by magnetically controlling the nanodots. It could insulate computer parts from heat at one moment, then allow parts to disperse heat when it won’t cause damage. Or it could allow heat to build up and be harnessed later for generating electricity.

    Conveniently, superlattices with nanodots can be constructed using traditional techniques for fabricating semiconductors, alongside other elements of computer chips. According to Li, “It’s a much larger design space than with chemical doping, and there are numerous applications.”

    “I am excited to see that signatures of MBL can now also be found in real material systems,” says Immanuel Bloch, scientific director at the Max-Planck-Institute of Quantum Optics, of the new work. “I believe this will help us to better understand the conditions under which MBL can be observed in different quantum many-body systems and how possible coupling to the environment affects the stability of the system. These are fundamental and important questions and the MIT experiment is an important step helping us to answer them.”

    Funding was provided by the U.S. Department of Energy’s Basic Energy Sciences program’s Neutron Scattering Program.

  • Why boiling droplets can race across hot oily surfaces

    When you’re frying something in a skillet and some droplets of water fall into the pan, you may have noticed those droplets skittering around on top of the film of hot oil. Now, that seemingly trivial phenomenon has been analyzed and understood for the first time by researchers at MIT — and may have important implications for microfluidic devices, heat transfer systems, and other useful functions.

    A droplet of boiling water on a hot surface will sometimes levitate on a thin vapor film, a well-studied phenomenon called the Leidenfrost effect. Because it is suspended on a cushion of vapor, the droplet can move across the surface with little friction. If the surface is coated with hot oil, which has much greater friction than the vapor film under a Leidenfrost droplet, the hot droplet should be expected to move much more slowly. But, counterintuitively, the series of experiments at MIT has shown that the opposite effect happens: The droplet on oil zooms away much more rapidly than on bare metal.

    This effect, which propels droplets across a heated oily surface 10 to 100 times faster than on bare metal, could potentially be used for self-cleaning or de-icing systems, or to propel tiny amounts of liquid through the tiny tubing of microfluidic devices used for biomedical and chemical research and testing. The findings are described today in a paper in the journal Physical Review Letters, written by graduate student Victor Julio Leon and professor of mechanical engineering Kripa Varanasi.

    In previous research, Varanasi and his team showed that it would be possible to harness this phenomenon for some of these potential applications, but the new work, producing such high velocities (approximately 50 times faster), could open up even more new uses, Varanasi says.

    After long and painstaking analysis, Leon and Varanasi were able to determine the reason for the rapid ejection of these droplets from the hot surface. Under the right conditions of high temperature, oil viscosity, and oil thickness, the oil will form a kind of thin cloak coating the outside of each water droplet. As the droplet heats up, tiny bubbles of vapor form along the interface between the droplet and the oil. Because these minuscule bubbles accumulate randomly along the droplet’s base, asymmetries develop, and the lowered friction under the bubble loosens the droplet’s attachment to the surface and propels it away.

    The oily film acts almost like the rubber of a balloon, and when the tiny vapor bubbles burst through, they impart a force and “the balloon just flies off because the air is going out one side, creating a momentum transfer,” Varanasi says. Without the oil cloak, the vapor bubbles would just flow out of the droplet in all directions, preventing self-propulsion, but the cloaking effect holds them in like the skin of the balloon.

    The phenomenon sounds simple, but it turns out to depend on a complex interplay between events happening at different timescales.

    This newly analyzed self-ejection phenomenon depends on a number of factors, including the droplet size, the thickness and viscosity of the oil film, the thermal conductivity of the surface, the surface tension of the different liquids in the system, the type of oil, and the texture of the surface.

    In their experiments, the least viscous of the several oils they tested was still about 100 times more viscous than the surrounding air. So, the oil would have been expected to make droplets move much more slowly than on the air cushion of the Leidenfrost effect. “That gives an idea of how surprising it is that this droplet is moving faster,” Leon says.

    As boiling starts, bubbles will randomly form from some nucleation site that is not right at the droplet’s center. Bubble formation will increase on that side, leading to propulsion in one direction. So far, the researchers have not been able to control the direction of that randomly induced propulsion, but they are now working on some possible ways to control the directionality in the future. “We have ideas of how to trigger the propulsion in controlled directions,” Leon says.

    Remarkably, the tests showed that even though the oil film on the surface, which was a silicon wafer, was only 10 to 100 microns thick — about the thickness of a human hair — its behavior didn’t match the equations for a thin film. Instead, because of the vaporization, the film was actually behaving like an infinitely deep pool of oil. “We were kind of astounded” by that finding, Leon says. While a thin film should have caused it to stick, the virtually infinite pool gave the droplet much lower friction, allowing it to move more rapidly than expected, Leon says.

    The effect depends on the fact that the formation of the tiny bubbles is a much more rapid process than the transfer of heat through the oil film, about a thousand times faster, leaving plenty of time for the asymmetries within the droplet to accumulate. When the bubbles of vapor initially form at the oil-water interface, they are much more insulating than the liquid of the droplet, leading to significant thermal disturbances in the oil film. These disturbances cause the droplet to vibrate, reducing friction and increasing the vaporization rate.

    It took extreme high-speed photography to reveal the details of this rapid effect, Leon says, using a 100,000 frames per second video camera. “You can actually see the fluctuations on the surface,” Leon says.

    Initially, Varanasi says, “we were stumped at multiple levels as to what was going on, because the effect was so unexpected. … It’s a fairly complex answer to what may look seemingly simple, but it really creates this fast propulsion.”

    In practice, the effect means that in certain situations, a simple heating of a surface, by the right amount and with the right kind of oily coating, could cause corrosive scaling drops to be cleared from a surface. Further down the line, once the researchers have more control over directionality, the system could potentially substitute for some high-tech pumps in microfluidic devices to propel droplets through the right tubes at the right time. This might be especially useful in microgravity situations, where ordinary pumps don’t function as usual.

    It may also be possible to attach a payload to the droplets, creating a kind of microscale robotic delivery system, Varanasi says. And while their tests focused on water droplets, potentially it could apply to many different kinds of liquids and sublimating solids, he says.

    The work was supported by the National Science Foundation.

  • Using graphene foam to filter toxins from drinking water

    Some kinds of water pollution, such as algal blooms and plastics that foul rivers, lakes, and marine environments, lie in plain sight. But other contaminants are not so readily apparent, which makes their impact potentially more dangerous. Among these invisible substances is uranium. Leaching into water resources from mining operations, nuclear waste sites, or from natural subterranean deposits, the element can now be found flowing out of taps worldwide.

    In the United States alone, “many areas are affected by uranium contamination, including the High Plains and Central Valley aquifers, which supply drinking water to 6 million people,” says Ahmed Sami Helal, a postdoc in the Department of Nuclear Science and Engineering. This contamination poses a near and present danger. “Even small concentrations are bad for human health,” says Ju Li, the Battelle Energy Alliance Professor of Nuclear Science and Engineering and professor of materials science and engineering.

    Now, a team led by Li has devised a highly efficient method for removing uranium from drinking water. Applying an electric charge to graphene oxide foam, the researchers can capture uranium in solution, which precipitates out as a condensed solid crystal. The foam may be reused up to seven times without losing its electrochemical properties. “Within hours, our process can purify a large quantity of drinking water below the EPA limit for uranium,” says Li.

    A paper describing this work was published this week in Advanced Materials. The two first co-authors are Helal and Chao Wang, a postdoc at MIT during the study, who is now with the School of Materials Science and Engineering at Tongji University, Shanghai. Researchers from Argonne National Laboratory, Taiwan’s National Chiao Tung University, and the University of Tokyo also participated in the research. The Defense Threat Reduction Agency (U.S. Department of Defense) funded later stages of this work.

    Targeting the contaminant

    The project, launched three years ago, began as an effort to find better approaches to environmental cleanup of heavy metals from mining sites. To date, remediation methods for such metals as chromium, cadmium, arsenic, lead, mercury, radium, and uranium have proven limited and expensive. “These techniques are highly sensitive to organics in water, and are poor at separating out the heavy metal contaminants,” explains Helal. “So they involve long operation times, high capital costs, and at the end of extraction, generate more toxic sludge.”

    To the team, uranium seemed a particularly attractive target. Field testing from the U.S. Geological Survey and the Environmental Protection Agency (EPA) has revealed unhealthy levels of uranium moving into reservoirs and aquifers from natural rock sources in the northeastern United States, from ponds and pits storing old nuclear weapons and fuel in places like Hanford, Washington, and from mining activities located in many western states. This kind of contamination is prevalent in many other nations as well. An alarming number of these sites show uranium concentrations close to or above the EPA’s recommended ceiling of 30 parts per billion (ppb) — a level linked to kidney damage, cancer risk, and neurobehavioral changes in humans.

    The critical challenge lay in finding a practical remediation process exclusively sensitive to uranium, capable of extracting it from solution without producing toxic residues. And while earlier research showed that electrically charged carbon fiber could filter uranium from water, the results were partial and imprecise.

    Wang managed to crack these problems — based on her investigation of the behavior of graphene foam used for lithium-sulfur batteries. “The physical performance of this foam was unique because of its ability to attract certain chemical species to its surface,” she says. “I thought the ligands in graphene foam would work well with uranium.”

    Simple, efficient, and clean

    The team set to work transforming graphene foam into the equivalent of a uranium magnet. They learned that by sending an electric charge through the foam, splitting water and releasing hydrogen, they could increase the local pH and induce a chemical change that pulled uranium ions out of solution. The researchers found that the uranium would graft itself onto the foam’s surface, where it formed a never-before-seen crystalline uranium hydroxide. On reversal of the electric charge, the mineral, which resembles fish scales, slipped easily off the foam.
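
    As a simplified schematic of the kind of chemistry at play (the crystalline uranium hydroxide reported here is a new phase, so the exact stoichiometry below is only an illustrative assumption), passing current through the foam reduces water, releasing hydrogen gas and hydroxide ions that raise the local pH; uranyl ions in solution can then hydrolyze and precipitate onto the surface:

        2\,\mathrm{H_{2}O} + 2\,e^{-} \;\longrightarrow\; \mathrm{H_{2}} + 2\,\mathrm{OH^{-}}
        \mathrm{UO_{2}^{2+}} + 2\,\mathrm{OH^{-}} \;\longrightarrow\; \mathrm{UO_{2}(OH)_{2}}\,(\mathrm{s})

    Reversing the applied potential lowers the local pH again, consistent with the precipitate releasing easily from the foam, as described above.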

    It took hundreds of tries to get the chemical composition and electrolysis just right. “We kept changing the functional chemical groups to get them to work correctly,” says Helal. “And the foam was initially quite fragile, tending to break into pieces, so we needed to make it stronger and more durable,” says Wang.

    This uranium filtration process is simple, efficient, and clean, according to Li: “Each time it’s used, our foam can capture four times its own weight of uranium, and we can achieve an extraction capacity of 4,000 mg per gram, which is a major improvement over other methods,” he says. “We’ve also made a major breakthrough in reusability, because the foam can go through seven cycles without losing its extraction efficiency.” The graphene foam functions as well in seawater, where it reduces uranium concentrations from 3 parts per million to 19.9 ppb, showing that other ions in the brine do not interfere with filtration.

    The team believes its low-cost, effective device could become a new kind of home water filter, fitting on faucets like those of commercial brands. “Some of these filters already have activated carbon, so maybe we could modify these, add low-voltage electricity to filter uranium,” says Li.

    “The uranium extraction this device achieves is very impressive when compared to existing methods,” says Ho Jin Ryu, associate professor of nuclear and quantum engineering at the Korea Advanced Institute of Science and Technology. Ryu, who was not involved in the research, believes that the demonstration of graphene foam reusability is a “significant advance,” and that “the technology of local pH control to enhance uranium deposition will be impactful because the scientific principle can be applied more generally to heavy metal extraction from polluted water.”

    The researchers have already begun investigating broader applications of their method. “There is a science to this, so we can modify our filters to be selective for other heavy metals such as lead, mercury, and cadmium,” says Li. He notes that radium is another significant danger for locales in the United States and elsewhere that lack resources for reliable drinking water infrastructure.

    “In the future, instead of a passive water filter, we could be using a smart filter powered by clean electricity that turns on electrolytic action, which could extract multiple toxic metals, tell you when to regenerate the filter, and give you quality assurance about the water you’re drinking.”