More stories

  • MIT engineers develop a magnetic transistor for more energy-efficient electronics

    Transistors, the building blocks of modern electronics, are typically made of silicon. Because it’s a semiconductor, this material can control the flow of electricity in a circuit. But silicon has fundamental physical limits that restrict how compact and energy-efficient a transistor can be.

    MIT researchers have now replaced silicon with a magnetic semiconductor, creating a magnetic transistor that could enable smaller, faster, and more energy-efficient circuits. The material’s magnetism strongly influences its electronic behavior, leading to more efficient control of the flow of electricity. The team used a novel magnetic material and an optimization process that reduces the material’s defects, which boosts the transistor’s performance.

    The material’s unique magnetic properties also allow for transistors with built-in memory, which would simplify circuit design and unlock new applications for high-performance electronics.

    “People have known about magnets for thousands of years, but there are very limited ways to incorporate magnetism into electronics. We have shown a new way to efficiently utilize magnetism that opens up a lot of possibilities for future applications and research,” says Chung-Tao Chou, an MIT graduate student in the departments of Electrical Engineering and Computer Science (EECS) and Physics, and co-lead author of a paper on this advance.

    Chou is joined on the paper by co-lead author Eugene Park, a graduate student in the Department of Materials Science and Engineering (DMSE); Julian Klein, a DMSE research scientist; Josep Ingla-Aynes, a postdoc in the MIT Plasma Science and Fusion Center; Jagadeesh S. Moodera, a senior research scientist in the Department of Physics; senior authors Frances Ross, TDK Professor in DMSE, and Luqiao Liu, an associate professor in EECS and a member of the Research Laboratory of Electronics; and others at the University of Chemistry and Technology in Prague. The paper appears today in Physical Review Letters.

    Overcoming the limits

    In an electronic device, silicon semiconductor transistors act like tiny light switches that turn a circuit on and off, or amplify weak signals in a communication system. They do this using a small input voltage. But a fundamental physical limit of silicon semiconductors prevents a transistor from operating below a certain voltage, which hinders its energy efficiency.

    To make more efficient electronics, researchers have spent decades working toward magnetic transistors that utilize electron spin to control the flow of electricity. Electron spin is a fundamental property that enables electrons to behave like tiny magnets.

    So far, scientists have mostly been limited to using certain magnetic materials. These lack the favorable electronic properties of semiconductors, constraining device performance.

    “In this work, we combine magnetism and semiconductor physics to realize useful spintronic devices,” Liu says.

    The researchers replace the silicon in the surface layer of a transistor with chromium sulfur bromide, a two-dimensional material that acts as a magnetic semiconductor. Due to the material’s structure, researchers can switch between two magnetic states very cleanly. This makes it ideal for use in a transistor that smoothly switches between “on” and “off.”

    “One of the biggest challenges we faced was finding the right material. We tried many other materials that didn’t work,” Chou says.

    They discovered that changing these magnetic states modifies the material’s electronic properties, enabling low-energy operation. And unlike many other 2D materials, chromium sulfur bromide remains stable in air.

    To make a transistor, the researchers pattern electrodes onto a silicon substrate, then carefully align and transfer the 2D material on top. They use tape to pick up a tiny piece of material, only a few tens of nanometers thick, and place it onto the substrate.

    “A lot of researchers will use solvents or glue to do the transfer, but transistors require a very clean surface. We eliminate all those risks by simplifying this step,” Chou says.

    Leveraging magnetism

    This lack of contamination enables their device to outperform existing magnetic transistors. Most others can only create a weak magnetic effect, changing the flow of current by a few percent or less. Their new transistor can switch or amplify the electric current by a factor of 10.

    They use an external magnetic field to change the magnetic state of the material, switching the transistor using significantly less energy than would usually be required. The material also allows them to control the magnetic states with electric current. This is important because engineers cannot apply magnetic fields to individual transistors in an electronic device. They need to control each one electrically.

    The material’s magnetic properties could also enable transistors with built-in memory, simplifying the design of logic or memory circuits. A typical memory device has a magnetic cell to store information and a transistor to read it out. Their method can combine both into one magnetic transistor.

    “Now, not only are transistors turning on and off, they are also remembering information. And because we can switch the transistor with greater magnitude, the signal is much stronger, so we can read out the information faster and in a much more reliable way,” Liu says.

    Building on this demonstration, the researchers plan to further study the use of electrical current to control the device. They are also working to make their method scalable so they can fabricate arrays of transistors.

    This research was supported, in part, by the Semiconductor Research Corporation, the U.S. Defense Advanced Research Projects Agency (DARPA), the U.S. National Science Foundation (NSF), the U.S. Department of Energy, the U.S. Army Research Office, and the Czech Ministry of Education, Youth, and Sports. The work was partially carried out at the MIT.nano facilities.

  • MIT’s work with Idaho National Laboratory advances America’s nuclear industry

    At the center of nuclear reactors across the United States, a new type of chromium-coated fuel is being used to make the reactors more efficient and more resistant to accidents. The fuel is one of many innovations sprung from collaboration between researchers at MIT and the Idaho National Laboratory (INL) — a relationship that has altered the trajectory of the country’s nuclear industry.

    Amid renewed excitement around nuclear energy in America, MIT’s research community is working to further develop next-generation fuels, accelerate the deployment of small modular reactors (SMRs), and enable the first nuclear reactor in space.

    Researchers at MIT and INL have worked closely for decades, and the collaboration takes many forms, including joint research efforts, student and postdoc internships, and a standing agreement that lets INL employees spend extended periods on MIT’s campus researching and teaching classes. MIT is also a founding member of the Battelle Energy Alliance, which has managed the Idaho National Laboratory for the Department of Energy since 2005. The collaboration gives MIT’s community a chance to work on the biggest problems facing America’s nuclear industry while bolstering INL’s research infrastructure.

    “The Idaho National Laboratory is the lead lab for nuclear energy technology in the United States today — that’s why it’s essential that MIT works hand in hand with INL,” says Jacopo Buongiorno, the Battelle Energy Alliance Professor in Nuclear Science and Engineering at MIT. “Countless MIT students and postdocs have interned at INL over the years, and a memorandum of understanding that strengthened the collaboration between MIT and INL in 2019 has been extended twice.”

    Ian Waitz, MIT’s vice president for research, adds, “The strong collaborative history between MIT and the Idaho National Laboratory enables us to jointly contribute practical technologies to enable the growth of clean, safe nuclear energy. It’s a clear example of how rigorous collaboration across sectors, and among the nation’s top research facilities, can advance U.S. economic prosperity, health, and well-being.”

    Research with impact

    Much of MIT’s joint research with INL involves tests and simulations of new nuclear materials, fuels, and instrumentation. One of the largest collaborations was part of a global push for more accident-tolerant fuels in the wake of the nuclear accident that followed the 2011 earthquake and tsunami in Fukushima, Japan. In a series of studies involving INL and members of the nuclear energy industry, MIT researchers helped identify and evaluate alloy materials that could be deployed in the near term to not only bolster safety but also offer higher densities of fuel.

    “These new alloys can withstand much more challenging conditions during abnormal occurrences without reacting chemically with steam, which could result in hydrogen explosions during accidents,” explains Buongiorno, who is also the director of science and technology at MIT’s Nuclear Reactor Laboratory and the director of MIT’s Center for Advanced Nuclear Energy Systems. “The fuels can take much more abuse without breaking apart in the reactor, resulting in a higher safety margin.”

    The fuels tested at MIT were eventually adopted by power plants across the U.S., starting with the Byron Clean Energy Center in Ogle County, Illinois.

    “We’re also developing new materials, fuels, and instrumentation,” Buongiorno says. “People don’t just come to MIT and say, ‘I have this idea, evaluate it for me.’ We collaborate with industry and national labs to develop the new ideas together, and then we put them to the test, reproducing the environment in which these materials and fuels would operate in commercial power reactors. That capability is quite unique.”

    Another major collaboration was led by Koroush Shirvan, MIT’s Atlantic Richfield Career Development Professor in Energy Studies. Shirvan’s team analyzed the costs associated with different reactor designs, eventually developing an open-source tool to help industry leaders evaluate the feasibility of different approaches.

    “The reason we’re not building a single nuclear reactor in the U.S. right now is cost and financial risk,” Shirvan says. “The projects have gone over budget by a factor of two and their schedules have lengthened by a factor of 1.5, so we’ve been doing a lot of work assessing the risk drivers. There are also a lot of different types of reactors proposed, so we’ve looked at their cost potential as well, and how those costs change if you can mass manufacture them.”

    Other INL-supported research of Shirvan’s involves exploring new manufacturing methods for nuclear fuels and testing materials for use in a nuclear reactor on the surface of the moon. “You want materials that are lightweight for these nuclear reactors because you have to send them to space, but there isn’t much data around how those light materials perform in nuclear environments,” Shirvan says.

    People and progress

    Every summer, MIT students at every level travel to Idaho to conduct research in INL labs as interns. “It’s an example of our students getting access to cutting-edge research facilities,” Shirvan says.

    There are also several joint research appointments between the institutions. One such appointment is held by Sacit Cetiner, a distinguished scientist at INL who also currently runs the MIT and INL Joint Center for Reactor Instrumentation and Sensor Physics (CRISP) at MIT’s Nuclear Reactor Laboratory. CRISP focuses its research on key technology areas in the field of instrumentation and controls, which have long weighed on the bottom line of nuclear power generation.

    “For the current light-water reactor fleet, operations and maintenance expenditures constitute a sizeable fraction of unit electricity generation cost,” says Cetiner. “In order to make advanced reactors economically competitive, it’s much more reasonable to address anticipated operational issues during the design phase. One such critical technology area is remote and autonomous operations. Working directly with INL, which manages the projects for the design and testing of several advanced reactors under a number of federal programs, gives our students, faculty, and researchers opportunities to make a real impact.”

    The sharing of experts helps strengthen MIT and the nation’s nuclear workforce overall. “MIT has a crucial role to play in advancing the country’s nuclear industry, whether that’s testing and developing new technologies or assessing the economic feasibility of new nuclear designs,” Buongiorno says.
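    The overrun pattern Shirvan describes compounds: a schedule slip adds financing cost on top of the raw budget overrun, because interest accrues for longer on money already spent. A minimal sketch of that arithmetic — this is not the MIT group’s open-source tool, and the overnight cost, build time, and interest rate are illustrative assumptions:

```python
# Toy illustration of how budget and schedule overruns compound through
# financing costs. All numbers are hypothetical, not from the article.
def total_project_cost(overnight_cost, years, annual_rate):
    """Overnight cost plus construction-period interest, assuming the
    budget is spent uniformly, one tranche per year."""
    spend = overnight_cost / years
    total = 0.0
    for y in range(years):
        # money spent in year y accrues interest until construction ends
        total += spend * (1 + annual_rate) ** (years - y)
    return total

base = total_project_cost(overnight_cost=6e9, years=6, annual_rate=0.08)
# the overrun pattern described above: 2x budget, 1.5x schedule
overrun = total_project_cost(overnight_cost=12e9, years=9, annual_rate=0.08)
print(overrun / base)  # multiplier exceeds the bare 2x budget factor
```

    Even in this crude model, the doubled budget and stretched schedule together push the financed cost to more than twice the baseline, which is why schedule risk shows up as a distinct cost driver.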

  • New tool makes generative AI models more likely to create breakthrough materials

    The artificial intelligence models that turn text into images are also useful for generating new materials. Over the last few years, generative materials models from companies like Google, Microsoft, and Meta have drawn on their training data to help researchers design tens of millions of new materials.

    But when it comes to designing materials with exotic quantum properties like superconductivity or unique magnetic states, those models struggle. That’s too bad, because humans could use the help. For example, after a decade of research into a class of materials that could revolutionize quantum computing, called quantum spin liquids, only a dozen material candidates have been identified. The bottleneck means there are fewer materials to serve as the basis for technological breakthroughs.

    Now, MIT researchers have developed a technique that lets popular generative materials models create promising quantum materials by following specific design rules. The rules, or constraints, steer models to create materials with unique structures that give rise to quantum properties.

    “The models from these large companies generate materials optimized for stability,” says Mingda Li, MIT’s Class of 1947 Career Development Professor. “Our perspective is that’s not usually how materials science advances. We don’t need 10 million new materials to change the world. We just need one really good material.”

    The approach is described today in a paper published in Nature Materials. The researchers applied their technique to generate millions of candidate materials consisting of geometric lattice structures associated with quantum properties. From that pool, they synthesized two actual materials with exotic magnetic traits.

    “People in the quantum community really care about these geometric constraints, like the Kagome lattices that are two overlapping, upside-down triangles. We created materials with Kagome lattices because those materials can mimic the behavior of rare earth elements, so they are of high technical importance,” Li says.

    Li is the senior author of the paper. His MIT co-authors include PhD students Ryotaro Okabe, Mouyang Cheng, Abhijatmedhi Chotrattanapituk, and Denisse Cordova Carrizales; postdoc Manasi Mandal; undergraduate researchers Kiran Mak and Bowen Yu; visiting scholar Nguyen Tuan Hung; Xiang Fu ’22, PhD ’24; and professor of electrical engineering and computer science Tommi Jaakkola, who is an affiliate of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and Institute for Data, Systems, and Society. Additional co-authors include Yao Wang of Emory University, Weiwei Xie of Michigan State University, YQ Cheng of Oak Ridge National Laboratory, and Robert Cava of Princeton University.

    Steering models toward impact

    A material’s properties are determined by its structure, and quantum materials are no different. Certain atomic structures are more likely to give rise to exotic quantum properties than others. For instance, square lattices can serve as a platform for high-temperature superconductors, while other shapes known as Kagome and Lieb lattices can support the creation of materials that could be useful for quantum computing.

    To help a popular class of generative models known as diffusion models produce materials that conform to particular geometric patterns, the researchers created SCIGEN (short for Structural Constraint Integration in GENerative model). SCIGEN is computer code that ensures diffusion models adhere to user-defined constraints at each iterative generation step. With SCIGEN, users can give any generative AI diffusion model geometric structural rules to follow as it generates materials.

    AI diffusion models work by sampling from their training dataset to generate structures that reflect the distribution of structures found in the dataset. SCIGEN blocks generations that don’t align with the structural rules.

    To test SCIGEN, the researchers applied it to a popular AI materials generation model known as DiffCSP. They had the SCIGEN-equipped model generate materials with unique geometric patterns known as Archimedean lattices, which are collections of 2D lattice tilings of different polygons. Archimedean lattices can lead to a range of quantum phenomena and have been the focus of much research.

    “Archimedean lattices give rise to quantum spin liquids and so-called flat bands, which can mimic the properties of rare earths without rare earth elements, so they are extremely important,” says Cheng, a co-corresponding author of the work. “Other Archimedean lattice materials have large pores that could be used for carbon capture and other applications, so it’s a collection of special materials. In some cases, there are no known materials with that lattice, so I think it will be really interesting to find the first material that fits in that lattice.”

    The model generated over 10 million material candidates with Archimedean lattices. One million of those materials survived a screening for stability. Using the supercomputers at Oak Ridge National Laboratory, the researchers then took a smaller sample of 26,000 materials and ran detailed simulations to understand how the materials’ underlying atoms behaved. The researchers found magnetism in 41 percent of those structures.

    From that subset, the researchers synthesized two previously undiscovered compounds, TiPdBi and TiPbSb, in Xie’s and Cava’s labs. Subsequent experiments showed the AI model’s predictions largely aligned with the actual materials’ properties.

    “We wanted to discover new materials that could have a huge potential impact by incorporating these structures that have been known to give rise to quantum properties,” says Okabe, the paper’s first author. “We already know that these materials with specific geometric patterns are interesting, so it’s natural to start with them.”

    Accelerating material breakthroughs

    Quantum spin liquids could unlock quantum computing by enabling stable, error-resistant qubits that serve as the basis of quantum operations. But no quantum spin liquid materials have been confirmed. Xie and Cava believe SCIGEN could accelerate the search for these materials.

    “There’s a big search for quantum computer materials and topological superconductors, and these are all related to the geometric patterns of materials,” Xie says.

    “But experimental progress has been very, very slow,” Cava adds. “Many of these quantum spin liquid materials are subject to constraints: They have to be in a triangular lattice or a Kagome lattice. If the materials satisfy those constraints, the quantum researchers get excited; it’s a necessary but not sufficient condition. So, by generating many, many materials like that, it immediately gives experimentalists hundreds or thousands more candidates to play with to accelerate quantum computer materials research.”

    “This work presents a new tool, leveraging machine learning, that can predict which materials will have specific elements in a desired geometric pattern,” says Drexel University Professor Steve May, who was not involved in the research. “This should speed up the development of previously unexplored materials for applications in next-generation electronic, magnetic, or optical technologies.”

    The researchers stress that experimentation is still critical to assess whether AI-generated materials can be synthesized and how their actual properties compare with model predictions. Future work on SCIGEN could incorporate additional design rules into generative models, including chemical and functional constraints.

    “People who want to change the world care about material properties more than the stability and structure of materials,” Okabe says. “With our approach, the ratio of stable materials goes down, but it opens the door to generating a whole bunch of promising materials.”

    The work was supported, in part, by the U.S. Department of Energy, the National Energy Research Scientific Computing Center, the National Science Foundation, and Oak Ridge National Laboratory.
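    The core mechanism described above — re-imposing a user-defined geometric constraint at every iterative generation step so the target motif survives in the final structure — can be sketched in toy form. This is a hypothetical illustration, not the released SCIGEN or DiffCSP code: the 2D kagome geometry, noise schedule, and “snap back to the constrained sites” rule are simplifying assumptions standing in for a real diffusion model’s denoising update.

```python
import numpy as np

def kagome_sites(n_cells=2, a=1.0):
    """Ideal 2D kagome lattice sites: three atoms per rhombic unit cell."""
    basis = np.array([[0.0, 0.0], [0.5, 0.0], [0.25, np.sqrt(3) / 4]]) * a
    shifts = np.array([[i + 0.5 * j, j * np.sqrt(3) / 2]
                       for i in range(n_cells) for j in range(n_cells)]) * a
    return (shifts[:, None, :] + basis[None, :, :]).reshape(-1, 2)

def constrained_step(positions, target, mask, noise_scale, rng):
    """One toy 'denoising' step: every atom takes a random update, then
    the constrained sublattice is snapped back onto the target motif."""
    positions = positions + rng.normal(0.0, noise_scale, positions.shape)
    positions[mask] = target  # enforce the geometric constraint
    return positions

rng = np.random.default_rng(0)
target = kagome_sites()
n_atoms = len(target) + 4             # a few unconstrained "filler" atoms
pos = rng.uniform(0.0, 2.0, size=(n_atoms, 2))
mask = np.zeros(n_atoms, dtype=bool)
mask[: len(target)] = True

# anneal the noise to zero, re-imposing the constraint at every step
for scale in np.linspace(0.5, 0.0, 20):
    pos = constrained_step(pos, target, mask, scale, rng)

assert np.allclose(pos[mask], target)  # the kagome motif survives generation
```

    The unconstrained atoms end up wherever the (toy) generative dynamics put them, while the masked sublattice is guaranteed to carry the desired lattice — mirroring how a constraint layer can steer an otherwise unconstrained generator.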

  • New self-assembling material could be the key to recyclable EV batteries

    Today’s electric vehicle boom is tomorrow’s mountain of electronic waste. And while myriad efforts are underway to improve battery recycling, many EV batteries still end up in landfills.

    A research team from MIT wants to help change that with a new kind of self-assembling battery material that quickly breaks apart when submerged in a simple organic liquid. In a new paper published in Nature Chemistry, the researchers showed the material can work as the electrolyte in a functioning, solid-state battery cell and then revert back to its original molecular components in minutes.

    The approach offers an alternative to shredding the battery into a mixed, hard-to-recycle mass. Instead, because the electrolyte serves as the battery’s connecting layer, when the new material returns to its original molecular form, the entire battery disassembles, accelerating the recycling process.

    “So far in the battery industry, we’ve focused on high-performing materials and designs, and only later tried to figure out how to recycle batteries made with complex structures and hard-to-recycle materials,” says the paper’s first author, Yukio Cho PhD ’23. “Our approach is to start with easily recyclable materials and figure out how to make them battery-compatible. Designing batteries for recyclability from the beginning is a new approach.”

    Joining Cho on the paper are PhD candidate Cole Fincher, Ty Christoff-Tempesta PhD ’22, Kyocera Professor of Ceramics Yet-Ming Chiang, Visiting Associate Professor Julia Ortony, Xiaobing Zuo, and Guillaume Lamour.

    Better batteries

    There’s a scene in one of the “Harry Potter” films where Professor Dumbledore cleans a dilapidated home with a flick of the wrist and a spell. Cho says that image stuck with him as a kid. (What better way to clean your room?) When he saw a talk by Ortony on engineering molecules so that they could assemble into complex structures and then revert back to their original form, he wondered if it could be used to make battery recycling work like magic.

    That would be a paradigm shift for the battery industry. Today, batteries require harsh chemicals, high heat, and complex processing to recycle. There are three main parts of a battery: the positively charged cathode, the negatively charged anode, and the electrolyte that shuttles lithium ions between them. The electrolytes in most lithium-ion batteries are highly flammable and degrade over time into toxic byproducts that require specialized handling.

    To simplify the recycling process, the researchers decided to make a more sustainable electrolyte. For that, they turned to a class of molecules that self-assemble in water, named aramid amphiphiles (AAs), whose chemical structures and stability mimic those of Kevlar. The researchers further designed the AAs to contain polyethylene glycol (PEG), which can conduct lithium ions, on one end of each molecule. When the molecules are exposed to water, they spontaneously form nanoribbons with ion-conducting PEG surfaces and bases that imitate the robustness of Kevlar through tight hydrogen bonding. The result is a mechanically stable nanoribbon structure that conducts ions across its surface.

    “The material is composed of two parts,” Cho explains. “The first part is this flexible chain that gives us a nest, or host, for lithium ions to jump around. The second part is this strong organic material component that is used in Kevlar, which is a bulletproof material. Those make the whole structure stable.”

    When added to water, the molecules self-assemble into millions of nanoribbons that can be hot-pressed into a solid-state material. “Within five minutes of being added to water, the solution becomes gel-like, indicating there are so many nanofibers formed in the liquid that they start to entangle each other,” Cho says. “What’s exciting is we can make this material at scale because of the self-assembly behavior.”

    The team tested the material’s strength and toughness, finding it could endure the stresses associated with making and running the battery. They also constructed a solid-state battery cell that used lithium iron phosphate for the cathode and lithium titanium oxide as the anode, both common materials in today’s batteries. The nanoribbons moved lithium ions successfully between the electrodes, but a side effect known as polarization limited the movement of lithium ions into the battery’s electrodes during fast bouts of charging and discharging, hampering its performance compared to today’s gold-standard commercial batteries.

    “The lithium ions moved along the nanofiber all right, but getting the lithium ion from the nanofibers to the metal oxide seems to be the most sluggish point of the process,” Cho says.

    When they immersed the battery cell in organic solvents, the material immediately dissolved, with each part of the battery falling away for easier recycling. Cho compared the material’s reaction to cotton candy being submerged in water.

    “The electrolyte holds the two battery electrodes together and provides the lithium-ion pathways,” Cho says. “So, when you want to recycle the battery, the entire electrolyte layer can fall off naturally and you can recycle the electrodes separately.”

    Validating a new approach

    Cho says the material is a proof of concept that demonstrates the recycle-first approach. “We don’t want to say we solved all the problems with this material,” Cho says. “Our battery performance was not fantastic because we used only this material as the entire electrolyte for the paper, but what we’re picturing is using this material as one layer in the battery electrolyte. It doesn’t have to be the entire electrolyte to kick off the recycling process.” Cho also sees a lot of room for optimizing the material’s performance with further experiments.

    Now, the researchers are exploring ways to integrate these kinds of materials into existing battery designs, as well as implementing the ideas into new battery chemistries. “It’s very challenging to convince existing vendors to do something very differently,” Cho says. “But with new battery materials that may come out in five or 10 years, it could be easier to integrate this into new designs in the beginning.”

    Cho also believes the approach could help reshore lithium supplies by reusing materials from batteries that are already in the U.S. “People are starting to realize how important this is,” Cho says. “If we can start to recycle lithium-ion batteries from battery waste at scale, it’ll have the same effect as opening lithium mines in the U.S. Also, each battery requires a certain amount of lithium, so extrapolating out the growth of electric vehicles, we need to reuse this material to avoid massive lithium price spikes.”

    The work was supported, in part, by the National Science Foundation and the U.S. Department of Energy.

  • New method could monitor corrosion and cracking in a nuclear reactor

    MIT researchers have developed a technique that enables real-time, 3D monitoring of corrosion, cracking, and other material failure processes inside a nuclear reactor environment.This could allow engineers and scientists to design safer nuclear reactors that also deliver higher performance for applications like electricity generation and naval vessel propulsion.During their experiments, the researchers utilized extremely powerful X-rays to mimic the behavior of neutrons interacting with a material inside a nuclear reactor.They found that adding a buffer layer of silicon dioxide between the material and its substrate, and keeping the material under the X-ray beam for a longer period of time, improves the stability of the sample. This allows for real-time monitoring of material failure processes.By reconstructing 3D image data on the structure of a material as it fails, researchers could design more resilient materials that can better withstand the stress caused by irradiation inside a nuclear reactor.“If we can improve materials for a nuclear reactor, it means we can extend the life of that reactor. It also means the materials will take longer to fail, so we can get more use out of a nuclear reactor than we do now. The technique we’ve demonstrated here allows to push the boundary in understanding how materials fail in real-time,” says Ericmoore Jossou, who has shared appointments in the Department of Nuclear Science and Engineering (NSE), where he is the John Clark Hardwick Professor, and the Department of Electrical Engineering and Computer Science (EECS), and the MIT Schwarzman College of Computing.Jossou, senior author of a study on this technique, is joined on the paper by lead author David Simonne, an NSE postdoc; Riley Hultquist, a graduate student in NSE; Jiangtao Zhao, of the European Synchrotron; and Andrea Resta, of Synchrotron SOLEIL. 
The research was published Tuesday in the journal Scripta Materialia.

“Only with this technique can we measure strain with a nanoscale resolution during corrosion processes. Our goal is to bring such novel ideas to the nuclear science community while using synchrotrons both as an X-ray probe and radiation source,” adds Simonne.

Real-time imaging

Studying the real-time failure of materials used in advanced nuclear reactors has long been a goal of Jossou’s research group. Usually, researchers can only learn about such material failures after the fact, by removing the material from its environment and imaging it with a high-resolution instrument.

“We are interested in watching the process as it happens. If we can do that, we can follow the material from beginning to end and see when and how it fails. That helps us understand a material much better,” he says.

They simulate the process by firing an extremely focused X-ray beam at a sample to mimic the environment inside a nuclear reactor. The researchers must use a special type of high-intensity X-ray, which is available at only a handful of experimental facilities worldwide.

For these experiments, they studied nickel, a material incorporated into alloys that are commonly used in advanced nuclear reactors. But before they could start the X-ray equipment, they had to prepare a sample.

To do this, the researchers used a process called solid-state dewetting, which involves putting a thin film of the material onto a substrate and heating it to an extremely high temperature in a furnace until it transforms into single crystals.

“We thought making the samples was going to be a walk in the park, but it wasn’t,” Jossou says.

As the nickel heated up, it interacted with the silicon substrate and formed a new chemical compound, essentially derailing the entire experiment. After much trial and error, the researchers found that adding a thin layer of silicon dioxide between the nickel and the substrate prevented this reaction.

But when crystals formed on top of the buffer layer, they were highly strained. This means the individual atoms had moved slightly to new positions, causing distortions in the crystal structure. Phase retrieval algorithms can typically recover the 3D size and shape of a crystal in real time, but if there is too much strain in the material, the algorithms will fail.

However, the team was surprised to find that keeping the X-ray beam trained on the sample for a longer period of time caused the strain to slowly relax, due to the silicon buffer layer. After a few extra minutes of X-rays, the sample was stable enough that they could use phase retrieval algorithms to accurately recover the 3D shape and size of the crystal.

“No one had been able to do that before. Now that we can make this crystal, we can image electrochemical processes like corrosion in real time, watching the crystal fail in 3D under conditions that are very similar to inside a nuclear reactor. This has far-reaching impacts,” he says.

They also experimented with other substrates, such as niobium-doped strontium titanate, and found that only a silicon dioxide-buffered silicon wafer created this unique effect.

An unexpected result

As they fine-tuned the experiment, the researchers discovered something else. They could also use the X-ray beam to precisely control the amount of strain in the material, which could have implications for the development of microelectronics. In the microelectronics community, engineers often introduce strain to deform a material’s crystal structure in a way that boosts its electrical or optical properties.

“With our technique, engineers can use X-rays to tune the strain in microelectronics while they are manufacturing them. While this was not our goal with these experiments, it is like getting two results for the price of one,” he adds.

In the future, the researchers want to apply this technique to more complex materials like steel and other metal alloys used in nuclear reactors and aerospace applications. They also want to see how changing the thickness of the silicon dioxide buffer layer impacts their ability to control the strain in a crystal sample.

“This discovery is significant for two reasons. First, it provides fundamental insight into how nanoscale materials respond to radiation — a question of growing importance for energy technologies, microelectronics, and quantum materials. Second, it highlights the critical role of the substrate in strain relaxation, showing that the supporting surface can determine whether particles retain or release strain when exposed to focused X-ray beams,” says Edwin Fohtung, an associate professor at Rensselaer Polytechnic Institute, who was not involved with this work.

This work was funded, in part, by the MIT Faculty Startup Fund and the U.S. Department of Energy. The sample preparation was carried out, in part, at the MIT.nano facilities.
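The phase retrieval step described above can be illustrated with a minimal sketch. This is not the team’s reconstruction code (Bragg coherent diffraction imaging involves far more machinery, and strong strain is exactly what breaks this simple scheme); it is a generic error-reduction iteration, with the grid size, test object, and support mask chosen purely for illustration:

```python
import numpy as np

def error_reduction(magnitudes, support, n_iter=200, seed=0):
    """Minimal error-reduction phase retrieval (Gerchberg-Saxton/Fienup style).

    magnitudes: measured far-field amplitudes |F| on the object grid
    support:    boolean mask where the object is allowed to be nonzero
    """
    rng = np.random.default_rng(seed)
    # start from the measured magnitudes paired with random phases
    phases = rng.uniform(0, 2 * np.pi, magnitudes.shape)
    guess = np.fft.ifft2(magnitudes * np.exp(1j * phases))
    for _ in range(n_iter):
        F = np.fft.fft2(guess)
        # Fourier-space constraint: keep the phases, impose measured magnitudes
        F = magnitudes * np.exp(1j * np.angle(F))
        guess = np.fft.ifft2(F)
        # real-space constraint: the object vanishes outside the support
        guess = np.where(support, guess, 0)
    return guess

# toy demo: recover a small rectangular object from its diffraction magnitudes
obj = np.zeros((32, 32))
obj[12:20, 14:18] = 1.0
mags = np.abs(np.fft.fft2(obj))
support = np.zeros((32, 32), dtype=bool)
support[10:22, 12:20] = True  # loose support region around the true object
rec = np.abs(error_reduction(mags, support))
```

In practice, heavy strain adds rapidly varying phases to the object, which is why the algorithm only succeeded here after the beam-induced relaxation the researchers observed.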

  • in

    Study sheds light on graphite’s lifespan in nuclear reactors

    Graphite is a key structural component in some of the world’s oldest nuclear reactors and many of the next-generation designs being built today. But it also densifies and swells in response to radiation — and the mechanism behind those changes has proven difficult to study. Now, MIT researchers and collaborators have uncovered a link between properties of graphite and how the material behaves in response to radiation. The findings could lead to more accurate, less destructive ways of predicting the lifespan of graphite materials used in reactors around the world.

    “We did some basic science to understand what leads to swelling and, eventually, failure in graphite structures,” says MIT Research Scientist Boris Khaykovich, senior author of the new study. “More research will be needed to put this into practice, but the paper proposes an attractive idea for industry: that you might not need to break hundreds of irradiated samples to understand their failure point.”

    Specifically, the study shows a connection between the size of the pores within graphite and the way the material swells and shrinks in volume, leading to degradation.

    “The lifetime of nuclear graphite is limited by irradiation-induced swelling,” says co-author and MIT Research Scientist Lance Snead. “Porosity is a controlling factor in this swelling, and while graphite has been extensively studied for nuclear applications since the Manhattan Project, we still do not have a clear understanding of the role of porosity in both mechanical properties and swelling. This work addresses that.”

    The open-access paper appears this week in Interdisciplinary Materials. It is co-authored by Khaykovich, Snead, MIT Research Scientist Sean Fayfar, former MIT research fellow Durgesh Rai, Stony Brook University Assistant Professor David Sprouster, Oak Ridge National Laboratory Staff Scientist Anne Campbell, and Argonne National Laboratory Physicist Jan Ilavsky.

    A long-studied, complex material

    Ever since 1942, when physicists and engineers built the world’s first nuclear reactor on a converted squash court at the University of Chicago, graphite has played a central role in the generation of nuclear energy. That first reactor, dubbed the Chicago Pile, was constructed from about 40,000 graphite blocks, many of which contained nuggets of uranium.

    Today graphite is a vital component of many operating nuclear reactors and is expected to play a central role in next-generation reactor designs like molten-salt and high-temperature gas reactors. That’s because graphite is a good neutron moderator, slowing down the neutrons released by nuclear fission so they are more likely to create fissions themselves and sustain a chain reaction.

    “The simplicity of graphite makes it valuable,” Khaykovich explains. “It’s made of carbon, and it’s relatively well-known how to make it cleanly. Graphite is a very mature technology. It’s simple, stable, and we know it works.”

    But graphite also has its complexities. “We call graphite a composite even though it’s made up of only carbon atoms,” Khaykovich says. “It includes ‘filler particles’ that are more crystalline, then there is a matrix called a ‘binder’ that is less crystalline, then there are pores that span in length from nanometers to many microns.”

    Each graphite grade has its own composite structure, but they all contain fractals, or shapes that look the same at different scales.

    Those complexities have made it hard to predict how graphite will respond to radiation in microscopic detail, although it’s been known for decades that when graphite is irradiated, it first densifies, reducing its volume by up to 10 percent, before swelling and cracking. The volume fluctuation is caused by changes to graphite’s porosity and lattice stress.

    “Graphite deteriorates under radiation, as any material does,” Khaykovich says. “So, on the one hand we have a material that’s extremely well-known, and on the other hand, we have a material that is immensely complicated, with a behavior that’s impossible to predict through computer simulations.”

    For the study, the researchers received irradiated graphite samples from Oak Ridge National Laboratory. Co-authors Campbell and Snead were involved in irradiating the samples some 20 years ago. The samples are a grade of graphite known as G347A.

    The research team used an analysis technique known as X-ray scattering, which uses the scattered intensity of an X-ray beam to analyze the properties of a material. Specifically, they looked at the distribution of sizes and surface areas of the sample’s pores, or what are known as the material’s fractal dimensions.

    “When you look at the scattering intensity, you see a large range of porosity,” Fayfar says. “Graphite has porosity over such large scales, and you have this fractal self-similarity: The pores in very small sizes look similar to pores spanning microns, so we used fractal models to relate different morphologies across length scales.”

    Fractal models had been used on graphite samples before, but not on irradiated samples to see how the material’s pore structures changed. The researchers found that when graphite is first exposed to radiation, its pores get filled as the material degrades.

    “But what was quite surprising to us is the [size distribution of the pores] turned back around,” Fayfar says. “We had this recovery process that matched our overall volume plots, which was quite odd. It seems like after graphite is irradiated for so long, it starts recovering. It’s sort of an annealing process where you create some new pores, then the pores smooth out and get slightly bigger. That was a big surprise.”

    The researchers found that the size distribution of the pores closely follows the volume change caused by radiation damage.

    “Finding a strong correlation between the [size distribution of pores] and the graphite’s volume changes is a new finding, and it helps connect to the failure of the material under irradiation,” Khaykovich says. “It’s important for people to know how graphite parts will fail when they are under stress and how failure probability changes under irradiation.”

    From research to reactors

    The researchers plan to study other graphite grades and explore further how pore sizes in irradiated graphite correlate with the probability of failure. They speculate that a statistical technique known as the Weibull distribution could be used to predict graphite’s time until failure. The Weibull distribution is already used to describe the probability of failure in ceramics and other porous materials like metal alloys.

    Khaykovich also speculated that the findings could contribute to our understanding of why materials densify and swell under irradiation.

    “There’s no quantitative model of densification that takes into account what’s happening at these tiny scales in graphite,” Khaykovich says. “Graphite irradiation densification reminds me of sand or sugar, where when you crush big pieces into smaller grains, they densify. For nuclear graphite, the crushing force is the energy that neutrons bring in, causing large pores to get filled with smaller, crushed pieces. But more energy and agitation create still more pores, and so graphite swells again. It’s not a perfect analogy, but I believe analogies bring progress for understanding these materials.”

    The researchers describe the paper as an important step toward informing graphite production and use in the nuclear reactors of the future.

    “Graphite has been studied for a very long time, and we’ve developed a lot of strong intuitions about how it will respond in different environments, but when you’re building a nuclear reactor, details matter,” Khaykovich says. “People want numbers. They need to know how much thermal conductivity will change, how much cracking and volume change will happen. If components are changing volume, at some point you need to take that into account.”

    This work was supported, in part, by the U.S. Department of Energy.
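The Weibull statistics mentioned here can be sketched with a short calculation. This is not the study’s model, and the strength numbers below are purely illustrative, not measured graphite data; it simply shows how the cumulative Weibull form turns a stress level into a failure probability:

```python
import math

def weibull_failure_probability(stress, scale, shape):
    """Cumulative Weibull failure probability: F = 1 - exp[-(stress/scale)^shape].

    scale: characteristic strength (about 63.2 percent of samples fail by this stress)
    shape: Weibull modulus; higher values mean less scatter in strength
    """
    return 1.0 - math.exp(-((stress / scale) ** shape))

# illustrative numbers only: characteristic strength 20 MPa, modulus 10.
# Well below the characteristic strength, failure is rare; near it,
# the failure probability rises steeply.
p_low  = weibull_failure_probability(10.0, scale=20.0, shape=10.0)   # ~0.1 percent
p_char = weibull_failure_probability(20.0, scale=20.0, shape=10.0)   # 1 - 1/e ~ 63 percent
```

In a radiation context, one could imagine the fitted scale and shape parameters drifting with dose as the pore structure evolves, which is the kind of correlation the researchers plan to explore.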

  • in

    Surprisingly diverse innovations led to dramatically cheaper solar panels

    The cost of solar panels has dropped by more than 99 percent since the 1970s, enabling widespread adoption of photovoltaic systems that convert sunlight into electricity. A new MIT study drills down on the specific innovations that enabled such dramatic cost reductions, revealing that technical advances across a web of diverse research efforts and industries played a pivotal role.

    The findings could help renewable energy companies make more effective R&D investment decisions and aid policymakers in identifying areas to prioritize to spur growth in manufacturing and deployment.

    The researchers’ modeling approach shows that key innovations often originated outside the solar sector, including advances in semiconductor fabrication, metallurgy, glass manufacturing, oil and gas drilling, construction processes, and even legal domains.

    “Our results show just how intricate the process of cost improvement is, and how much scientific and engineering advances, often at a very basic level, are at the heart of these cost reductions. A lot of knowledge was drawn from different domains and industries, and this network of knowledge is what makes these technologies improve,” says study senior author Jessika Trancik, a professor in MIT’s Institute for Data, Systems, and Society (IDSS).

    Trancik is joined on the paper by co-lead authors Goksin Kavlak, a former IDSS graduate student and postdoc who is now a senior energy associate at the Brattle Group; Magdalena Klemun, a former IDSS graduate student and postdoc who is now an assistant professor at Johns Hopkins University; former MIT postdoc Ajinkya Kamat; as well as Brittany Smith and Robert Margolis of the National Renewable Energy Laboratory. The research appears today in PLOS ONE.

    Identifying innovations

    This work builds on mathematical models that the researchers previously developed that tease out the effects of engineering technologies on the cost of photovoltaic (PV) modules and systems. In this study, the researchers aimed to dig even deeper into the scientific advances that drove those cost declines. They combined their quantitative cost model with a detailed, qualitative analysis of innovations that affected the costs of PV system materials, manufacturing steps, and deployment processes.

    “Our quantitative cost model guided the qualitative analysis, allowing us to look closely at innovations in areas that are hard to measure due to a lack of quantitative data,” Kavlak says.

    Building on earlier work identifying key cost drivers — such as the number of solar cells per module, wiring efficiency, and silicon wafer area — the researchers conducted a structured scan of the literature for innovations likely to affect these drivers. Next, they grouped these innovations to identify patterns, revealing clusters that reduced costs by improving materials or prefabricating components to streamline manufacturing and installation. Finally, the team tracked the industry origins and timing of each innovation, and consulted domain experts to zero in on the most significant innovations.

    All told, they identified 81 unique innovations that affected PV system costs since 1970, from improvements in antireflective coated glass to the implementation of fully online permitting interfaces.

    “With innovations, you can always go to a deeper level, down to things like raw materials processing techniques, so it was challenging to know when to stop. Having that quantitative model to ground our qualitative analysis really helped,” Trancik says.

    They chose to separate PV module costs from so-called balance-of-system (BOS) costs, which cover things like mounting systems, inverters, and wiring. PV modules, which are wired together to form solar panels, are mass-produced and can be exported, while many BOS components are designed, built, and sold at the local level.

    “By examining innovations both at the BOS level and within the modules, we identify the different types of innovations that have emerged in these two parts of PV technology,” Kavlak says.

    BOS costs depend more on soft technologies, nonphysical elements such as permitting procedures, which have contributed significantly less to PV’s past cost improvement compared to hardware innovations.

    “Often, it comes down to delays. Time is money, and if you have delays on construction sites and unpredictable processes, that affects these balance-of-system costs,” Trancik says.

    Innovations such as automated permitting software, which flags code-compliant systems for fast-track approval, show promise. Though not yet quantified in this study, the team’s framework could support future analysis of their economic impact and of similar innovations that streamline deployment processes.

    Interconnected industries

    The researchers found that innovations from the semiconductor, electronics, metallurgy, and petroleum industries played a major role in reducing both PV and BOS costs, but BOS costs were also affected by innovations in software engineering and electric utilities.

    Noninnovation factors, like efficiency gains from bulk purchasing and the accumulation of knowledge in the solar power industry, also reduced some cost variables.

    In addition, while most PV panel innovations originated in research organizations or industry, many BOS innovations were developed by city governments, U.S. states, or professional associations.

    “I knew there was a lot going on with this technology, but the diversity of all these fields and how closely linked they are, and the fact that we can clearly see that network through this analysis, was interesting,” Trancik says.

    “PV was very well-positioned to absorb innovations from other industries — thanks to the right timing, physical compatibility, and supportive policies to adapt innovations for PV applications,” Klemun adds.

    The analysis also reveals the role greater computing power could play in reducing BOS costs through advances like automated engineering review systems and remote site assessment software.

    “In terms of knowledge spillovers, what we’ve seen so far in PV may really just be the beginning,” Klemun says, pointing to the expanding role of robotics and AI-driven digital tools in driving future cost reductions and quality improvements.

    In addition to their qualitative analysis, the researchers demonstrated how this methodology could be used to estimate the quantitative impact of a particular innovation if one has the numerical data to plug into the cost equation. For instance, using information about material prices and manufacturing procedures, they estimate that wire sawing, a technique introduced in the 1980s, led to an overall PV system cost decrease of $5 per watt by reducing silicon losses and increasing throughput during fabrication.

    “Through this retrospective analysis, you learn something valuable for future strategy because you can see what worked and what didn’t work, and the models can also be applied prospectively. It is also useful to know what adjacent sectors may help support improvement in a particular technology,” Trancik says.

    Moving forward, the researchers plan to apply this methodology to a wide range of technologies, including other renewable energy systems. They also want to further study soft technology to identify innovations or processes that could accelerate cost reductions.

    “Although the process of technological innovation may seem like a black box, we’ve shown that you can study it just like any other phenomenon,” Trancik says.

    This research is funded, in part, by the U.S. Department of Energy Solar Energy Technologies Office.
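The kind of driver-level cost accounting described above can be sketched with a toy calculation. This is not the study’s cost equation; it is a simplified, hypothetical decomposition showing how an innovation like wire sawing, by cutting wafer thickness and kerf (sawing) loss, lowers the silicon cost per watt. All input numbers are illustrative:

```python
def silicon_cost_per_watt(si_price_kg, thickness_um, kerf_um, efficiency,
                          irradiance=1000.0):
    """Toy estimate of silicon material cost per watt for a PV wafer.

    si_price_kg: polysilicon price in $/kg (hypothetical)
    thickness_um, kerf_um: wafer thickness and sawing (kerf) loss, microns
    efficiency: cell efficiency as a fraction
    irradiance: standard test condition irradiance, W/m^2
    Silicon consumed per unit area includes the material lost as kerf.
    """
    DENSITY = 2329.0  # kg/m^3, crystalline silicon
    consumed_m = (thickness_um + kerf_um) * 1e-6   # meters of Si per wafer area
    mass_per_m2 = consumed_m * DENSITY             # kg of Si per m^2 of wafer
    watts_per_m2 = efficiency * irradiance         # electrical output per m^2
    return si_price_kg * mass_per_m2 / watts_per_m2

# hypothetical before/after: thinner wafers, lower kerf loss, better cells
old = silicon_cost_per_watt(si_price_kg=50.0, thickness_um=400.0,
                            kerf_um=250.0, efficiency=0.12)
new = silicon_cost_per_watt(si_price_kg=50.0, thickness_um=180.0,
                            kerf_um=120.0, efficiency=0.20)
```

Chaining many such driver-level changes, each traced back to a specific innovation, is the spirit of the quantitative model the qualitative analysis was built on.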

  • in

    Theory-guided strategy expands the scope of measurable quantum interactions

    A new theory-guided framework could help scientists probe the properties of new semiconductors for next-generation microelectronic devices, or discover materials that boost the performance of quantum computers.

    Research to develop new or better materials typically involves investigating properties that can be reliably measured with existing lab equipment, but this represents just a fraction of the properties that scientists could probe in principle. Some properties remain effectively “invisible” because they are too difficult to capture directly with existing methods.

    Take the electron-phonon interaction: this property plays a critical role in a material’s electrical, thermal, optical, and superconducting properties, but directly capturing it using existing techniques is notoriously challenging.

    Now, MIT researchers have proposed a theoretically justified approach that could turn this challenge into an opportunity. Their method reinterprets an often-overlooked interference effect in neutron scattering as a potential direct probe of electron-phonon coupling strength.

    The scattering process involves two interaction effects in the material. The researchers show that, by deliberately designing their experiment to leverage the interference between these two interactions, they can capture the strength of a material’s electron-phonon interaction.

    The researchers’ theory-informed methodology could be used to shape the design of future experiments, opening the door to measuring quantities that were previously out of reach.

    “Rather than discovering new spectroscopy techniques by pure accident, we can use theory to justify and inform the design of our experiments and our physical equipment,” says Mingda Li, the Class of 1947 Career Development Professor and an associate professor of nuclear science and engineering, and senior author of a paper on this experimental method.

    Li is joined on the paper by co-lead authors Chuliang Fu, an MIT postdoc; Phum Siriviboon and Artittaya Boonkird, both MIT graduate students; as well as others at MIT, the National Institute of Standards and Technology, the University of California at Riverside, Michigan State University, and Oak Ridge National Laboratory. The research appears this week in Materials Today Physics.

    Investigating interference

    Neutron scattering is a powerful measurement technique that involves aiming a beam of neutrons at a material and studying how the neutrons are scattered after they strike it. The method is ideal for measuring a material’s atomic structure and magnetic properties.

    When neutrons collide with the material sample, they interact with it through two different mechanisms, creating a nuclear interaction and a magnetic interaction. These interactions can interfere with each other.

    “The scientific community has known about this interference effect for a long time, but researchers tend to view it as a complication that can obscure measurement signals. So it hasn’t received much focused attention,” Fu says.

    The team and their collaborators took a conceptual “leap of faith” and decided to explore this oft-overlooked interference effect more deeply. They flipped the traditional materials research approach on its head by starting with a multifaceted theoretical analysis, exploring what happens inside a material when the nuclear interaction and the magnetic interaction interfere with each other. Their analysis revealed that this interference pattern is directly proportional to the strength of the material’s electron-phonon interaction.

    “This makes the interference effect a probe we can use to detect this interaction,” explains Siriviboon.

    Electron-phonon interactions play a role in a wide range of material properties. They affect how heat flows through a material, impact a material’s ability to absorb and emit light, and can even lead to superconductivity. But the complexity of these interactions makes them hard to measure directly with existing experimental techniques, so researchers often rely on less precise, indirect methods to capture them. Leveraging this interference effect, however, enables direct measurement of the electron-phonon interaction, a major advantage over other approaches.

    “Being able to directly measure the electron-phonon interaction opens the door to many new possibilities,” says Boonkird.

    Rethinking materials research

    Based on their theoretical insights, the researchers designed an experimental setup to demonstrate their approach. Since the available equipment wasn’t powerful enough for this type of neutron scattering experiment, they were only able to capture a weak electron-phonon interaction signal, but the results were clear enough to support their theory.

    “These results justify the need for a new facility where the equipment might be 100 to 1,000 times more powerful, enabling scientists to clearly resolve the signal and measure the interaction,” adds Landry.

    With improved neutron scattering facilities, like those proposed for the upcoming Second Target Station at Oak Ridge National Laboratory, this experimental method could be an effective technique for measuring many crucial material properties. For instance, by helping scientists identify and harness better semiconductors, this approach could enable more energy-efficient appliances, faster wireless communication devices, and more reliable medical equipment like pacemakers and MRI scanners.

    Ultimately, the team sees this work as a broader message about the need to rethink the materials research process.

    “Using theoretical insights to design experimental setups in advance can help us redefine the properties we can measure,” Fu says.

    To that end, the team and their collaborators are currently exploring other types of interactions they could leverage to investigate additional material properties.

    “This is a very interesting paper,” says Jon Taylor, director of the neutron scattering division at Oak Ridge National Laboratory, who was not involved with this research. “It would be interesting to have a neutron scattering method that is directly sensitive to charge lattice interactions or more generally electronic effects that were not just magnetic moments. It seems that such an effect is expectedly rather small, so facilities like STS could really help develop that fundamental understanding of the interaction and also leverage such effects routinely for research.”

    This work is funded, in part, by the U.S. Department of Energy and the National Science Foundation.
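The basic arithmetic behind isolating an interference term can be shown with a toy calculation. This is only the generic amplitude algebra, not the team’s analysis: when two complex scattering amplitudes add, the measured intensity contains a cross term, and comparing measurement channels in which one amplitude flips sign cancels everything except that cross term. The amplitudes below are random placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical complex scattering amplitudes at a few momentum transfers
N = rng.normal(size=8) + 1j * rng.normal(size=8)   # "nuclear" amplitude
M = rng.normal(size=8) + 1j * rng.normal(size=8)   # "magnetic" amplitude

# the two channels differ only in the sign of the magnetic amplitude, so
# |N + M|^2 and |N - M|^2 share the |N|^2 and |M|^2 terms but not the cross term
I_plus  = np.abs(N + M) ** 2
I_minus = np.abs(N - M) ** 2

# half the channel difference isolates the interference term 2*Re(N* M),
# which in the researchers' theory carries the electron-phonon information
cross = (I_plus - I_minus) / 2
expected = 2 * np.real(np.conj(N) * M)
```

The real experiment is far more involved, since the interference signal is weak and rides on top of much larger direct terms, which is why the team argues for more powerful neutron sources.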