More stories

  • Fast-tracking fusion energy’s arrival with AI and accessibility

    As the impacts of climate change continue to grow, so does interest in fusion’s potential as a clean energy source. While fusion reactions have been studied in laboratories since the 1930s, there are still many critical questions scientists must answer to make fusion power a reality, and time is of the essence. As part of its strategy to accelerate fusion energy’s arrival and reach carbon neutrality by 2050, the U.S. Department of Energy (DoE) has announced new funding for a project led by researchers at MIT’s Plasma Science and Fusion Center (PSFC) and four collaborating institutions.

    Cristina Rea, a research scientist and group leader at the PSFC, will serve as the principal investigator for the newly funded three-year collaboration to pilot the integration of fusion data into a system that can be read by AI-powered tools. The PSFC, together with scientists from the College of William and Mary, the University of Wisconsin at Madison, Auburn University, and the nonprofit HDF Group, plans to create a holistic fusion data platform, the elements of which could offer unprecedented access for researchers, especially underrepresented students. The project aims to encourage diverse participation in fusion and data science, both in academia and the workforce, through outreach programs led by the group’s co-investigators, of whom four out of five are women.

    The DoE’s award, part of a $29 million funding package for seven projects across 19 institutions, will support the group’s efforts to distribute data produced by fusion devices like the PSFC’s Alcator C-Mod, a donut-shaped “tokamak” that utilized powerful magnets to control and confine fusion reactions. Alcator C-Mod operated from 1991 to 2016 and its data are still being studied, thanks in part to the PSFC’s commitment to the free exchange of knowledge.

    Currently, there are nearly 50 public experimental magnetic confinement-type fusion devices; however, both historical and current data from these devices can be difficult to access. Some fusion databases require signing user agreements, and not all data are catalogued and organized the same way. Moreover, it can be difficult to leverage machine learning, a class of AI tools, for data analysis and to enable scientific discovery without time-consuming data reorganization. The result is fewer scientists working on fusion, greater barriers to discovery, and a bottleneck in harnessing AI to accelerate progress.
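The data-reorganization barrier described above can be pictured with a toy normalization step: two archives catalogue the same kind of shot metadata under different field names, and a small mapping brings both into one shared schema. All field names and values here are hypothetical illustrations, not the project's actual schema.

```python
# Toy sketch of the reorganization a unified fusion data platform would
# automate: rename each archive's local fields to one shared schema so that
# machine learning tools can query every record the same way.

def normalize_record(record: dict, field_map: dict) -> dict:
    """Rename an archive's fields to a shared schema, dropping unmapped keys."""
    return {shared: record[local] for local, shared in field_map.items() if local in record}

# Two hypothetical archives describing similar experiments differently.
archive_a = {"shot": 1160930033, "Ip_MA": 1.1, "machine": "C-Mod"}
archive_b = {"shot_number": 30421, "plasma_current": 0.9, "device": "DIII-D"}

MAP_A = {"shot": "shot_id", "Ip_MA": "plasma_current_MA", "machine": "device"}
MAP_B = {"shot_number": "shot_id", "plasma_current": "plasma_current_MA", "device": "device"}

unified = [normalize_record(archive_a, MAP_A), normalize_record(archive_b, MAP_B)]
# Every record now answers the same queries with the same keys.
assert all(r.keys() == {"shot_id", "plasma_current_MA", "device"} for r in unified)
```

In practice the platform would handle signals, units, and provenance as well as metadata, but the principle is the same: one schema in front of many heterogeneous archives.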

    The project’s proposed data platform addresses technical barriers by being FAIR — Findable, Accessible, Interoperable, Reusable — and by adhering to UNESCO’s Open Science (OS) recommendations to improve the transparency and inclusivity of science; all of the researchers’ deliverables will adhere to FAIR and OS principles, as required by the DoE. The platform’s databases will be built using MDSplusML, an upgraded version of the MDSplus open-source software developed by PSFC researchers in the 1980s to catalogue the results of Alcator C-Mod’s experiments. Today, nearly 40 fusion research institutes use MDSplus to store and provide external access to their fusion data. The release of MDSplusML aims to continue that legacy of open collaboration.

    The researchers intend to address barriers to participation for women and disadvantaged groups not only by improving general access to fusion data, but also through a subsidized summer school that will focus on topics at the intersection of fusion and machine learning, which will be held at William and Mary for the next three years.

    Of the importance of their research, Rea says, “This project is about responding to the fusion community’s needs and setting ourselves up for success. Scientific advancements in fusion are enabled via multidisciplinary collaboration and cross-pollination, so accessibility is absolutely essential. I think we all understand now that diverse communities have more diverse ideas, and they allow faster problem-solving.”

    The collaboration’s work also aligns with vital areas of research identified in the International Atomic Energy Agency’s “AI for Fusion” Coordinated Research Project (CRP). Rea was selected as the technical coordinator for the IAEA’s CRP emphasizing community engagement and knowledge access to accelerate fusion research and development. In a letter of support written for the group’s proposed project, the IAEA stated that, “the work [the researchers] will carry out […] will be beneficial not only to our CRP but also to the international fusion community at large.”

    PSFC Director and Hitachi America Professor of Engineering Dennis Whyte adds, “I am thrilled to see PSFC and our collaborators be at the forefront of applying new AI tools while simultaneously encouraging and enabling extraction of critical data from our experiments.”

    “Having the opportunity to lead such an important project is extremely meaningful, and I feel a responsibility to show that women are leaders in STEM,” says Rea. “We have an incredible team, strongly motivated to improve our fusion ecosystem and to contribute to making fusion energy a reality.”

  • New clean air and water labs to bring together researchers, policymakers to find climate solutions

    MIT’s Abdul Latif Jameel Poverty Action Lab (J-PAL) is launching the Clean Air and Water Labs, with support from Community Jameel, to generate evidence-based solutions aimed at increasing access to clean air and water.

    Led by J-PAL’s Africa, Middle East and North Africa (MENA), and South Asia regional offices, the labs will partner with government agencies to bring together researchers and policymakers in areas where impactful clean air and water solutions are most urgently needed.

    Together, the labs aim to improve clean air and water access by informing the scaling of evidence-based policies and decisions of city, state, and national governments that serve nearly 260 million people combined.

    The Clean Air and Water Labs expand the work of J-PAL’s King Climate Action Initiative, building on the foundational support of King Philanthropies, which significantly expanded J-PAL’s work at the nexus of climate change and poverty alleviation worldwide. 

    Air pollution, water scarcity and the need for evidence 

    Africa, MENA, and South Asia are on the front lines of global air and water crises. 

    “There is no time to waste investing in solutions that do not achieve their desired effects,” says Iqbal Dhaliwal, global executive director of J-PAL. “By co-generating rigorous real-world evidence with researchers, policymakers can have the information they need to dedicate resources to scaling up solutions that have been shown to be effective.”

    In India, about 75 percent of households did not have drinking water on premises in 2018. In MENA, nearly 90 percent of children live in areas facing high or extreme water stress. Across Africa, almost 400 million people lack access to safe drinking water. 

    Simultaneously, air pollution is one of the greatest threats to human health globally. In India, extraordinary levels of air pollution are shortening the average life expectancy by five years. In Africa, rising indoor and ambient air pollution contributed to 1.1 million premature deaths in 2019. 

    There is increasing urgency to find high-impact and cost-effective solutions to the worsening threats to human health and resources caused by climate change. However, data and evidence on potential solutions are limited.

    Fostering collaboration to generate policy-relevant evidence 

    The Clean Air and Water Labs will foster deep collaboration between government stakeholders, J-PAL regional offices, and researchers in the J-PAL network. 

    Through the labs, J-PAL will work with policymakers to:

    • co-diagnose the most pressing air and water challenges and opportunities for policy innovation;
    • expand policymakers’ access to and use of high-quality air and water data;
    • co-design potential solutions informed by existing evidence;
    • co-generate evidence on promising solutions through rigorous evaluation, leveraging existing and new data sources; and
    • support scaling of air and water policies and programs that are found to be effective through evaluation.

    A research and scaling fund for each lab will prioritize resources for co-generated pilot studies, randomized evaluations, and scaling projects.

    The labs will also collaborate with C40 Cities, a global network of mayors of the world’s leading cities that are united in action to confront the climate crisis, to share policy-relevant evidence and identify potential new connections and research opportunities within India and across Africa.

    This model aims to strengthen the use of evidence in decision-making to ensure solutions are highly effective and to guide research to answer policymakers’ most urgent questions. J-PAL Africa, MENA, and South Asia’s strong on-the-ground presence will further bridge research and policy work by anchoring activities within local contexts. 

    “Communities across the world continue to face challenges in accessing clean air and water, a threat to human safety that has only been exacerbated by the climate crisis, along with rising temperatures and other hazards,” says George Richards, director of Community Jameel. “Through our collaboration with J-PAL and C40 in creating climate policy labs embedded in city, state, and national governments in Africa and South Asia, we are committed to innovative and science-based approaches that can help hundreds of millions of people enjoy healthier lives.”

    J-PAL Africa, MENA, and South Asia will formally launch Clean Air and Water Labs with government partners over the coming months. J-PAL is housed in the MIT Department of Economics, within the School of Humanities, Arts, and Social Sciences.

  • Tiny magnetic beads produce an optical signal that could be used to quickly detect pathogens

    Getting results from a blood test can take anywhere from one day to a week, depending on what a test is targeting. The same goes for tests of water pollution and food contamination. And in most cases, the wait time has to do with time-consuming steps in sample processing and analysis.

    Now, MIT engineers have identified a new optical signature in a widely used class of magnetic beads, which could be used to quickly detect contaminants in a variety of diagnostic tests. For example, the team showed the signature could be used to detect signs of the food contaminant Salmonella.

    The so-called Dynabeads are microscopic magnetic beads that can be coated with antibodies that bind to target molecules, such as a specific pathogen. Dynabeads are typically used in experiments in which they are mixed into solutions to capture molecules of interest. But from there, scientists have to take additional, time-consuming steps to confirm that the molecules are indeed present and bound to the beads.

    The MIT team found a faster way to confirm the presence of Dynabead-bound pathogens using optics, specifically Raman spectroscopy. This optical technique identifies specific molecules based on their “Raman signature,” or the unique way in which a molecule scatters light.

    The researchers found that Dynabeads have an unusually strong Raman signature that can be easily detected, much like a fluorescent tag. This signature, they found, can act as a “reporter.” If detected, the signal can serve as a quick confirmation, within less than one second, that a target pathogen is indeed present in a given sample. The team is currently working to develop a portable device for quickly detecting a range of bacterial pathogens, and their results will appear in an Emerging Investigators special issue of the Journal of Raman Spectroscopy.

    “This technique would be useful in a situation where a doctor is trying to narrow down the source of an infection in order to better inform antibiotic prescription, as well as for the detection of known pathogens in food and water,” says study co-author Marissa McDonald, a graduate student in the Harvard-MIT Program in Health Sciences and Technology. “Additionally, we hope this approach will eventually lead to expanded access to advanced diagnostics in resource-limited environments.”

    Study co-authors at MIT include Postdoctoral Associate Jongwan Lee; Visiting Scholar Nikiwe Mhlanga; Research Scientist Jeon Woong Kang; Tata Professor Rohit Karnik, who is also the associate director of the Abdul Latif Jameel Water and Food Systems Lab; and Assistant Professor Loza Tadesse of the Department of Mechanical Engineering.

    Oil and water

    Looking for diseased cells and pathogens in fluid samples is an exercise in patience.

    “It’s kind of a needle-in-a-haystack problem,” Tadesse says.

    Target cells and pathogens are present in such small numbers that they must be cultured in controlled environments until their populations are large enough, then stained and studied under a microscope. The entire process can take several days to a week to yield a confident positive or negative result.

    Karnik’s and Tadesse’s labs have independently been developing techniques to speed up various parts of the pathogen testing process and make the process portable, using Dynabeads.

    Dynabeads are commercially available microscopic beads made from a magnetic iron core and a polymer shell that can be coated with antibodies. The surface antibodies act as hooks to bind specific target molecules. When mixed with a fluid, such as a vial of blood or water, any molecules present will glom onto the Dynabeads. Using a magnet, scientists can gently coax the beads to the bottom of a vial and filter them out of a solution. Karnik’s lab is investigating ways to then further separate the beads into those that are bound to a target molecule, and those that are not. “Still, the challenge is, how do we know that we have what we’re looking for?” Tadesse says.

    The beads themselves are not visible by eye. That’s where Tadesse’s work comes in. Her lab uses Raman spectroscopy as a way to “fingerprint” pathogens. She has found that different cell types scatter light in unique ways that can be used as a signature to identify them.

    In the team’s new work, she and her colleagues found that Dynabeads also have a unique and strong Raman signature that can act as a surprisingly clear beacon.

    “We were initially seeking to identify the signatures of bacteria, but the signature of the Dynabeads was actually very strong,” Tadesse says. “We realized this signal could be a means of reporting to you whether you have that bacteria or not.”

    Testing beacon

    As a practical demonstration, the researchers mixed Dynabeads into vials of water contaminated with Salmonella. They then magnetically isolated these beads onto microscope slides and measured the way light scattered through the fluid when exposed to laser light. Within half a second, they quickly detected the Dynabeads’ Raman signature — a confirmation that bound Dynabeads, and by inference, Salmonella, were present in the fluid.
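The decision step in a signature-based test like this can be sketched as a simple spectral comparison: a measured spectrum is scored against a known reference signature, and a score above a threshold counts as a detection. The synthetic spectra, peak positions, noise levels, and the 0.8 threshold below are all illustrative assumptions, not the study's actual data or method.

```python
import numpy as np

# Minimal sketch of signature matching: build a synthetic "bead" reference
# spectrum from Gaussian peaks, then score measured spectra against it using
# cosine similarity. A spiked sample scores high; a blank scores low.

def gaussian_peaks(x, centers, width=8.0):
    """Sum of unit-height Gaussian peaks at the given centers."""
    return sum(np.exp(-((x - c) ** 2) / (2 * width ** 2)) for c in centers)

def contains_signature(measured, reference, threshold=0.8):
    """Cosine similarity between spectra; True if the signature dominates."""
    sim = measured @ reference / (np.linalg.norm(measured) * np.linalg.norm(reference))
    return sim > threshold

rng = np.random.default_rng(0)
x = np.linspace(200, 1800, 800)                    # Raman shift axis (1/cm), illustrative
reference = gaussian_peaks(x, [650, 1100, 1450])   # hypothetical bead signature

positive = reference + 0.05 * rng.standard_normal(x.size)   # beads present, some noise
blank = 0.05 * np.abs(rng.standard_normal(x.size))          # noise only

assert contains_signature(positive, reference)       # detection
assert not contains_signature(blank, reference)      # no detection
```

Because the comparison is a single vector operation, it runs in well under a second, which is consistent with the sub-second confirmation the team reports, though their instrument and analysis pipeline are of course more involved.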

    “This is something that can be used to rapidly give a positive or negative answer: Is there a contaminant or not?” Tadesse says. “Because even a handful of pathogens can cause clinical symptoms.”

    The team’s new technique is significantly faster than conventional methods and uses elements that could be adapted into smaller, more portable forms — a goal that the researchers are currently working toward. The approach is also highly versatile.

    “Salmonella is the proof of concept,” Tadesse says. “You could purchase Dynabeads with E. coli antibodies, and the same thing would happen: It would bind to the bacteria, and we’d be able to detect the Dynabead signature because the signal is super strong.”

    The team is particularly keen to apply the test to conditions such as sepsis, where time is of the essence, and where pathogens that trigger the condition are not rapidly detected using conventional lab tests.

    “There are a lot of cases, like in sepsis, where pathogenic cells cannot always be grown on a plate,” says Lee, a member of Karnik’s lab. “In that case, our technique could rapidly detect these pathogens.”

    This research was supported, in part, by the MIT Laser Biomedical Research Center, the National Cancer Institute, and the Abdul Latif Jameel Water and Food Systems Lab at MIT.

  • Supporting sustainability, digital health, and the future of work

    The MIT and Accenture Convergence Initiative for Industry and Technology has selected three new research projects that will receive support from the initiative. The research projects aim to accelerate progress in meeting complex societal needs through new business convergence insights in technology and innovation.

    Established in MIT’s School of Engineering and now in its third year, the MIT and Accenture Convergence Initiative is furthering its mission to bring together technological experts from across business and academia to share insights and learn from one another. Recently, Thomas W. Malone, the Patrick J. McGovern (1959) Professor of Management, joined the initiative as its first-ever faculty lead. The research projects relate to three of the initiative’s key focus areas: sustainability, digital health, and the future of work.

    “The solutions these research teams are developing have the potential to have tremendous impact,” says Anantha Chandrakasan, dean of the School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science. “They embody the initiative’s focus on advancing data-driven research that addresses technology and industry convergence.”

    “The convergence of science and technology driven by advancements in generative AI, digital twins, quantum computing, and other technologies makes this an especially exciting time for Accenture and MIT to be undertaking this joint research,” says Kenneth Munie, senior managing director at Accenture Strategy, Life Sciences. “Our three new research projects focusing on sustainability, digital health, and the future of work have the potential to help guide and shape future innovations that will benefit the way we work and live.”

    The MIT and Accenture Convergence Initiative charter project researchers are described below.

    Accelerating the journey to net zero with industrial clusters

    Jessika Trancik is a professor at the Institute for Data, Systems, and Society (IDSS). Trancik’s research examines the dynamic costs, performance, and environmental impacts of energy systems to inform climate policy and accelerate beneficial and equitable technology innovation. Trancik’s project aims to identify how industrial clusters can enable companies to derive greater value from decarbonization, potentially making companies more willing to invest in the clean energy transition.

    To meet the ambitious climate goals that have been set by countries around the world, rising greenhouse gas emissions trends must be rapidly reversed. Industrial clusters — geographically co-located or otherwise-aligned groups of companies representing one or more industries — account for a significant portion of greenhouse gas emissions globally. With major energy consumers “clustered” in proximity, industrial clusters provide a potential platform to scale low-carbon solutions by enabling the aggregation of demand and the coordinated investment in physical energy supply infrastructure.

    In addition to Trancik, the research team working on this project will include Aliza Khurram, a postdoc in IDSS; Micah Ziegler, an IDSS research scientist; Melissa Stark, global energy transition services lead at Accenture; Laura Sanderfer, strategy consulting manager at Accenture; and Maria De Miguel, strategy senior analyst at Accenture.

    Eliminating childhood obesity

    Anette “Peko” Hosoi is the Neil and Jane Pappalardo Professor of Mechanical Engineering. A common theme in her work is the fundamental study of shape, kinematic, and rheological optimization of biological systems with applications to the emergent field of soft robotics. Her project will use both data from existing studies and synthetic data to create a return-on-investment (ROI) calculator for childhood obesity interventions so that companies can identify earlier returns on their investment beyond reduced health-care costs.

    Childhood obesity is too prevalent to be solved by a single company, industry, drug, application, or program. In addition to the physical and emotional impact on children, society bears a cost through excess health care spending, lost workforce productivity, poor school performance, and increased family trauma. Meaningful solutions require multiple organizations, representing different parts of society, working together with a common understanding of the problem, the economic benefits, and the return on investment. ROI is particularly difficult to defend for any single organization because investment and return can be separated by many years and involve asymmetric investments, returns, and allocation of risk. Hosoi’s project will consider the incentives for a particular entity to invest in programs in order to reduce childhood obesity.
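The timing asymmetry the paragraph above describes can be made concrete with a minimal discounted-cash-flow sketch: an up-front investment whose returns arrive only years later looks much worse to any single organization once those future returns are discounted. The dollar amounts, ten-year horizon, and discount rate below are hypothetical, not figures from the project.

```python
# Minimal sketch of why ROI is hard to defend for a single organization when
# investment and return are separated by many years: discounting future
# returns can flip a project from positive to negative.

def npv(cashflows, rate):
    """Net present value of yearly cashflows[t] at discount rate `rate`."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Invest 1.0 now; a health benefit worth 0.25/year begins only in year 6.
cashflows = [-1.0] + [0.0] * 5 + [0.25] * 5

undiscounted = npv(cashflows, 0.0)    # total return ignoring timing: +0.25
discounted = npv(cashflows, 0.08)     # at 8 percent, the same project is negative

assert undiscounted > 0 > discounted
```

An ROI calculator of the kind Hosoi's project envisions would, among other things, surface earlier and broader returns (productivity, school performance) so that the discounted picture improves for each participating organization.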

    Hosoi will be joined by graduate students Pragya Neupane and Rachael Kha, both of IDSS, as well as a team from Accenture that includes Kenneth Munie, senior managing director at Accenture Strategy, Life Sciences; Kaveh Safavi, senior managing director in Accenture Health Industry; and Elizabeth Naik, global health and public service research lead.

    Generating innovative organizational configurations and algorithms for dealing with the problem of post-pandemic employment

    Thomas Malone is the Patrick J. McGovern (1959) Professor of Management at the MIT Sloan School of Management and the founding director of the MIT Center for Collective Intelligence. His research focuses on how new organizations can be designed to take advantage of the possibilities provided by information technology. Malone will be joined in this project by John Horton, the Richard S. Leghorn (1939) Career Development Professor at the MIT Sloan School of Management, whose research focuses on the intersection of labor economics, market design, and information systems. Malone and Horton’s project will look to reshape the future of work with the help of lessons learned in the wake of the pandemic.

    The Covid-19 pandemic has been a major disrupter of work and employment, and it is not at all obvious how governments, businesses, and other organizations should manage the transition to a desirable state of employment as the pandemic recedes. Using natural language processing algorithms such as GPT-4, this project will look to identify new ways that companies can use AI to better match applicants to necessary jobs, create new types of jobs, assess skill training needed, and identify interventions to help include women and other groups whose employment was disproportionately affected by the pandemic.
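The matching idea above can be illustrated with a deliberately simple bag-of-words scorer: represent an applicant's skills and each job posting as word counts, and rank jobs by cosine similarity. This toy stand-in ignores everything that makes the project's language-model approach powerful; the job postings and skill profile are invented for illustration.

```python
from collections import Counter
from math import sqrt

# Toy applicant-to-job matcher: bag-of-words vectors scored by cosine
# similarity. The real project envisions large language models (e.g., GPT-4)
# in place of this word-overlap heuristic.

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def best_match(applicant: str, jobs: dict) -> str:
    """Return the job title whose description best overlaps the applicant's skills."""
    profile = Counter(applicant.lower().split())
    return max(jobs, key=lambda title: cosine(profile, Counter(jobs[title].lower().split())))

jobs = {
    "data analyst": "sql python statistics dashboards reporting",
    "ux designer": "figma prototyping user research interviews",
    "ml engineer": "python pytorch deployment monitoring statistics",
}

print(best_match("python statistics sql reporting", jobs))  # "data analyst"
```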

    In addition to Malone and Horton, the research team will include Rob Laubacher, associate director and research scientist at the MIT Center for Collective Intelligence, and Kathleen Kennedy, executive director at the MIT Center for Collective Intelligence and senior director at MIT Horizon. The team will also include Nitu Nivedita, managing director of artificial intelligence at Accenture, and Thomas Hancock, data science senior manager at Accenture.

  • Making aviation fuel from biomass

    In 2021, nearly a quarter of the world’s carbon dioxide emissions came from the transportation sector, with aviation being a significant contributor. While the growing use of electric vehicles is helping to clean up ground transportation, today’s batteries can’t compete with fossil fuel-derived liquid hydrocarbons in terms of energy delivered per pound of weight — a major concern when it comes to flying. Meanwhile, based on projected growth in travel demand, consumption of jet fuel is projected to double between now and 2050 — the year by which the international aviation industry has pledged to be carbon neutral.

    Many groups have targeted a 100 percent sustainable hydrocarbon fuel for aircraft, but without much success. Part of the challenge is that aviation fuels are so tightly regulated. “This is a subclass of fuels that has very specific requirements in terms of the chemistry and the physical properties of the fuel, because you can’t risk something going wrong in an airplane engine,” says Yuriy Román-Leshkov, the Robert T. Haslam Professor of Chemical Engineering. “If you’re flying at 30,000 feet, it’s very cold outside, and you don’t want the fuel to thicken or freeze. That’s why the formulation is very specific.”

    Aviation fuel is a combination of two large classes of chemical compounds. Some 75 to 90 percent of it is made up of “aliphatic” molecules, which consist of long chains of carbon atoms linked together. “This is similar to what we would find in diesel fuels, so it’s a classic hydrocarbon that is out there,” explains Román-Leshkov. The remaining 10 to 25 percent consists of “aromatic” molecules, each of which includes at least one ring made up of six connected carbon atoms.

    In most transportation fuels, aromatic hydrocarbons are viewed as a source of pollution, so they’re removed as much as possible. However, in aviation fuels, some aromatic molecules must remain because they set the necessary physical and combustion properties of the overall mixture. They also perform one more critical task: They ensure that seals between various components in the aircraft’s fuel system are tight. “The aromatics get absorbed by the plastic seals and make them swell,” explains Román-Leshkov. “If for some reason the fuel changes, so can the seals, and that’s very dangerous.”

    As a result, aromatics are a necessary component — but they’re also a stumbling block in the move to create sustainable aviation fuels, or SAFs. Companies know how to make the aliphatic fraction from inedible parts of plants and other renewables, but they haven’t yet developed an approved method of generating the aromatic fraction from sustainable sources. As a result, there’s a “blending wall,” explains Román-Leshkov. “Since we need that aromatic content — regardless of its source — there will always be a limit on how much of the sustainable aliphatic hydrocarbons we can use without changing the properties of the mixture.” He notes a similar blending wall with gasoline. “We have a lot of ethanol, but we can’t add more than 10 percent without changing the properties of the gasoline. In fact, current engines can’t handle even 15 percent ethanol without modification.”
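The blending wall above is just arithmetic: if a sustainable blendstock contains no aromatics and the final fuel must keep some minimum aromatic fraction, the blendstock's share is capped. The 18 percent aromatic content for conventional fuel and the 8 percent floor used below are illustrative assumptions, not certification values.

```python
# Back-of-envelope blending wall: mixing a fraction f of aromatic-free
# blendstock into conventional fuel with aromatic fraction a leaves
# (1 - f) * a aromatics in the blend, which must stay at or above a minimum m.

def max_blend_fraction(aromatics_in_conventional: float, required_aromatics: float) -> float:
    """Largest aromatic-free blendstock fraction f satisfying (1 - f) * a >= m."""
    return 1.0 - required_aromatics / aromatics_in_conventional

cap = max_blend_fraction(0.18, 0.08)
print(f"max sustainable blendstock: {cap:.0%}")  # about 56% with these assumed numbers
```

This is why a sustainable source of the aromatic fraction itself, and not just the aliphatics, is needed to reach a 100 percent sustainable fuel.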

    No shortage of renewable source material — or attempts to convert it

    For the past five years, understanding and solving the SAF problem has been the goal of research by Román-Leshkov and his MIT team — Michael L. Stone PhD ’21, Matthew S. Webber, and others — as well as their collaborators at Washington State University, the National Renewable Energy Laboratory (NREL), and the Pacific Northwest National Laboratory. Their work has focused on lignin, a tough material that gives plants structural support and protection against microbes and fungi. About 30 percent of the carbon in biomass is in lignin, yet when ethanol is generated from biomass, the lignin is left behind as a waste product.

    Despite valiant efforts, no one has found an economically viable, scalable way to turn lignin into useful products, including the aromatic molecules needed to make jet fuel 100 percent sustainable. Why not? As Román-Leshkov says, “It’s because of its chemical recalcitrance.” It’s difficult to make it chemically react in useful ways. As a result, every year millions of tons of waste lignin are burned as a low-grade fuel, used as fertilizer, or simply thrown away.

    Understanding the problem requires understanding what’s happening at the atomic level. A single lignin molecule — the starting point of the challenge — is a big “macromolecule” made up of a network of many aromatic rings connected by oxygen and hydrogen atoms. Put simply, the key to converting lignin into the aromatic fraction of SAF is to break that macromolecule into smaller pieces while in the process getting rid of all of the oxygen atoms.

    In general, most industrial processes begin with a chemical reaction that prevents the subsequent upgrading of lignin: As the lignin is extracted from the biomass, the aromatic molecules in it react with one another, linking together to form strong networks that won’t react further. As a result, the lignin is no longer useful for making aviation fuels.

    To avoid that outcome, Román-Leshkov and his team utilize another approach: They use a catalyst to induce a chemical reaction that wouldn’t normally occur during extraction. By reacting the biomass in the presence of a ruthenium-based catalyst, they are able to remove the lignin from the biomass and produce a black liquid called lignin oil. That product is chemically stable, meaning that the aromatic molecules in it will no longer react with one another.

    So the researchers have now successfully broken the original lignin macromolecule into fragments that contain just one or two aromatic rings each. However, while the isolated fragments don’t chemically react, they still contain oxygen atoms. Therefore, one task remains: finding a way to remove the oxygen atoms.

    In fact, says Román-Leshkov, getting from the molecules in the lignin oil to the targeted aromatic molecules required them to accomplish three things in a single step: They needed to selectively break the carbon-oxygen bonds to free the oxygen atoms; they needed to avoid incorporating noncarbon atoms into the aromatic rings (for example, atoms from the hydrogen gas that must be present for all of the chemical transformations to occur); and they needed to preserve the carbon backbone of the molecule — that is, the series of linked carbon atoms that connect the aromatic rings that remain.

    Ultimately, Román-Leshkov and his team found a special ingredient that would do the trick: a molybdenum carbide catalyst. “It’s actually a really amazing catalyst because it can perform those three actions very well,” says Román-Leshkov. “In addition to that, it’s extremely resistant to poisons. Plants can contain a lot of components like proteins, salts, and sulfur, which often poison catalysts so they don’t work anymore. But molybdenum carbide is very robust and isn’t strongly influenced by such impurities.”

    Trying it out on lignin from poplar trees

    To test their approach in the lab, the researchers first designed and built a specialized “trickle-bed” reactor, a type of chemical reactor in which both liquids and gases flow downward through a packed bed of catalyst particles. They then obtained biomass from a poplar, a type of tree known as an “energy crop” because it grows quickly and doesn’t require a lot of fertilizer.

    To begin, they reacted the poplar biomass in the presence of their ruthenium-based catalyst to extract the lignin and produce the lignin oil. They then flowed the oil through their trickle-bed reactor containing the molybdenum carbide catalyst. The mixture that formed contained some of the targeted product but also a lot of others that still contained oxygen atoms.

    Román-Leshkov notes that in a trickle-bed reactor, the time during which the lignin oil is exposed to the catalyst depends entirely on how quickly it drips down through the packed bed. To increase the exposure time, they tried passing the oil through the same catalyst twice. However, the distribution of products that formed in the second pass wasn’t as they had predicted based on the outcome of the first pass.

    With further investigation, they figured out why. The first time the lignin oil drips through the reactor, it deposits oxygen onto the catalyst. The deposition of oxygen changes the catalyst's behavior such that certain products appear or disappear, with temperature playing a critical role. “The temperature and oxygen content set the condition of the catalyst in the first pass,” says Román-Leshkov. “Then, on the second pass, the oxygen content in the flow is lower, and the catalyst can fully break the remaining carbon-oxygen bonds.” The process can thus operate continuously: Two separate reactors containing independent catalyst beds would be connected in series, with the first pretreating the lignin oil and the second removing any oxygen that remains.

    Based on a series of experiments involving lignin oil from poplar biomass, the researchers determined the operating conditions yielding the best outcome: 350 degrees Celsius in the first step and 375 degrees Celsius in the second. Under those optimized conditions, the mixture that forms is dominated by the targeted aromatic products, with the remainder consisting of small amounts of other jet-fuel-range aliphatic molecules and some remaining oxygen-containing molecules. The catalyst remains stable while producing a mixture that is more than 87 percent aromatic molecules by weight.

    “When we do our chemistry with the molybdenum carbide catalyst, our total carbon yields are nearly 85 percent of the theoretical carbon yield,” says Román-Leshkov. “In most lignin-conversion processes, the carbon yields are very low, on the order of 10 percent. That’s why the catalysis community got very excited about our results — because people had not seen carbon yields as high as the ones we generated with this catalyst.”

    There remains one key question: Does the mixture of components that forms have the properties required for aviation fuel? “When we work with these new substrates to make new fuels, the blend that we create is different from standard jet fuel,” says Román-Leshkov. “Unless it has the exact properties required, it will not qualify for certification as jet fuel.”

    To check their products, Román-Leshkov and his team send samples to Washington State University, where a team operates a combustion lab devoted to testing fuels. Initial results on the composition and properties of the samples have been encouraging. Using the measured composition along with published prescreening tools and procedures, the researchers made initial property predictions for their samples. For example, the freezing point, viscosity, and threshold sooting index are all predicted to be lower than the values for conventional aviation aromatics; in other words, their material should flow more easily, be less likely to freeze, and generate less soot when it burns. Overall, the predicted properties are close to or more favorable than those of conventional fuel aromatics.

    Next steps

    The researchers are continuing to study how their sample blends behave at different temperatures and, in particular, how well they perform that key task: soaking into and swelling the seals inside jet engines. “These molecules are not the typical aromatic molecules that you use in jet fuel,” says Román-Leshkov. “Preliminary tests with sample seals show that there’s no difference in how our lignin-derived aromatics swell the seals, but we need to confirm that. There’s no room for error.”

    He and his team are also working with their NREL collaborators to scale up their methods. NREL has much larger reactors and other infrastructure needed to produce large quantities of the new sustainable blend. Based on the promising results thus far, the team wants to be prepared for the further testing required for the certification of jet fuels. Beyond testing samples of the fuel, the full certification procedure calls for demonstrating its behavior in an operating engine (“not while flying, but in a lab,” clarifies Román-Leshkov). That demonstration requires large fuel samples and is both time-consuming and expensive, which is why it’s the very last step in the strict testing required for a new sustainable aviation fuel to be approved.

    Román-Leshkov and his colleagues are now exploring the use of their approach with other types of biomass, including pine, switchgrass, and corn stover (the leaves, stalks, and cobs left after corn is harvested). But their results with poplar biomass are promising. If further testing confirms that their aromatic products can replace the aromatics now in jet fuel, “the blending wall could disappear,” says Román-Leshkov. “We’ll have a means of producing all the components in aviation fuel from renewable material, potentially leading to aircraft fuel that’s 100 percent sustainable.”

    This research was initially funded by the Center for Bioenergy Innovation, a U.S. Department of Energy (DOE) Research Center supported by the Office of Biological and Environmental Research in the DOE Office of Science. More recent funding came from the DOE Bioenergy Technologies Office and from Eni S.p.A. through the MIT Energy Initiative. Michael L. Stone PhD ’21 is now a postdoc in chemical engineering at Stanford University. Matthew S. Webber is a graduate student in the Román-Leshkov group, now on leave for an internship at the National Renewable Energy Laboratory.

    This article appears in the Spring 2023 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  •

    To improve solar and other clean energy tech, look beyond hardware

    To continue reducing the costs of solar energy and other clean energy technologies, scientists and engineers will likely need to focus, at least in part, on improving technology features that are not based on hardware, according to MIT researchers. They describe this finding and the mechanisms behind it today in Nature Energy.

    While the cost of installing a solar energy system has dropped by more than 99 percent since 1980, this new analysis shows that “soft technology” features, such as the codified permitting practices, supply chain management techniques, and system design processes that go into deploying a solar energy plant, contributed only 10 to 15 percent of total cost declines. Improvements to hardware features were responsible for the lion’s share.

    But because soft technology is increasingly dominating the total costs of installing solar energy systems, this trend threatens to slow future cost savings and hamper the global transition to clean energy, says the study’s senior author, Jessika Trancik, a professor in MIT’s Institute for Data, Systems, and Society (IDSS).

    Trancik’s co-authors include lead author Magdalena M. Klemun, a former IDSS graduate student and postdoc who is now an assistant professor at the Hong Kong University of Science and Technology; Goksin Kavlak, a former IDSS graduate student and postdoc who is now an associate at the Brattle Group; and James McNerney, a former IDSS postdoc and now senior research fellow at the Harvard Kennedy School.

    The team created a quantitative model to analyze the cost evolution of solar energy systems, which captures the contributions of both hardware technology features and soft technology features.

    The framework shows that soft technology hasn’t improved much over time — and that soft technology features contributed even less to overall cost declines than previously estimated.

    Their findings indicate that to reverse this trend and accelerate cost declines, engineers could look at making solar energy systems less reliant on soft technology to begin with, or they could tackle the problem directly by improving inefficient deployment processes.  

    “Really understanding where the efficiencies and inefficiencies are, and how to address those inefficiencies, is critical in supporting the clean energy transition. We are making huge investments of public dollars into this, and soft technology is going to be absolutely essential to making those funds count,” says Trancik.

    “However,” Klemun adds, “we haven’t been thinking about soft technology design as systematically as we have for hardware. That needs to change.”

    The hard truth about soft costs

    Researchers have observed that the so-called “soft costs” of building a solar power plant — the costs of designing and installing the plant — are becoming a much larger share of total costs. In fact, the share of soft costs now typically ranges from 35 to 64 percent.

    “We wanted to take a closer look at where these soft costs were coming from and why they weren’t coming down over time as quickly as the hardware costs,” Trancik says.

    In the past, scientists have modeled the change in solar energy costs by dividing total costs into additive components — hardware components and nonhardware components — and then tracking how these components changed over time.

    “But if you really want to understand where those rates of change are coming from, you need to go one level deeper to look at the technology features. Then things split out differently,” Trancik says.

    The researchers developed a quantitative approach that models the change in solar energy costs over time by assigning contributions to the individual technology features, including both hardware features and soft technology features.

    For instance, their framework would capture how much of the decline in system installation costs — a soft cost — is due to standardized practices of certified installers — a soft technology feature. It would also capture how that same soft cost is affected by increased photovoltaic module efficiency — a hardware technology feature.

    With this approach, the researchers saw that improvements in hardware had the greatest impacts on driving down soft costs in solar energy systems. For example, the efficiency of photovoltaic modules doubled between 1980 and 2017, reducing overall system costs by 17 percent. But about 40 percent of that overall decline could be attributed to reductions in soft costs tied to improved module efficiency.
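The decomposition logic can be illustrated with a toy model (invented numbers and feature names, not the authors' actual data or equations): system cost is a sum of components, and some soft-cost components scale with a hardware feature such as module efficiency, because a more efficient module needs less area, and therefore less installation labor, per watt.

```python
def system_cost(module_eff, module_price_per_m2, install_cost_per_m2,
                permitting_cost):
    """Toy $/W cost model: area-scaled costs shrink as module efficiency
    rises; permitting (a pure soft-technology feature) does not depend
    on any hardware feature. All input prices are illustrative."""
    # m^2 of panel needed per watt, assuming 1 kW/m^2 peak insolation
    area_per_watt = 1.0 / (1000.0 * module_eff)
    hardware = module_price_per_m2 * area_per_watt
    soft = install_cost_per_m2 * area_per_watt + permitting_cost
    return hardware, soft

# Doubling module efficiency (a hardware feature) from 10% to 20%:
hw0, soft0 = system_cost(0.10, 500.0, 300.0, 0.4)
hw1, soft1 = system_cost(0.20, 500.0, 300.0, 0.4)
total_decline = (hw0 + soft0) - (hw1 + soft1)
soft_share = (soft0 - soft1) / total_decline
print(f"{soft_share:.0%} of the decline shows up in soft costs")
```

In this sketch, roughly 38 percent of the total cost decline arrives through the soft-cost component even though the change was purely a hardware improvement, mirroring the structure (though not the data) of the 40 percent figure reported for module efficiency.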

    The framework shows that, while hardware technology features tend to improve many cost components, soft technology features affect only a few.

    “You can see this structural difference even before you collect data on how the technologies have changed over time. That’s why mapping out a technology’s network of cost dependencies is a useful first step to identify levers of change, for solar PV and for other technologies as well,” Klemun notes.  

    Static soft technology

    The researchers used their model to study several countries, since soft costs can vary widely around the world. For instance, solar energy soft costs in Germany are about 50 percent less than those in the U.S.

    Because hardware technology improvements are often shared globally, the analysis showed, costs declined dramatically across locations over the past few decades. Soft technology innovations, by contrast, typically aren’t shared across borders. Moreover, the team found that countries with better soft technology performance 20 years ago still have better performance today, while those with worse performance didn’t see much improvement.

    This country-by-country difference could be driven by regulation and permitting processes, cultural factors, or by market dynamics such as how firms interact with each other, Trancik says.

    “But not all soft technology variables are ones that you would want to change in a cost-reducing direction, like lower wages. So, there are other considerations, beyond just bringing the cost of the technology down, that we need to think about when interpreting these results,” she says.

    Their analysis points to two strategies for reducing soft costs. For one, scientists could focus on developing hardware improvements that make soft costs more dependent on hardware technology variables and less on soft technology variables, such as by creating simpler, more standardized equipment that could reduce on-site installation time.

    Or researchers could directly target soft technology features without changing hardware, perhaps by creating more efficient workflows for system installation or automated permitting platforms.

    “In practice, engineers will often pursue both approaches, but separating the two in a formal model makes it easier to target innovation efforts by leveraging specific relationships between technology characteristics and costs,” Klemun says.

    “Often, when we think about information processing, we are leaving out processes that still happen in a very low-tech way through people communicating with one another. But it is just as important to think about that as a technology as it is to design fancy software,” Trancik notes.

    In the future, she and her collaborators want to apply their quantitative model to study the soft costs related to other technologies, such as electric vehicle charging and nuclear fission. They are also interested in better understanding the limits of soft technology improvement, and in how one could design better soft technology from the outset.

    This research is funded by the U.S. Department of Energy Solar Energy Technologies Office.

  •

    Simple superconducting device could dramatically cut energy use in computing, other applications

    MIT scientists and their colleagues have created a simple superconducting device that could transfer current through electronic devices much more efficiently than is possible today. As a result, the new diode, a kind of switch, could dramatically cut the amount of energy used in high-power computing systems, a problem that is projected to become much worse. Even though it is in the early stages of development, the diode is more than twice as efficient as similar ones reported by others. It could even prove integral to emerging quantum computing technologies.

    The work, which is reported in the July 13 online issue of Physical Review Letters, is also the subject of a news story in Physics Magazine.

    “This paper showcases that the superconducting diode is an entirely solved problem from an engineering perspective,” says Philip Moll, director of the Max Planck Institute for the Structure and Dynamics of Matter in Germany. Moll was not involved in the work. “The beauty of [this] work is that [Moodera and colleagues] obtained record efficiencies without even trying [and] their structures are far from optimized yet.”

    “Our engineering of a superconducting diode effect that is robust and can operate over a wide temperature range in simple systems can potentially open the door for novel technologies,” says Jagadeesh Moodera, leader of the current work and a senior research scientist in MIT’s Department of Physics. Moodera is also affiliated with the Materials Research Laboratory, the Francis Bitter Magnet Laboratory, and the Plasma Science and Fusion Center (PSFC).

    The nanoscopic rectangular diode — about 1,000 times thinner than the diameter of a human hair — is easily scalable. Millions could be produced on a single silicon wafer.

    Toward a superconducting switch

    Diodes, devices that allow current to travel easily in one direction but not in the reverse, are ubiquitous in computing systems. Modern semiconductor computer chips contain billions of diode-like devices known as transistors. However, these devices can get very hot due to electrical resistance, requiring vast amounts of energy to cool the high-power systems in the data centers behind myriad modern technologies, including cloud computing. According to a 2018 news feature in Nature, these systems could use nearly 20 percent of the world’s power in 10 years.

    As a result, work toward creating diodes made of superconductors has been a hot topic in condensed matter physics. That’s because superconductors transmit current with no resistance at all below a certain low temperature (the critical temperature), and are therefore much more efficient than their semiconducting cousins, which have noticeable energy loss in the form of heat.

    Until now, however, other approaches to the problem have involved much more complicated physics. “The effect we found is due [in part] to a ubiquitous property of superconductors that can be realized in a very simple, straightforward manner. It just stares you in the face,” says Moodera.

    Says Moll of the Max Planck Institute, “The work is an important counterpoint to the current fashion to associate superconducting diodes [with] exotic physics, such as finite-momentum pairing states. While in reality, a superconducting diode is a common and widespread phenomenon present in classical materials, as a result of certain broken symmetries.”

    A somewhat serendipitous discovery

    In 2020, Moodera and colleagues observed evidence of exotic particle pairs known as Majorana fermions. These pairs could lead to a new family of topological qubits, the building blocks of quantum computers. While pondering approaches to creating superconducting diodes, the team realized that the material platform they had developed for the Majorana work might also be applied to the diode problem.

    They were right. Using that general platform, they developed different iterations of superconducting diodes, each more efficient than the last. The first, for example, consisted of a nanoscopically thin layer of vanadium, a superconductor, which was patterned into a structure common to electronics (the Hall bar). When they applied a tiny magnetic field comparable to the Earth’s magnetic field, they saw the diode effect — a giant polarity dependence for current flow.

    They then created another diode, this time layering a superconductor with a ferromagnet (in their case, a ferromagnetic insulator), a material that produces its own tiny magnetic field. After applying a small external field to magnetize the ferromagnet, they found an even bigger diode effect, one that remained stable even after the external field was turned off.

    Ubiquitous properties

    The team went on to figure out what was happening.

    In addition to transmitting current with no resistance, superconductors have other, less well-known but equally ubiquitous properties. For example, they don’t like magnetic fields getting inside. When exposed to a small magnetic field, a superconductor produces an internal supercurrent whose own magnetic flux cancels the external field, thereby maintaining the superconducting state. This phenomenon, known as the Meissner screening effect, can be thought of as akin to the immune system releasing antibodies to fight infection by bacteria and other pathogens. It works, however, only up to some limit: superconductors cannot entirely keep out large magnetic fields.
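For readers who want the quantitative version of this screening picture, the standard London-theory result (textbook background physics, not something reported in the paper) is that an applied field decays exponentially over a characteristic penetration depth inside the superconductor:

```latex
% London equation and its one-dimensional solution: an external field
% B_0 applied at the surface decays over the penetration depth \lambda_L.
\nabla^{2}\mathbf{B} = \frac{\mathbf{B}}{\lambda_L^{2}},
\qquad
B(x) = B_0\, e^{-x/\lambda_L},
\qquad
\lambda_L = \sqrt{\frac{m}{\mu_0\, n_s e^{2}}},
```

where $x$ is the depth below the surface, $n_s$ is the density of superconducting electrons, and $\lambda_L$ is typically tens to hundreds of nanometers. Beyond a critical field strength, this screening fails and superconductivity is lost.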

    The diodes the team created make use of this universal Meissner screening effect. The tiny magnetic field they applied — either directly, or through the adjacent ferromagnetic layer — activates the material’s screening current mechanism for expelling the external magnetic field and maintaining superconductivity.

    The team also found that another key factor in optimizing these superconductor diodes is tiny differences between the two sides, or edges, of the diode devices. These differences “create some sort of asymmetry in the way the magnetic field enters the superconductor,” Moodera says.

    By engineering the edges of their diodes to optimize these differences (for example, patterning one edge with sawtooth features while leaving the other unaltered), the team found that they could increase the efficiency from 20 percent to more than 50 percent. This discovery opens the door to devices whose edges could be “tuned” for even higher efficiencies, Moodera says.
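The efficiency figures quoted here are commonly computed, in the superconducting-diode literature, as the normalized difference between the critical currents in the two flow directions. The article does not give the formula, so treat this as an assumption about the metric, with illustrative numbers:

```python
def diode_efficiency(ic_forward: float, ic_reverse: float) -> float:
    """Polarity asymmetry of a superconducting diode.

    ic_forward / ic_reverse: critical currents (in any consistent unit)
    for the two current directions; the sign of ic_reverse is ignored.
    Returns a value in [0, 1]: 0 means no diode effect, 1 an ideal diode.
    """
    ip, im = abs(ic_forward), abs(ic_reverse)
    return (ip - im) / (ip + im)

# Illustrative numbers (not from the paper): a 3:1 asymmetry between
# the two critical currents corresponds to a 50 percent efficiency.
print(diode_efficiency(3.0, -1.0))  # prints 0.5
```

By this measure, raising the efficiency from 20 percent to more than 50 percent means growing the current-carrying asymmetry from a 3:2 ratio to better than 3:1.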

    In sum, the team discovered that the edge asymmetries within superconducting diodes, the ubiquitous Meissner screening effect found in all superconductors, and a third property of superconductors known as vortex pinning all came together to produce the diode effect.

    “It is fascinating to see how inconspicuous yet ubiquitous factors can create a significant effect in observing the diode effect,” says Yasen Hou, first author of the paper and a postdoc at the Francis Bitter Magnet Laboratory and the PSFC. “What’s more exciting is that [this work] provides a straightforward approach with huge potential to further improve the efficiency.”

    Christoph Strunk is a professor at the University of Regensburg in Germany. Says Strunk, who was not involved in the research, “the present work demonstrates that the supercurrent in simple superconducting strips can become nonreciprocal. Moreover, when combined with a ferromagnetic insulator, the diode effect can even be maintained in the absence of an external magnetic field. The rectification direction can be programmed by the remnant magnetization of the magnetic layer, which may have high potential for future applications. The work is important and appealing both from the basic research and from the applications point of view.”

    Teenage contributors

    Moodera noted that the two researchers who created the engineered edges did so while still in high school during a summer at Moodera’s lab. They are Ourania Glezakou-Elbert of Richland, Washington, who will be going to Princeton University this fall, and Amith Varambally of Vestavia Hills, Alabama, who will be entering Caltech.

    Says Varambally, “I didn’t know what to expect when I set foot in Boston last summer, and certainly never expected to [be] a coauthor in a Physical Review Letters paper.

    “Every day was exciting, whether I was reading dozens of papers to better understand the diode phenomena, or operating machinery to fabricate new diodes for study, or engaging in conversations with Ourania, Dr. Hou, and Dr. Moodera about our research.

    “I am profoundly grateful to Dr. Moodera and Dr. Hou for providing me with the opportunity to work on such a fascinating project, and to Ourania for being a great research partner and friend.”

    In addition to Moodera and Hou, corresponding authors of the paper are professors Patrick A. Lee of the MIT Department of Physics and Akashdeep Kamra of Autonomous University of Madrid. Other authors from MIT are Liang Fu and Margarita Davydova of the Department of Physics, and Hang Chi, Alessandro Lodesani, and Yingying Wu, all of the Francis Bitter Magnet Laboratory and the Plasma Science and Fusion Center. Chi is also affiliated with the U.S. Army CCDC Research Laboratory.

    Authors also include Fabrizio Nichele, Markus F. Ritter, and Daniel Z. Haxwell of IBM Research Europe; Stefan Ilić of the Materials Physics Center (CFM-MPC); and F. Sebastian Bergeret of CFM-MPC and the Donostia International Physics Center.

    This work was supported by the Air Force Office of Scientific Research, the Office of Naval Research, the National Science Foundation, and the Army Research Office. Additional funders are the European Research Council, the European Union’s Horizon 2020 Research and Innovation Framework Programme, the Spanish Ministry of Science and Innovation, the A. v. Humboldt Foundation, and the Department of Energy’s Office of Basic Energy Sciences.

  •

    A welcome new pipeline for students invested in clean energy

    Akarsh Aurora aspired “to be around people who are actually making the global energy transition happen,” he says. Sam Packman sought to “align his theoretical and computational interests to a clean energy project” with tangible impacts. Lauryn Kortman says she “really liked the idea of an in-depth research experience focused on an amazing energy source.”

    These three MIT students found what they wanted in the Fusion Undergraduate Scholars (FUSars) program launched by the MIT Plasma Science and Fusion Center (PSFC) to make meaningful fusion energy research accessible to undergraduates. Aurora, Kortman, and Packman are members of a cohort of 10 for the program’s inaugural run, which began spring semester 2023.

    FUSars operates like a high-wattage UROP (MIT’s Undergraduate Research Opportunities Program). The program requires a student commitment of 10 to 12 hours weekly on a research project during the course of an academic year, as well as participation in a for-credit seminar providing professional development, communication, and wellness support. Through this class and with the mentorship of graduate students, postdocs, and research scientist advisors, students craft a publication-ready journal submission summarizing their research. Scholars who complete the entire year and submit a manuscript for review will receive double the ordinary UROP stipend — a payment that can reach $9,000.

    “The opportunity just jumped out at me,” says Packman. “It was an offer I couldn’t refuse,” adds Aurora.

    Building a workforce

    “I kept hearing from students wanting to get into fusion, but they were very frustrated because there just wasn’t a pipeline for them to work at the PSFC,” says Michael Short, Class of ’42 Associate Professor of Nuclear Science and Engineering and associate director of the PSFC. The PSFC bustles with research projects run by scientists and postdocs. But since the PSFC isn’t a university department with educational obligations, it does not have the regular machinery in place to integrate undergraduate researchers.

    This poses a problem not just for students but for the field of fusion energy, which holds the prospect of unlimited, carbon-free electricity. There are promising advances afoot: MIT and one of its partners, Commonwealth Fusion Systems, are developing a prototype for a compact commercial fusion energy reactor. The start of a fusion energy industry will require a steady infusion of skilled talent.

    “We have to think about the workforce needs of fusion in the future and how to train that workforce,” says Rachel Shulman, who runs the FUSars program and co-instructs the FUSars class with Short. “Energy education needs to be thinking right now about what’s coming after solar, and that’s fusion.”

    Short, who earned his bachelor’s, master’s, and doctoral degrees at MIT, was himself a beneficiary of a UROP at the PSFC. As a faculty member, he has become deeply engaged in building transformative research experiences for undergraduates. With FUSars, he hopes to give students a springboard into the field, with an eye to developing a diverse, highly trained, and zealous employee pool for a future fusion industry.

    Taking a deep dive

    Although these are early days for this initial group of FUSars, there is already a shared sense of purpose and enthusiasm. Chosen from 32 applicants in a whirlwind selection process — the program first convened in early February after crafting the experience over Independent Activities Period — the students arrived with detailed research proposals and personal goals.

    Aurora, a first-year majoring in mechanical engineering and artificial intelligence, became fixed on fusion while still in high school. Today he is investigating methods for increasing the availability, known as capacity factor, of fusion reactors. “This is key to the commercialization of fusion energy,” he says.

    Packman, a first-year planning on a math and physics double major, is developing approaches to help simplify the computations involved in designing the complex geometries of solenoid induction heaters in fusion reactors. “This project is more immersive than my last UROP, and requires more time, but I know what I’m doing here and how this fits into the broader goals of fusion science,” he says. “It’s cool that our project is going to lead to a tool that will actually be used.”

    To accommodate the demands of their research projects, Shulman and Short discouraged students from taking on large academic loads.

    Kortman, a junior majoring in materials science and engineering with a concentration in mechanical engineering, was eager to make room in her schedule for her project, which concerns the effects of radiation damage on superconducting magnets. A shorter research experience with the PSFC during the pandemic fired her determination to delve deeper and invest more time in fusion.

    “It is very appealing and motivating to join people who have been working on this problem for decades, just as breakthroughs are coming through,” she says. “What I’m doing feels like it might be directly applicable to the development of an actual fusion reactor.”

    Camaraderie and support

    In the FUSar program, students aim to seize a sizeable stake in a multipronged research enterprise. “Here, if you have any hypotheses, you really get to pursue those because at the end of the day, the paper you write is yours,” says Aurora. “You can take ownership of what sort of discovery you’re making.”

    Enabling students to make the most of their research experiences requires abundant support — and not just for the students. “We have a whole separate set of programming on mentoring the mentors, where we go over topics with postdocs like how to teach someone to write a research paper, rather than write it for them, and how to help a student through difficulties,” Shulman says.

    The weekly student seminar, taught primarily by Short and Shulman, covers pragmatic matters essential to becoming a successful researcher — topics not always addressed directly or in the kind of detail that makes a difference. Topics include how to collaborate with lab mates, deal with a supervisor, find material in the MIT libraries, produce effective and persuasive research abstracts, and take time for self-care.

    Kortman believes camaraderie will help the cohort through an intense year. “This is a tight-knit community that will be great for keeping us all motivated when we run into research issues,” she says. “Meeting weekly to see what other students are able to accomplish will encourage me in my own project.”

    The seminar offerings have already attracted five additional participants outside the FUSars cohort. Adria Peterkin, a second-year graduate student in nuclear science and engineering, is sitting in to solidify her skills in scientific writing.

    “I wanted a structured class to help me get good at abstracts and communicating with different audiences,” says Peterkin, who is investigating radiation’s impact on the molten salt used in fusion and advanced nuclear reactors. “There’s a lot of assumed knowledge coming in as a PhD student, and a program like FUSars is really useful to help level out that playing field, regardless of your background.”

    Fusion research for all

    Short would like FUSars to cast a wide net, capturing the interest of MIT undergraduates no matter their backgrounds or financial means. One way he hopes to achieve this end is with the support of private donors, who make possible premium stipends for fusion scholars.

    “Many of our students are economically disadvantaged, on financial aid or supporting family back home, and need work that pays more than $15 an hour,” he says. This generous stipend may be critical, he says, to “flipping students from something else to fusion.”

    Although this first FUSars class is composed of science and engineering students, Short envisions a cohort eventually drawn from the broad spectrum of MIT disciplines. “Fusion is not a nuclear-focused discipline anymore — it’s no longer just plasma physics and radiation,” he says. “We’re trying to make a power plant now, and it’s an all hands-on-deck kind of thing, involving policy and economics and other subjects.”

    Although many are just getting started on their academic journeys, FUSar students believe this year will give them a strong push toward potential energy careers. “Fusion is the future of the energy transition and how we’re going to defeat climate change,” says Aurora. “I joined the program for a deep dive into the field, to help me decide whether I should invest the rest of my life in it.”