More stories

  • Supporting sustainability, digital health, and the future of work

    The MIT and Accenture Convergence Initiative for Industry and Technology has selected three new research projects that will receive support from the initiative. The research projects aim to accelerate progress in meeting complex societal needs through new business convergence insights in technology and innovation.

    Established in MIT’s School of Engineering and now in its third year, the MIT and Accenture Convergence Initiative is furthering its mission to bring together technological experts from across business and academia to share insights and learn from one another. Recently, Thomas W. Malone, the Patrick J. McGovern (1959) Professor of Management, joined the initiative as its first-ever faculty lead. The research projects relate to three of the initiative’s key focus areas: sustainability, digital health, and the future of work.

    “The solutions these research teams are developing have the potential to have tremendous impact,” says Anantha Chandrakasan, dean of the School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science. “They embody the initiative’s focus on advancing data-driven research that addresses technology and industry convergence.”

    “The convergence of science and technology driven by advancements in generative AI, digital twins, quantum computing, and other technologies makes this an especially exciting time for Accenture and MIT to be undertaking this joint research,” says Kenneth Munie, senior managing director at Accenture Strategy, Life Sciences. “Our three new research projects focusing on sustainability, digital health, and the future of work have the potential to help guide and shape future innovations that will benefit the way we work and live.”

    The new research projects and the researchers leading them are described below.

    Accelerating the journey to net zero with industrial clusters

    Jessika Trancik is a professor at the Institute for Data, Systems, and Society (IDSS). Trancik’s research examines the dynamic costs, performance, and environmental impacts of energy systems to inform climate policy and accelerate beneficial and equitable technology innovation. Trancik’s project aims to identify how industrial clusters can enable companies to derive greater value from decarbonization, potentially making companies more willing to invest in the clean energy transition.

    To meet the ambitious climate goals that have been set by countries around the world, rising greenhouse gas emissions trends must be rapidly reversed. Industrial clusters — geographically co-located or otherwise-aligned groups of companies representing one or more industries — account for a significant portion of greenhouse gas emissions globally. With major energy consumers “clustered” in proximity, industrial clusters provide a potential platform to scale low-carbon solutions by enabling the aggregation of demand and the coordinated investment in physical energy supply infrastructure.

    In addition to Trancik, the research team working on this project will include Aliza Khurram, a postdoc in IDSS; Micah Ziegler, an IDSS research scientist; Melissa Stark, global energy transition services lead at Accenture; Laura Sanderfer, strategy consulting manager at Accenture; and Maria De Miguel, strategy senior analyst at Accenture.

    Eliminating childhood obesity

    Anette “Peko” Hosoi is the Neil and Jane Pappalardo Professor of Mechanical Engineering. A common theme in her work is the fundamental study of shape, kinematic, and rheological optimization of biological systems with applications to the emergent field of soft robotics. Her project will use both data from existing studies and synthetic data to create a return-on-investment (ROI) calculator for childhood obesity interventions so that companies can identify earlier returns on their investment beyond reduced health-care costs.

    Childhood obesity is too prevalent to be solved by a single company, industry, drug, application, or program. In addition to the physical and emotional impact on children, society bears a cost through excess health care spending, lost workforce productivity, poor school performance, and increased family trauma. Meaningful solutions require multiple organizations, representing different parts of society, working together with a common understanding of the problem, the economic benefits, and the return on investment. ROI is particularly difficult to defend for any single organization because investment and return can be separated by many years and involve asymmetric investments, returns, and allocation of risk. Hosoi’s project will consider the incentives for a particular entity to invest in programs in order to reduce childhood obesity.
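    The timing mismatch described above can be made concrete with a simple discounted-cash-flow calculation. The sketch below is purely illustrative — the dollar figures, time horizons, and discount rates are hypothetical and are not taken from Hosoi’s calculator.

    ```python
    # Illustrative only: why ROI is hard to defend for a single organization
    # when investment and return are separated by many years. All figures and
    # discount rates are hypothetical, not taken from Hosoi's ROI calculator.

    def npv(cashflows, rate):
        """Net present value of a list of (year, amount) cash flows."""
        return sum(amount / (1 + rate) ** year for year, amount in cashflows)

    # An organization invests $1M per year for 5 years in an intervention;
    # savings of $1.5M per year only begin appearing 10 years later.
    investment = [(year, -1.0e6) for year in range(5)]
    returns = [(year, 1.5e6) for year in range(10, 20)]

    for rate in (0.03, 0.12):
        value = npv(investment + returns, rate)
        print(f"discount rate {rate:.0%}: NPV = ${value / 1e6:+.2f}M")

    # At a 3 percent discount rate the program looks clearly worthwhile; at a
    # 12 percent corporate hurdle rate the same program shows a negative NPV
    # for the single investor, even though the societal benefits are unchanged.
    ```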

    Hosoi will be joined by graduate students Pragya Neupane and Rachael Kha, both of IDSS, as well as a team from Accenture that includes Kenneth Munie, senior managing director at Accenture Strategy, Life Sciences; Kaveh Safavi, senior managing director in Accenture Health Industry; and Elizabeth Naik, global health and public service research lead.

    Generating innovative organizational configurations and algorithms for dealing with the problem of post-pandemic employment

    Thomas Malone is the Patrick J. McGovern (1959) Professor of Management at the MIT Sloan School of Management and the founding director of the MIT Center for Collective Intelligence. His research focuses on how new organizations can be designed to take advantage of the possibilities provided by information technology. Malone will be joined in this project by John Horton, the Richard S. Leghorn (1939) Career Development Professor at the MIT Sloan School of Management, whose research focuses on the intersection of labor economics, market design, and information systems. Malone and Horton’s project will look to reshape the future of work with the help of lessons learned in the wake of the pandemic.

    The Covid-19 pandemic has been a major disrupter of work and employment, and it is not at all obvious how governments, businesses, and other organizations should manage the transition to a desirable state of employment as the pandemic recedes. Using natural language processing models such as GPT-4, this project will look to identify new ways that companies can use AI to better match applicants to necessary jobs, create new types of jobs, assess needed skill training, and identify interventions to help include women and other groups whose employment was disproportionately affected by the pandemic.

    In addition to Malone and Horton, the research team will include Rob Laubacher, associate director and research scientist at the MIT Center for Collective Intelligence, and Kathleen Kennedy, executive director at the MIT Center for Collective Intelligence and senior director at MIT Horizon. The team will also include Nitu Nivedita, managing director of artificial intelligence at Accenture, and Thomas Hancock, data science senior manager at Accenture.

  • Making aviation fuel from biomass

    In 2021, nearly a quarter of the world’s carbon dioxide emissions came from the transportation sector, with aviation a significant contributor. While the growing use of electric vehicles is helping to clean up ground transportation, today’s batteries can’t compete with fossil fuel-derived liquid hydrocarbons in terms of energy delivered per pound of weight — a major concern when it comes to flying. Meanwhile, given projected growth in travel demand, consumption of jet fuel is expected to double between now and 2050 — the year by which the international aviation industry has pledged to be carbon neutral.

    Many groups have targeted a 100 percent sustainable hydrocarbon fuel for aircraft, but without much success. Part of the challenge is that aviation fuels are so tightly regulated. “This is a subclass of fuels that has very specific requirements in terms of the chemistry and the physical properties of the fuel, because you can’t risk something going wrong in an airplane engine,” says Yuriy Román-Leshkov, the Robert T. Haslam Professor of Chemical Engineering. “If you’re flying at 30,000 feet, it’s very cold outside, and you don’t want the fuel to thicken or freeze. That’s why the formulation is very specific.”

    Aviation fuel is a combination of two large classes of chemical compounds. Some 75 to 90 percent of it is made up of “aliphatic” molecules, which consist of long chains of carbon atoms linked together. “This is similar to what we would find in diesel fuels, so it’s a classic hydrocarbon that is out there,” explains Román-Leshkov. The remaining 10 to 25 percent consists of “aromatic” molecules, each of which includes at least one ring made up of six connected carbon atoms.

    In most transportation fuels, aromatic hydrocarbons are viewed as a source of pollution, so they’re removed as much as possible. However, in aviation fuels, some aromatic molecules must remain because they set the necessary physical and combustion properties of the overall mixture. They also perform one more critical task: They ensure that seals between various components in the aircraft’s fuel system are tight. “The aromatics get absorbed by the plastic seals and make them swell,” explains Román-Leshkov. “If for some reason the fuel changes, so can the seals, and that’s very dangerous.”

    As a result, aromatics are a necessary component — but they’re also a stumbling block in the move to create sustainable aviation fuels, or SAFs. Companies know how to make the aliphatic fraction from inedible parts of plants and other renewables, but they haven’t yet developed an approved method of generating the aromatic fraction from sustainable sources. As a result, there’s a “blending wall,” explains Román-Leshkov. “Since we need that aromatic content — regardless of its source — there will always be a limit on how much of the sustainable aliphatic hydrocarbons we can use without changing the properties of the mixture.” He notes a similar blending wall with gasoline. “We have a lot of ethanol, but we can’t add more than 10 percent without changing the properties of the gasoline. In fact, current engines can’t handle even 15 percent ethanol without modification.”
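    A rough way to see the blending wall is to treat jet fuel as a two-part mixture and ask how much purely aliphatic renewable fuel can be blended in before the aromatic content drops below specification. The sketch below only illustrates that arithmetic using the composition range quoted above; the 18 percent aromatic content assumed for conventional jet fuel is a hypothetical value within that range, and this is in no way a certification calculation.

    ```python
    # Illustrative blending-wall arithmetic. Jet fuel must keep roughly
    # 10-25 percent aromatics (per the article); assume a 10 percent floor.
    # The renewable blendstock is taken to be purely aliphatic (0% aromatics).

    MIN_AROMATICS = 0.10  # assumed minimum aromatic fraction in the final blend

    def max_renewable_aliphatic(aromatics_in_other_blendstock):
        """Largest fraction x of aliphatic renewable fuel such that
        (1 - x) * aromatics_in_other_blendstock >= MIN_AROMATICS."""
        return 1.0 - MIN_AROMATICS / aromatics_in_other_blendstock

    # Case 1: the aromatics can only come from conventional jet fuel, itself
    # assumed to contain 18 percent aromatics -> renewables top out near 44%.
    print(max_renewable_aliphatic(0.18))

    # Case 2: a renewable aromatic blendstock (e.g., lignin-derived) supplies
    # the aromatics -> 90% aliphatic + 10% aromatic, all of it renewable,
    # which is why the "blending wall could disappear."
    print(max_renewable_aliphatic(1.00))
    ```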

    No shortage of renewable source material — or attempts to convert it

    For the past five years, understanding and solving the SAF problem has been the goal of research by Román-Leshkov and his MIT team — Michael L. Stone PhD ’21, Matthew S. Webber, and others — as well as their collaborators at Washington State University, the National Renewable Energy Laboratory (NREL), and the Pacific Northwest National Laboratory. Their work has focused on lignin, a tough material that gives plants structural support and protection against microbes and fungi. About 30 percent of the carbon in biomass is in lignin, yet when ethanol is generated from biomass, the lignin is left behind as a waste product.

    Despite valiant efforts, no one has found an economically viable, scalable way to turn lignin into useful products, including the aromatic molecules needed to make jet fuel 100 percent sustainable. Why not? As Román-Leshkov says, “It’s because of its chemical recalcitrance”: lignin is difficult to make react chemically in useful ways. As a result, every year millions of tons of waste lignin are burned as a low-grade fuel, used as fertilizer, or simply thrown away.

    Understanding the problem requires understanding what’s happening at the atomic level. A single lignin molecule — the starting point of the challenge — is a big “macromolecule” made up of a network of many aromatic rings connected by oxygen and hydrogen atoms. Put simply, the key to converting lignin into the aromatic fraction of SAF is to break that macromolecule into smaller pieces while in the process getting rid of all of the oxygen atoms.

    In general, most industrial processes begin with a chemical reaction that prevents the subsequent upgrading of lignin: As the lignin is extracted from the biomass, the aromatic molecules in it react with one another, linking together to form strong networks that won’t react further. As a result, the lignin is no longer useful for making aviation fuels.

    To avoid that outcome, Román-Leshkov and his team utilize another approach: They use a catalyst to induce a chemical reaction that wouldn’t normally occur during extraction. By reacting the biomass in the presence of a ruthenium-based catalyst, they are able to remove the lignin from the biomass and produce a black liquid called lignin oil. That product is chemically stable, meaning that the aromatic molecules in it will no longer react with one another.

    So the researchers have now successfully broken the original lignin macromolecule into fragments that contain just one or two aromatic rings each. However, while the isolated fragments don’t chemically react, they still contain oxygen atoms. Therefore, one task remains: finding a way to remove the oxygen atoms.

    In fact, says Román-Leshkov, getting from the molecules in the lignin oil to the targeted aromatic molecules required them to accomplish three things in a single step: They needed to selectively break the carbon-oxygen bonds to free the oxygen atoms; they needed to avoid incorporating noncarbon atoms into the aromatic rings (for example, atoms from the hydrogen gas that must be present for all of the chemical transformations to occur); and they needed to preserve the carbon backbone of the molecule — that is, the series of linked carbon atoms that connect the aromatic rings that remain.

    Ultimately, Román-Leshkov and his team found a special ingredient that would do the trick: a molybdenum carbide catalyst. “It’s actually a really amazing catalyst because it can perform those three actions very well,” says Román-Leshkov. “In addition to that, it’s extremely resistant to poisons. Plants can contain a lot of components like proteins, salts, and sulfur, which often poison catalysts so they don’t work anymore. But molybdenum carbide is very robust and isn’t strongly influenced by such impurities.”

    Trying it out on lignin from poplar trees

    To test their approach in the lab, the researchers first designed and built a specialized “trickle-bed” reactor, a type of chemical reactor in which both liquids and gases flow downward through a packed bed of catalyst particles. They then obtained biomass from a poplar, a type of tree known as an “energy crop” because it grows quickly and doesn’t require a lot of fertilizer.

    To begin, they reacted the poplar biomass in the presence of their ruthenium-based catalyst to extract the lignin and produce the lignin oil. They then flowed the oil through their trickle-bed reactor containing the molybdenum carbide catalyst. The mixture that formed contained some of the targeted product but also a lot of others that still contained oxygen atoms.

    Román-Leshkov notes that in a trickle-bed reactor, the time during which the lignin oil is exposed to the catalyst depends entirely on how quickly it drips down through the packed bed. To increase the exposure time, they tried passing the oil through the same catalyst twice. However, the distribution of products that formed in the second pass wasn’t as they had predicted based on the outcome of the first pass.

    With further investigation, they figured out why. The first time the lignin oil drips through the reactor, it deposits oxygen onto the catalyst. The deposition of the oxygen changes the behavior of the catalyst such that certain products appear or disappear — with the temperature being critical. “The temperature and oxygen content set the condition of the catalyst in the first pass,” says Román-Leshkov. “Then, on the second pass, the oxygen content in the flow is lower, and the catalyst can fully break the remaining carbon-oxygen bonds.” The process can thus operate continuously: Two separate reactors containing independent catalyst beds would be connected in series, with the first pretreating the lignin oil and the second removing any oxygen that remains.

    Based on a series of experiments involving lignin oil from poplar biomass, the researchers determined the operating conditions yielding the best outcome: 350 degrees Celsius in the first step and 375 degrees Celsius in the second. Under those optimized conditions, the mixture that forms is dominated by the targeted aromatic products, with the remainder consisting of small amounts of other jet-fuel aliphatic molecules and some remaining oxygen-containing molecules. The catalyst remains stable while generating a product stream that is more than 87 percent aromatic molecules by weight.

    “When we do our chemistry with the molybdenum carbide catalyst, our total carbon yields are nearly 85 percent of the theoretical carbon yield,” says Román-Leshkov. “In most lignin-conversion processes, the carbon yields are very low, on the order of 10 percent. That’s why the catalysis community got very excited about our results — because people had not seen carbon yields as high as the ones we generated with this catalyst.”
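    For context, “carbon yield” is typically computed as the share of the carbon entering the process that ends up in the desired products; the exact accounting in the team’s paper may differ in detail, but the general form is:

    $$\text{carbon yield} \;=\; \frac{\text{mol C in recovered aromatic and aliphatic products}}{\text{mol C in the lignin fed to the process}} \times 100\%$$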

    There remains one key question: Does the mixture of components that forms have the properties required for aviation fuel? “When we work with these new substrates to make new fuels, the blend that we create is different from standard jet fuel,” says Román-Leshkov. “Unless it has the exact properties required, it will not qualify for certification as jet fuel.”

    To check their products, Román-Leshkov and his team send samples to Washington State University, where a team operates a combustion lab devoted to testing fuels. Initial testing of the composition and properties of the samples has been encouraging. Using the measured compositions along with published prescreening tools and procedures, the researchers have made initial property predictions for their samples, and the results look promising. For example, the freezing point, viscosity, and threshold sooting index are all predicted to be lower than the values for conventional aviation aromatics. (In other words, their material should flow more easily and be less likely to freeze than conventional aromatics, while also generating less soot when it burns.) Overall, the predicted properties are near to, or more favorable than, those of conventional fuel aromatics.

    Next steps

    The researchers are continuing to study how their sample blends behave at different temperatures and, in particular, how well they perform that key task: soaking into and swelling the seals inside jet engines. “These molecules are not the typical aromatic molecules that you use in jet fuel,” says Román-Leshkov. “Preliminary tests with sample seals show that there’s no difference in how our lignin-derived aromatics swell the seals, but we need to confirm that. There’s no room for error.”

    In addition, he and his team are working with their NREL collaborators to scale up their methods. NREL has much larger reactors and other infrastructure needed to produce large quantities of the new sustainable blend. Based on the promising results thus far, the team wants to be prepared for the further testing required for the certification of jet fuels. In addition to testing samples of the fuel, the full certification procedure calls for demonstrating its behavior in an operating engine — “not while flying, but in a lab,” clarifies Román-Leshkov. Beyond requiring large samples, that demonstration is both time-consuming and expensive — which is why it’s the very last step in the strict testing required for a new sustainable aviation fuel to be approved.

    Román-Leshkov and his colleagues are now exploring the use of their approach with other types of biomass, including pine, switchgrass, and corn stover (the leaves, stalks, and cobs left after corn is harvested). But their results with poplar biomass are promising. If further testing confirms that their aromatic products can replace the aromatics now in jet fuel, “the blending wall could disappear,” says Román-Leshkov. “We’ll have a means of producing all the components in aviation fuel from renewable material, potentially leading to aircraft fuel that’s 100 percent sustainable.”

    This research was initially funded by the Center for Bioenergy Innovation, a U.S. Department of Energy (DOE) Research Center supported by the Office of Biological and Environmental Research in the DOE Office of Science. More recent funding came from the DOE Bioenergy Technologies Office and from Eni S.p.A. through the MIT Energy Initiative. Michael L. Stone PhD ’21 is now a postdoc in chemical engineering at Stanford University. Matthew S. Webber is a graduate student in the Román-Leshkov group, now on leave for an internship at the National Renewable Energy Laboratory.

    This article appears in the Spring 2023 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • To improve solar and other clean energy tech, look beyond hardware

    To continue reducing the costs of solar energy and other clean energy technologies, scientists and engineers will likely need to focus, at least in part, on improving technology features that are not based on hardware, according to MIT researchers. They describe this finding and the mechanisms behind it today in Nature Energy.

    While the cost of installing a solar energy system has dropped by more than 99 percent since 1980, this new analysis shows that “soft technology” features, such as the codified permitting practices, supply chain management techniques, and system design processes that go into deploying a solar energy plant, contributed only 10 to 15 percent of total cost declines. Improvements to hardware features were responsible for the lion’s share.

    But because soft technology is increasingly dominating the total costs of installing solar energy systems, this trend threatens to slow future cost savings and hamper the global transition to clean energy, says the study’s senior author, Jessika Trancik, a professor in MIT’s Institute for Data, Systems, and Society (IDSS).

    Trancik’s co-authors include lead author Magdalena M. Klemun, a former IDSS graduate student and postdoc who is now an assistant professor at the Hong Kong University of Science and Technology; Goksin Kavlak, a former IDSS graduate student and postdoc who is now an associate at the Brattle Group; and James McNerney, a former IDSS postdoc and now senior research fellow at the Harvard Kennedy School.

    The team created a quantitative model to analyze the cost evolution of solar energy systems, which captures the contributions of both hardware technology features and soft technology features.

    The framework shows that soft technology hasn’t improved much over time — and that soft technology features contributed even less to overall cost declines than previously estimated.

    Their findings indicate that to reverse this trend and accelerate cost declines, engineers could look at making solar energy systems less reliant on soft technology to begin with, or they could tackle the problem directly by improving inefficient deployment processes.  

    “Really understanding where the efficiencies and inefficiencies are, and how to address those inefficiencies, is critical in supporting the clean energy transition. We are making huge investments of public dollars into this, and soft technology is going to be absolutely essential to making those funds count,” says Trancik.

    “However,” Klemun adds, “we haven’t been thinking about soft technology design as systematically as we have for hardware. That needs to change.”

    The hard truth about soft costs

    Researchers have observed that the so-called “soft costs” of building a solar power plant — the costs of designing and installing the plant — are becoming a much larger share of total costs. In fact, the share of soft costs now typically ranges from 35 to 64 percent.

    “We wanted to take a closer look at where these soft costs were coming from and why they weren’t coming down over time as quickly as the hardware costs,” Trancik says.

    In the past, scientists have modeled the change in solar energy costs by dividing total costs into additive components — hardware components and nonhardware components — and then tracking how these components changed over time.

    “But if you really want to understand where those rates of change are coming from, you need to go one level deeper to look at the technology features. Then things split out differently,” Trancik says.

    The researchers developed a quantitative approach that models the change in solar energy costs over time by assigning contributions to the individual technology features, including both hardware features and soft technology features.

    For instance, their framework would capture how much of the decline in system installation costs — a soft cost — is due to standardized practices of certified installers — a soft technology feature. It would also capture how that same soft cost is affected by increased photovoltaic module efficiency — a hardware technology feature.
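    A toy version of this kind of attribution can be written down directly. In the sketch below the cost equation, feature values, and dollar figures are all invented for illustration — the model in the Nature Energy paper is far more detailed — but it shows how a hardware feature (module efficiency) and a soft technology feature (installation labor per kilowatt) can both leave fingerprints on the same soft cost component.

    ```python
    # Toy attribution of a change in one soft cost component (installation
    # cost) to a hardware feature and a soft technology feature. The cost
    # equation and all numbers are invented; this is not the paper's model.

    def installation_cost(module_efficiency, labor_hours_per_kw, wage=40.0):
        # Higher module efficiency means less area (and less labor) per kW,
        # so a hardware feature shows up inside a soft cost component.
        return labor_hours_per_kw * wage / module_efficiency

    features_1980 = {"module_efficiency": 0.09, "labor_hours_per_kw": 30.0}
    features_2017 = {"module_efficiency": 0.18, "labor_hours_per_kw": 20.0}

    total_change = installation_cost(**features_2017) - installation_cost(**features_1980)

    # First-order attribution: change one feature at a time to its 2017 value
    # while holding the others at 1980 levels. (Interaction terms are ignored
    # here; the published framework treats attribution more carefully.)
    for name in features_1980:
        varied = dict(features_1980, **{name: features_2017[name]})
        contribution = installation_cost(**varied) - installation_cost(**features_1980)
        print(f"{name}: {contribution:+.0f} of a {total_change:+.0f} $/kW change")
    ```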

    With this approach, the researchers saw that improvements in hardware had the greatest impacts on driving down soft costs in solar energy systems. For example, the efficiency of photovoltaic modules doubled between 1980 and 2017, reducing overall system costs by 17 percent. But about 40 percent of that overall decline could be attributed to reductions in soft costs tied to improved module efficiency.

    The framework shows that, while hardware technology features tend to improve many cost components, soft technology features affect only a few.

    “You can see this structural difference even before you collect data on how the technologies have changed over time. That’s why mapping out a technology’s network of cost dependencies is a useful first step to identify levers of change, for solar PV and for other technologies as well,” Klemun notes.  

    Static soft technology

    The researchers used their model to study several countries, since soft costs can vary widely around the world. For instance, solar energy soft costs in Germany are about 50 percent less than those in the U.S.

    The fact that hardware technology improvements are often shared globally led to dramatic declines in costs over the past few decades across locations, the analysis showed. Soft technology innovations typically aren’t shared across borders. Moreover, the team found that countries with better soft technology performance 20 years ago still have better performance today, while those with worse performance didn’t see much improvement.

    This country-by-country difference could be driven by regulation and permitting processes, cultural factors, or by market dynamics such as how firms interact with each other, Trancik says.

    “But not all soft technology variables are ones that you would want to change in a cost-reducing direction, like lower wages. So, there are other considerations, beyond just bringing the cost of the technology down, that we need to think about when interpreting these results,” she says.

    Their analysis points to two strategies for reducing soft costs. For one, scientists could focus on developing hardware improvements that make soft costs more dependent on hardware technology variables and less on soft technology variables, such as by creating simpler, more standardized equipment that could reduce on-site installation time.

    Or researchers could directly target soft technology features without changing hardware, perhaps by creating more efficient workflows for system installation or automated permitting platforms.

    “In practice, engineers will often pursue both approaches, but separating the two in a formal model makes it easier to target innovation efforts by leveraging specific relationships between technology characteristics and costs,” Klemun says.

    “Often, when we think about information processing, we are leaving out processes that still happen in a very low-tech way through people communicating with one another. But it is just as important to think about that as a technology as it is to design fancy software,” Trancik notes.

    In the future, she and her collaborators want to apply their quantitative model to study the soft costs related to other technologies, such as electric vehicle charging and nuclear fission. They are also interested in better understanding the limits of soft technology improvement, and how one could design better soft technology from the outset.

    This research is funded by the U.S. Department of Energy Solar Energy Technologies Office.

  • Simple superconducting device could dramatically cut energy use in computing, other applications

    MIT scientists and their colleagues have created a simple superconducting device that could transfer current through electronic devices much more efficiently than is possible today. As a result, the new diode, a kind of switch, could dramatically cut the amount of energy used in high-power computing systems, a major problem that is estimated to become much worse. Even though it is in the early stages of development, the diode is more than twice as efficient as similar ones reported by others. It could even be integral to emerging quantum computing technologies.

    The work, which is reported in the July 13 online issue of Physical Review Letters, is also the subject of a news story in Physics Magazine.

    “This paper showcases that the superconducting diode is an entirely solved problem from an engineering perspective,” says Philip Moll, director of the Max Planck Institute for the Structure and Dynamics of Matter in Germany. Moll was not involved in the work. “The beauty of [this] work is that [Moodera and colleagues] obtained record efficiencies without even trying [and] their structures are far from optimized yet.”

    “Our engineering of a superconducting diode effect that is robust and can operate over a wide temperature range in simple systems can potentially open the door for novel technologies,” says Jagadeesh Moodera, leader of the current work and a senior research scientist in MIT’s Department of Physics. Moodera is also affiliated with the Materials Research Laboratory, the Francis Bitter Magnet Laboratory, and the Plasma Science and Fusion Center (PSFC).

    The nanoscopic rectangular diode — about 1,000 times thinner than the diameter of a human hair — is easily scalable. Millions could be produced on a single silicon wafer.

    Toward a superconducting switch

    Diodes, devices that allow current to travel easily in one direction but not in the reverse, are ubiquitous in computing systems. Modern semiconductor computer chips contain billions of diode-like devices known as transistors. However, these devices can get very hot due to electrical resistance, requiring vast amounts of energy to cool the high-power systems in the data centers behind myriad modern technologies, including cloud computing. According to a 2018 news feature in Nature, these systems could use nearly 20 percent of the world’s power in 10 years.

    As a result, work toward creating diodes made of superconductors has been a hot topic in condensed matter physics. That’s because superconductors transmit current with no resistance at all below a certain low temperature (the critical temperature), and are therefore much more efficient than their semiconducting cousins, which have noticeable energy loss in the form of heat.

    Until now, however, other approaches to the problem have involved much more complicated physics. “The effect we found is due [in part] to a ubiquitous property of superconductors that can be realized in a very simple, straightforward manner. It just stares you in the face,” says Moodera.

    Says Moll of the Max Planck Institute, “The work is an important counterpoint to the current fashion to associate superconducting diodes [with] exotic physics, such as finite-momentum pairing states. While in reality, a superconducting diode is a common and widespread phenomenon present in classical materials, as a result of certain broken symmetries.”

    A somewhat serendipitous discovery

    In 2020 Moodera and colleagues observed evidence of an exotic particle pair known as Majorana fermions. These particle pairs could lead to a new family of topological qubits, the building blocks of quantum computers. While pondering approaches to creating superconducting diodes, the team realized that the material platform they developed for the Majorana work might also be applied to the diode problem.

    They were right. Using that general platform, they developed different iterations of superconducting diodes, each more efficient than the last. The first, for example, consisted of a nanoscopically thin layer of vanadium, a superconductor, which was patterned into a structure common to electronics (the Hall bar). When they applied a tiny magnetic field comparable to the Earth’s magnetic field, they saw the diode effect — a giant polarity dependence for current flow.

    They then created another diode, this time layering a superconductor with a ferromagnet (a ferromagnetic insulator in their case), a material that produces its own tiny magnetic field. After applying a tiny magnetic field to magnetize the ferromagnet so that it produces its own field, they found an even bigger diode effect that was stable even after the original magnetic field was turned off.

    Ubiquitous properties

    The team went on to figure out what was happening.

    In addition to transmitting current with no resistance, superconductors also have other, less well-known but just as ubiquitous properties. For example, they don’t like magnetic fields getting inside. When exposed to a tiny magnetic field, superconductors produce an internal supercurrent that induces its own magnetic flux that cancels the external field, thereby maintaining their superconducting state. This phenomenon, known as the Meissner screening effect, can be thought of as akin to our bodies’ immune system releasing antibodies to fight the infection of bacteria and other pathogens. This works, however, only up to some limit. Similarly, superconductors cannot entirely keep out large magnetic fields.

    The diodes the team created make use of this universal Meissner screening effect. The tiny magnetic field they applied — either directly, or through the adjacent ferromagnetic layer — activates the material’s screening current mechanism for expelling the external magnetic field and maintaining superconductivity.

    The team also found that another key factor in optimizing these superconductor diodes is tiny differences between the two sides, or edges, of the diode devices. These differences “create some sort of asymmetry in the way the magnetic field enters the superconductor,” Moodera says.

    By engineering their own form of edges on the diodes to optimize these differences — for example, giving one edge sawtooth features while leaving the other edge unaltered — the team found that they could increase the efficiency from 20 percent to more than 50 percent. This discovery opens the door for devices whose edges could be “tuned” for even higher efficiencies, Moodera says.
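    The efficiency figures quoted above are commonly defined from the asymmetry between the maximum dissipationless (critical) currents in the two flow directions. The short sketch below uses that standard definition from the superconducting-diode literature; the paper’s own convention may differ in detail, and the example currents are made up.

    ```python
    # Diode efficiency as commonly defined in the superconducting-diode
    # literature: the normalized asymmetry of the forward and reverse
    # critical currents. Example values are made up.

    def diode_efficiency(ic_forward, ic_reverse):
        ic_forward, ic_reverse = abs(ic_forward), abs(ic_reverse)
        return (ic_forward - ic_reverse) / (ic_forward + ic_reverse)

    print(diode_efficiency(1.2e-3, 0.8e-3))  # 0.20 -> a 20 percent-efficient diode
    print(diode_efficiency(3.0e-3, 1.0e-3))  # 0.50 -> a 50 percent-efficient diode
    ```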

    In sum, the team discovered that the edge asymmetries within superconducting diodes, the ubiquitous Meissner screening effect found in all superconductors, and a third property of superconductors known as vortex pinning all came together to produce the diode effect.

    “It is fascinating to see how inconspicuous yet ubiquitous factors can create a significant effect in observing the diode effect,” says Yasen Hou, first author of the paper and a postdoc at the Francis Bitter Magnet Laboratory and the PSFC. “What’s more exciting is that [this work] provides a straightforward approach with huge potential to further improve the efficiency.”

    Christoph Strunk is a professor at the University of Regensburg in Germany. Says Strunk, who was not involved in the research, “the present work demonstrates that the supercurrent in simple superconducting strips can become nonreciprocal. Moreover, when combined with a ferromagnetic insulator, the diode effect can even be maintained in the absence of an external magnetic field. The rectification direction can be programmed by the remnant magnetization of the magnetic layer, which may have high potential for future applications. The work is important and appealing both from the basic research and from the applications point of view.”

    Teenage contributors

    Moodera noted that the two researchers who created the engineered edges did so while still in high school during a summer at Moodera’s lab. They are Ourania Glezakou-Elbert of Richland, Washington, who will be going to Princeton University this fall, and Amith Varambally of Vestavia Hills, Alabama, who will be entering Caltech.

    Says Varambally, “I didn’t know what to expect when I set foot in Boston last summer, and certainly never expected to [be] a coauthor in a Physical Review Letters paper.

    “Every day was exciting, whether I was reading dozens of papers to better understand the diode phenomena, or operating machinery to fabricate new diodes for study, or engaging in conversations with Ourania, Dr. Hou, and Dr. Moodera about our research.

    “I am profoundly grateful to Dr. Moodera and Dr. Hou for providing me with the opportunity to work on such a fascinating project, and to Ourania for being a great research partner and friend.”

    In addition to Moodera and Hou, corresponding authors of the paper are professors Patrick A. Lee of the MIT Department of Physics and Akashdeep Kamra of Autonomous University of Madrid. Other authors from MIT are Liang Fu and Margarita Davydova of the Department of Physics, and Hang Chi, Alessandro Lodesani, and Yingying Wu, all of the Francis Bitter Magnet Laboratory and the Plasma Science and Fusion Center. Chi is also affiliated with the U.S. Army CCDC Research Laboratory.

    Authors also include Fabrizio Nichele, Markus F. Ritter, and Daniel Z. Haxwell of IBM Research Europe; Stefan Ilić of the Materials Physics Center (CFM-MPC); and F. Sebastian Bergeret of CFM-MPC and the Donostia International Physics Center.

    This work was supported by the Air Force Office of Scientific Research, the Office of Naval Research, the National Science Foundation, and the Army Research Office. Additional funders are the European Research Council, the European Union’s Horizon 2020 Research and Innovation Framework Programme, the Spanish Ministry of Science and Innovation, the Alexander von Humboldt Foundation, and the Department of Energy’s Office of Basic Energy Sciences.

  • A welcome new pipeline for students invested in clean energy

    Akarsh Aurora aspired “to be around people who are actually making the global energy transition happen,” he says. Sam Packman sought to “align his theoretical and computational interests to a clean energy project” with tangible impacts. Lauryn Kortman says she “really liked the idea of an in-depth research experience focused on an amazing energy source.”

    These three MIT students found what they wanted in the Fusion Undergraduate Scholars (FUSars) program launched by the MIT Plasma Science and Fusion Center (PSFC) to make meaningful fusion energy research accessible to undergraduates. Aurora, Kortman, and Packman are members of a cohort of 10 for the program’s inaugural run, which began spring semester 2023.

    FUSars operates like a high-wattage UROP (MIT’s Undergraduate Research Opportunities Program). The program requires a student commitment of 10 to 12 hours weekly on a research project during the course of an academic year, as well as participation in a for-credit seminar providing professional development, communication, and wellness support. Through this class and with the mentorship of graduate students, postdocs, and research scientist advisors, students craft a publication-ready journal submission summarizing their research. Scholars who complete the entire year and submit a manuscript for review will receive double the ordinary UROP stipend — a payment that can reach $9,000.

    “The opportunity just jumped out at me,” says Packman. “It was an offer I couldn’t refuse,” adds Aurora.

    Building a workforce

    “I kept hearing from students wanting to get into fusion, but they were very frustrated because there just wasn’t a pipeline for them to work at the PSFC,” says Michael Short, Class of ’42 Associate Professor of Nuclear Science and Engineering and associate director of the PSFC. The PSFC bustles with research projects run by scientists and postdocs. But since the PSFC isn’t a university department with educational obligations, it does not have the regular machinery in place to integrate undergraduate researchers.

    This poses a problem not just for students but for the field of fusion energy, which holds the prospect of unlimited, carbon-free electricity. There are promising advances afoot: MIT and one of its partners, Commonwealth Fusion Systems, are developing a prototype for a compact commercial fusion energy reactor. The start of a fusion energy industry will require a steady infusion of skilled talent.

    “We have to think about the workforce needs of fusion in the future and how to train that workforce,” says Rachel Shulman, who runs the FUSars program and co-instructs the FUSars class with Short. “Energy education needs to be thinking right now about what’s coming after solar, and that’s fusion.”

    Short, who earned his bachelor’s, master’s, and doctoral degrees at MIT, was himself a UROP beneficiary at the PSFC. As a faculty member, he has become deeply engaged in building transformative research experiences for undergraduates. With FUSars, he hopes to give students a springboard into the field — with an eye to developing a diverse, highly trained, and zealous employee pool for a future fusion industry.

    Taking a deep dive

    Although these are early days for this initial group of FUSars, there is already a shared sense of purpose and enthusiasm. Chosen from 32 applicants in a whirlwind selection process — the program first convened in early February after crafting the experience over Independent Activities Period — the students arrived with detailed research proposals and personal goals.

    Aurora, a first-year majoring in mechanical engineering and artificial intelligence, became fixed on fusion while still in high school. Today he is investigating methods for increasing the availability, known as capacity factor, of fusion reactors. “This is key to the commercialization of fusion energy,” he says.

    Packman, a first-year planning on a math and physics double major, is developing approaches to help simplify the computations involved in designing the complex geometries of solenoid induction heaters in fusion reactors. “This project is more immersive than my last UROP, and requires more time, but I know what I’m doing here and how this fits into the broader goals of fusion science,” he says. “It’s cool that our project is going to lead to a tool that will actually be used.”

    To accommodate the demands of their research projects, Shulman and Short discouraged students from taking on large academic loads.

    Kortman, a junior majoring in materials science and engineering with a concentration in mechanical engineering, was eager to make room in her schedule for her project, which concerns the effects of radiation damage on superconducting magnets. A shorter research experience with the PSFC during the pandemic fired her determination to delve deeper and invest more time in fusion.

    “It is very appealing and motivating to join people who have been working on this problem for decades, just as breakthroughs are coming through,” she says. “What I’m doing feels like it might be directly applicable to the development of an actual fusion reactor.”

    Camaraderie and support

    In the FUSar program, students aim to seize a sizeable stake in a multipronged research enterprise. “Here, if you have any hypotheses, you really get to pursue those because at the end of the day, the paper you write is yours,” says Aurora. “You can take ownership of what sort of discovery you’re making.”

    Enabling students to make the most of their research experiences requires abundant support — and not just for the students. “We have a whole separate set of programming on mentoring the mentors, where we go over topics with postdocs like how to teach someone to write a research paper, rather than write it for them, and how to help a student through difficulties,” Shulman says.

    The weekly student seminar, taught primarily by Short and Shulman, covers pragmatic matters essential to becoming a successful researcher — topics not always addressed directly or in the kind of detail that makes a difference. Topics include how to collaborate with lab mates, deal with a supervisor, find material in the MIT libraries, produce effective and persuasive research abstracts, and take time for self-care.

    Kortman believes camaraderie will help the cohort through an intense year. “This is a tight-knit community that will be great for keeping us all motivated when we run into research issues,” she says. “Meeting weekly to see what other students are able to accomplish will encourage me in my own project.”

    The seminar offerings have already attracted five additional participants outside the FUSars cohort. Adria Peterkin, a second-year graduate student in nuclear science and engineering, is sitting in to solidify her skills in scientific writing.

    “I wanted a structured class to help me get good at abstracts and communicating with different audiences,” says Peterkin, who is investigating radiation’s impact on the molten salt used in fusion and advanced nuclear reactors. “There’s a lot of assumed knowledge coming in as a PhD student, and a program like FUSars is really useful to help level out that playing field, regardless of your background.”

    Fusion research for all

    Short would like FUSars to cast a wide net, capturing the interest of MIT undergraduates no matter their backgrounds or financial means. One way he hopes to achieve this end is with the support of private donors, who make possible premium stipends for fusion scholars.

    “Many of our students are economically disadvantaged, on financial aid or supporting family back home, and need work that pays more than $15 an hour,” he says. This generous stipend may be critical, he says, to “flipping students from something else to fusion.”

    Although this first FUSars class is composed of science and engineering students, Short envisions a cohort eventually drawn from the broad spectrum of MIT disciplines. “Fusion is not a nuclear-focused discipline anymore — it’s no longer just plasma physics and radiation,” he says. “We’re trying to make a power plant now, and it’s an all hands-on-deck kind of thing, involving policy and economics and other subjects.”

    Although many are just getting started on their academic journeys, FUSar students believe this year will give them a strong push toward potential energy careers. “Fusion is the future of the energy transition and how we’re going to defeat climate change,” says Aurora. “I joined the program for a deep dive into the field, to help me decide whether I should invest the rest of my life to it.”

  • 3 Questions: Boosting concrete’s ability to serve as a natural “carbon sink”

    Damian Stefaniuk is a postdoc at the MIT Concrete Sustainability Hub (CSHub). He works with MIT professors Franz-Josef Ulm and Admir Masic of the MIT Department of Civil and Environmental Engineering (CEE) to investigate multifunctional concrete. Here, he provides an overview of carbonation in cement-based products, a brief explanation of why understanding carbonation in the life cycle of cement products is key for assessing their environmental impact, and an update on current research to bolster the process.

    Q: What is carbonation and why is it important for thinking about concrete from a life-cycle perspective?

    A: Carbonation is the reaction between carbon dioxide (CO2) and certain compounds in cement-based products, occurring during their use phase and end of life. It forms calcium carbonate (CaCO3) and has important implications for neutralizing the GHG [greenhouse gas] emissions and achieving carbon neutrality in the life cycle of concrete.
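    A representative reaction — portlandite, Ca(OH)2, is one of several compounds in hydrated cement that carbonate, and is used here only as an illustration — is:

    $$\mathrm{Ca(OH)_2 + CO_2 \longrightarrow CaCO_3 + H_2O}$$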

    Firstly, carbonation causes cement-based products to act as natural carbon sinks, sequestering CO2 from the air and storing it permanently. This helps mitigate the carbon emissions associated with the production of cement, reducing their overall carbon footprint.

    Secondly, carbonation affects concrete properties. Early-stage carbonation may increase the compressive strength of cement-based products, enhancing their durability and structural performance. However, late-stage carbonation can impact corrosion resistance in steel-reinforced concrete due to reduced alkalinity.

    Considering carbonation in the life cycle of cement-based products is crucial for accurately assessing their environmental impact. Understanding and leveraging carbonation can help industry reduce carbon emissions and maximize carbon sequestration potential. Paying close attention to it in the design process aids in creating durable and corrosion-resistant structures, contributing to longevity and overall sustainability.

    Q: What are some ongoing global efforts to force carbonation?

    A: Some ongoing efforts to force carbonation in concrete involve artificially increasing the amount of CO2 gas present during the early-stage hydration of concrete. This process, known as forced carbonation, aims to accelerate the carbonation reaction and its associated benefits.

    Forced carbonation is typically applied to precast concrete elements that are produced in artificially CO2-rich environments. By exposing fresh concrete to higher concentrations of CO2 during curing, the carbonation process can be expedited, resulting in potential improvements in strength, reduced water absorption, improved resistance to chloride permeability, and improved performance during freeze-thaw. At the same time, it can be difficult to quantify how much CO2 is absorbed and released because of the process.

    These efforts to induce early-stage carbonation through forced carbonation represent the industry’s focus on optimizing concrete performance and environmental impacts. By exploring methods to enhance the carbonation process, researchers and practitioners seek to more efficiently harness its benefits, such as increasing strength and sequestering CO2.

    It is important to note that forced carbonation requires careful implementation and monitoring to ensure desired outcomes. The specific procedures and conditions vary based on the application and intended goals, highlighting the need for expertise and controlled environments.

    Overall, ongoing efforts in forced carbonation contribute to the continuous development of concrete technology, aiming to improve its properties and reduce its carbon footprint throughout the life cycle of the material.

    Q: What is chemically-induced pre-cure carbonation, and what implications does it have?

    A: Chemically-induced pre-cure carbonation (CIPCC) is a method developed by the MIT CSHub to mineralize and permanently store CO2 in cement. Unlike traditional forced carbonation methods, CIPCC introduces CO2 into the concrete mix as a solid powder, specifically sodium bicarbonate. This approach addresses some of the limitations of current carbon capture and utilization technologies.

    The implications of CIPCC are significant. Firstly, it offers convenience for cast-in-place applications, making it easier to incorporate CO2 use in concrete projects. Unlike some other approaches, CIPCC allows for precise control over the quantity of CO2 sequestered in the concrete. This ensures accurate carbonation and facilitates better management of the storage process. CIPCC also builds on previous research regarding amorphous hydration phases, providing an additional mechanism for CO2 sequestration in cement-based products. These phases carbonate through CIPCC, contributing to the overall carbon sequestration capacity of the material.

    Furthermore, early-stage pre-cure carbonation shows promise as a pathway for concrete to permanently sequester a controlled and precise quantity of CO2. Our recent paper in PNAS Nexus suggests that it could theoretically offset at least 40 percent of the calcination emissions associated with cement production, when anticipating advances in the lower-emissions production of sodium bicarbonate. We also found that up to 15 percent of cement (by weight) could be substituted with sodium bicarbonate without compromising the mechanical performance of a given mix. Further research is needed to evaluate the long-term effects of this process and to explore the potential life-cycle savings and impacts of carbonation.

    CIPCC offers not only environmental benefits by reducing carbon emissions, but also practical advantages. The early-stage strength increase observed in real-world applications could expedite construction timelines by allowing concrete to reach its full strength faster.

    Overall, CIPCC demonstrates the potential for more efficient and controlled CO2 sequestration in concrete. It represents an important development in concrete sustainability, emphasizing the need for further research and considering the material’s life-cycle impacts.

    This research was carried out by MIT CSHub, which is sponsored by the Concrete Advancement Foundation and the Portland Cement Association.

  • The curse of variety in transportation systems

    Cathy Wu has always delighted in systems that run smoothly. In high school, she designed a project to optimize the best route for getting to class on time. Her research interests and career track are evidence of a propensity for organizing and optimizing, coupled with a strong sense of responsibility to contribute to society instilled by her parents at a young age.

    As an undergraduate at MIT, Wu explored domains like agriculture, energy, and education, eventually homing in on transportation. “Transportation touches each of our lives,” she says. “Every day, we experience the inefficiencies and safety issues as well as the environmental harms associated with our transportation systems. I believe we can and should do better.”

    But doing so is complicated. Consider the long-standing issue of traffic systems control. Wu explains that it is not one problem, but more accurately a family of control problems impacted by variables like time of day, weather, and vehicle type — not to mention the types of sensing and communication technologies used to measure roadway information. Every differentiating factor introduces an exponentially larger set of control problems. There are thousands of control-problem variations and hundreds, if not thousands, of studies and papers dedicated to each problem. Wu refers to the sheer number of variations as the curse of variety — and it is hindering innovation.


    “To prove that a new control strategy can be safely deployed on our streets can take years. As time lags, we lose opportunities to improve safety and equity while mitigating environmental impacts. Accelerating this process has huge potential,” says Wu.  

    Which is why she and her group in the MIT Laboratory for Information and Decision Systems are devising machine learning-based methods to solve not just a single control problem or a single optimization problem, but families of control and optimization problems at scale. “In our case, we’re examining emerging transportation problems that people have spent decades trying to solve with classical approaches. It seems to me that we need a different approach.”

    Optimizing intersections

    Currently, Wu’s largest research endeavor is called Project Greenwave. There are many sectors that directly contribute to climate change, but transportation is responsible for the largest share of greenhouse gas emissions — 29 percent, of which 81 percent is due to land transportation. And while much of the conversation around mitigating environmental impacts related to mobility is focused on electric vehicles (EVs), electrification has its drawbacks. EV fleet turnover is time-consuming (“on the order of decades,” says Wu), and limited global access to the technology presents a significant barrier to widespread adoption.

    Wu’s research, on the other hand, addresses traffic control problems by leveraging deep reinforcement learning. Specifically, she is looking at traffic intersections — and for good reason. In the United States alone, there are more than 300,000 signalized intersections where vehicles must stop or slow down before re-accelerating. And every re-acceleration burns fossil fuels and contributes to greenhouse gas emissions.

    Highlighting the magnitude of the issue, Wu says, “We have done preliminary analysis indicating that up to 15 percent of land transportation CO2 is wasted through energy spent idling and re-accelerating at intersections.”
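
    Combining that estimate with the sector-level shares quoted earlier gives a rough, illustrative sense of scale (the 15 percent figure is treated here as an upper bound):

    ```python
    # Back-of-the-envelope arithmetic using only the percentages quoted in the article.
    transport_share = 0.29      # transportation's share of greenhouse gas emissions
    land_share = 0.81           # land transportation's share of that
    intersection_waste = 0.15   # upper-bound share wasted idling and re-accelerating at intersections

    overall_share = transport_share * land_share * intersection_waste
    print(f"{overall_share:.1%}")  # roughly 3.5% of total emissions, as an upper bound
    ```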

    To date, she and her group have modeled 30,000 different intersections across 10 major metropolitan areas in the United States. That is 30,000 different configurations, roadway topologies (e.g., grade of road or elevation), different weather conditions, and variations in travel demand and fuel mix. Each intersection and its corresponding scenarios represents a unique multi-agent control problem.

    Wu and her team are devising techniques that can solve not just one, but a whole family of problems comprising tens of thousands of scenarios. Put simply, the idea is to coordinate the timing of vehicles so they arrive at intersections when traffic lights are green, thereby eliminating the start-stop-re-accelerate conundrum. Along the way, they are building an ecosystem of tools, datasets, and methods to enable roadway interventions and impact assessments of strategies to significantly reduce carbon-intense urban driving.
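
    As a toy illustration of the coordination idea, and not the team’s actual multi-agent controllers, the sketch below picks a constant advisory speed for a single vehicle approaching a single fixed-time signal so that it arrives during a green window. All names and numbers are hypothetical.

    ```python
    def advisory_speed(dist_m, cycle_s, green_start_s, green_end_s,
                       v_min=5.0, v_max=15.0):
        """Return a speed (m/s) that lands the arrival inside a green window, or None."""
        for k in range(4):  # try the current and next few signal cycles
            window_start = green_start_s + k * cycle_s
            window_end = green_end_s + k * cycle_s
            # Arriving at time t requires speed dist_m / t, so the green window
            # maps to a band of feasible constant speeds.
            fastest = dist_m / window_start if window_start > 0 else float("inf")
            slowest = dist_m / window_end
            lo = max(slowest, v_min)
            hi = min(fastest, v_max)
            if lo <= hi:
                return hi  # fastest feasible speed that still arrives on green
        return None  # no feasible constant speed; the vehicle will have to stop

    # 300 m from a signal with a 90 s cycle whose green phase runs from t=40 s to t=70 s.
    print(advisory_speed(dist_m=300, cycle_s=90, green_start_s=40, green_end_s=70))  # 7.5 m/s
    ```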

    Their collaborator on the project is the Utah Department of Transportation, which Wu says has played an essential role, in part by sharing data and practical knowledge that she and her group otherwise would not have been able to access publicly.

    “I appreciate industry and public sector collaborations,” says Wu. “When it comes to important societal problems, one really needs grounding with practitioners. One needs to be able to hear the perspectives in the field. My interactions with practitioners expand my horizons and help ground my research. You never know when you’ll hear the perspective that is the key to the solution, or perhaps the key to understanding the problem.”

    Finding the best routes

    In a similar vein, she and her research group are tackling large coordination problems. For example, vehicle routing. “Every day, delivery trucks route more than a hundred thousand packages for the city of Boston alone,” says Wu. Accomplishing the task requires, among other things, figuring out which trucks to use, which packages to deliver, and the order in which to deliver them as efficiently as possible. If and when the trucks are electrified, they will need to be charged, adding another wrinkle to the process and further complicating route optimization.

    The vehicle routing problem, and therefore the scope of Wu’s work, extends beyond truck routing for package delivery. Ride-hailing cars may need to pick up objects as well as drop them off; and what if delivery is done by bicycle or drone? In partnership with Amazon, for example, Wu and her team addressed routing and path planning for hundreds of robots (up to 800) in their warehouses.

    Every variation requires custom heuristics that are expensive and time-consuming to develop. Again, this is really a family of problems — each one complicated, time-consuming, and currently unsolved by classical techniques — and they are all variations of a central routing problem. The curse of variety meets operations and logistics.

    By combining classical approaches with modern deep-learning methods, Wu is looking for a way to automatically identify heuristics that can effectively solve all of these vehicle routing problems. So far, her approach has proved successful.

    “We’ve contributed hybrid learning approaches that take existing solution methods for small problems and incorporate them into our learning framework to scale and accelerate that existing solver for large problems. And we’re able to do this in a way that can automatically identify heuristics for specialized variations of the vehicle routing problem.” The next step, says Wu, is applying a similar approach to multi-agent robotics problems in automated warehouses.
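
    The following sketch is not the group’s algorithm; it only illustrates the general pattern of decomposing a large routing instance and reusing a classical heuristic on each small piece, with a simple angular split standing in for a learned assignment.

    ```python
    # Illustrative "divide and route" sketch: split a large instance into small
    # pieces, then apply a classical heuristic to each piece.
    import math
    import random

    def nearest_neighbor_route(depot, stops):
        """Classical heuristic for a small instance: always visit the closest unvisited stop."""
        route, current, remaining = [depot], depot, list(stops)
        while remaining:
            nxt = min(remaining, key=lambda p: math.dist(current, p))
            remaining.remove(nxt)
            route.append(nxt)
            current = nxt
        return route + [depot]

    def route_large_instance(depot, stops, n_trucks):
        """Split stops among trucks by angle around the depot (a stand-in for a
        learned assignment), then solve each small subproblem classically."""
        ordered = sorted(stops, key=lambda p: math.atan2(p[1] - depot[1], p[0] - depot[0]))
        chunk = math.ceil(len(ordered) / n_trucks)
        return [nearest_neighbor_route(depot, ordered[i:i + chunk])
                for i in range(0, len(ordered), chunk)]

    random.seed(0)
    stops = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(40)]
    routes = route_large_instance(depot=(0.0, 0.0), stops=stops, n_trucks=4)
    print([len(r) for r in routes])  # stops per route, plus the depot at both ends
    ```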

    Wu and her group are making big strides, in part due to their dedication to use-inspired basic research. Rather than applying known methods or science to a problem, they develop new methods, new science, to address problems. The methods she and her team employ are necessitated by societal problems with practical implications. The inspiration for the approach? None other than Louis Pasteur, whose style of research later gave its name to “Pasteur’s Quadrant,” the term Donald Stokes coined for use-inspired basic science. Anthrax was decimating the sheep population, and Pasteur wanted to better understand why and what could be done about it. The tools of the time could not solve the problem, so he invented a new field, microbiology, not out of curiosity but out of necessity.

    MIT engineers create an energy-storing supercapacitor from ancient materials

    Two of humanity’s most ubiquitous historical materials, cement and carbon black (which resembles very fine charcoal), may form the basis for a novel, low-cost energy storage system, according to a new study. The technology could facilitate the use of renewable energy sources such as solar, wind, and tidal power by allowing energy networks to remain stable despite fluctuations in renewable energy supply.

    The two materials, the researchers found, can be combined with water to make a supercapacitor, an alternative to batteries, that could store electrical energy. As an example, the MIT researchers who developed the system say that their supercapacitor could eventually be incorporated into the concrete foundation of a house, where it could store a full day’s worth of energy while adding little (or nothing) to the cost of the foundation and still providing the needed structural strength. The researchers also envision a concrete roadway that could provide contactless recharging for electric cars as they travel over that road.

    The simple but innovative technology is described this week in the journal PNAS, in a paper by MIT professors Franz-Josef Ulm, Admir Masic, and Yang Shao-Horn, and four others at MIT and at the Wyss Institute for Biologically Inspired Engineering.

    Capacitors are in principle very simple devices, consisting of two electrically conductive plates immersed in an electrolyte and separated by a membrane. When a voltage is applied across the capacitor, positively charged ions from the electrolyte accumulate on the negatively charged plate, while the positively charged plate accumulates negatively charged ions. Since the membrane in between the plates blocks charged ions from migrating across, this separation of charges creates an electric field between the plates, and the capacitor becomes charged. The two plates can maintain this pair of charges for a long time and then deliver them very quickly when needed. Supercapacitors are simply capacitors that can store exceptionally large charges.

    The amount of energy a capacitor can store depends on the total surface area of its conductive plates. The key to the new supercapacitors developed by this team is a method of producing a cement-based material with an extremely high internal surface area, created by a dense, interconnected network of conductive material within its bulk volume. The researchers achieved this by introducing highly conductive carbon black into a concrete mixture along with cement powder and water, and letting it cure. As the water reacts with the cement, it naturally forms a branching network of openings within the structure, and the carbon migrates into these spaces to make wire-like structures within the hardened cement. These wires have a fractal-like architecture, with larger branches sprouting smaller branches, and those sprouting even smaller branchlets, and so on, ending up with an extremely large surface area within the confines of a relatively small volume. The material is then soaked in a standard electrolyte, such as potassium chloride, a kind of salt, which provides the charged particles that accumulate on the carbon structures. Two electrodes made of this material, separated by a thin space or an insulating layer, form a very powerful supercapacitor, the researchers found.
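
    As a rough illustration of why surface area is the lever, the sketch below uses a simplified parallel-plate model with made-up numbers that are not taken from the paper: capacitance, and hence stored energy at a fixed voltage, scales in direct proportion to plate area.

    ```python
    # Simplified parallel-plate model; all values are illustrative assumptions.
    EPS0 = 8.854e-12  # vacuum permittivity, F/m

    def stored_energy(area_m2, gap_m, rel_permittivity, voltage):
        capacitance = rel_permittivity * EPS0 * area_m2 / gap_m  # C = eps_r * eps_0 * A / d
        return 0.5 * capacitance * voltage ** 2                  # E = 1/2 * C * V^2

    # Doubling the effective surface area doubles the stored energy at the same voltage.
    print(stored_energy(area_m2=1.0, gap_m=1e-9, rel_permittivity=80, voltage=1.0))
    print(stored_energy(area_m2=2.0, gap_m=1e-9, rel_permittivity=80, voltage=1.0))
    ```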

    The two plates of the capacitor function just like the two poles of a rechargeable battery of equivalent voltage: When connected to a source of electricity, as with a battery, energy gets stored in the plates, and then when connected to a load, the electrical current flows back out to provide power.

    “The material is fascinating,” Masic says, “because you have the most-used manmade material in the world, cement, that is combined with carbon black, that is a well-known historical material — the Dead Sea Scrolls were written with it. You have these at least two-millennia-old materials that when you combine them in a specific manner you come up with a conductive nanocomposite, and that’s when things get really interesting.”

    As the mixture sets and cures, he says, “The water is systematically consumed through cement hydration reactions, and this hydration fundamentally affects nanoparticles of carbon because they are hydrophobic (water repelling).” As the mixture evolves, “the carbon black is self-assembling into a connected conductive wire,” he says. The process is easily reproducible, with materials that are inexpensive and readily available anywhere in the world. And the amount of carbon needed is very small — as little as 3 percent by volume of the mix — to achieve a percolated carbon network, Masic says.

    Supercapacitors made of this material have great potential to aid in the world’s transition to renewable energy, Ulm says. The principal sources of emissions-free energy, wind, solar, and tidal power, all produce their output at variable times that often do not correspond to the peaks in electricity usage, so ways of storing that power are essential. “There is a huge need for big energy storage,” he says, and existing batteries are too expensive and mostly rely on materials such as lithium, whose supply is limited, so cheaper alternatives are badly needed. “That’s where our technology is extremely promising, because cement is ubiquitous,” Ulm says.

    The team calculated that a block of nanocarbon-black-doped concrete that is 45 cubic meters (or yards) in size — equivalent to a cube about 3.5 meters across — would have enough capacity to store about 10 kilowatt-hours of energy, which is considered the average daily electricity usage for a household. Since the concrete would retain its strength, a house with a foundation made of this material could store a day’s worth of energy produced by solar panels or windmills and allow it to be used whenever it’s needed. And, supercapacitors can be charged and discharged much more rapidly than batteries.
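
    A quick back-of-the-envelope check of those figures, using only the numbers quoted above:

    ```python
    # Illustrative arithmetic based on the figures quoted in the article.
    side_m = 3.5
    volume_m3 = side_m ** 3                      # 3.5**3 ~ 42.9 m^3, consistent with "about 45"
    daily_household_kwh = 10.0
    energy_per_m3 = daily_household_kwh / 45.0   # ~ 0.22 kWh of storage per cubic meter of concrete
    print(round(volume_m3, 1), round(energy_per_m3, 2))
    ```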

    After a series of tests used to determine the most effective ratios of cement, carbon black, and water, the team demonstrated the process by making small supercapacitors about the size of button-cell batteries, roughly 1 centimeter across and 1 millimeter thick, that could each be charged to 1 volt, comparable to a 1-volt battery. They then connected three of these to demonstrate their ability to light up a 3-volt light-emitting diode (LED). Having proved the principle, they now plan to build a series of larger versions, starting with ones about the size of a typical 12-volt car battery, then working up to a 45-cubic-meter version to demonstrate its ability to store a house’s worth of power.

    There is a tradeoff between the storage capacity of the material and its structural strength, they found. Adding more carbon black lets the resulting supercapacitor store more energy, but it makes the concrete slightly weaker; that variant could be useful for applications where the concrete is not playing a structural role or where the full strength potential of concrete is not required. For applications such as a foundation, or structural elements of the base of a wind turbine, the “sweet spot” is around 10 percent carbon black in the mix, they found.

    Another potential application for carbon-cement supercapacitors is for building concrete roadways that could store energy produced by solar panels alongside the road and then deliver that energy to electric vehicles traveling along the road using the same kind of technology used for wirelessly rechargeable phones. A related type of car-recharging system is already being developed by companies in Germany and the Netherlands, but using standard batteries for storage.

    Initial uses of the technology might be for isolated homes or buildings or shelters far from grid power, which could be powered by solar panels attached to the cement supercapacitors, the researchers say.

    Ulm says that the system is very scalable, as the energy-storage capacity is a direct function of the volume of the electrodes. “You can go from 1-millimeter-thick electrodes to 1-meter-thick electrodes, and by doing so basically you can scale the energy storage capacity from lighting an LED for a few seconds, to powering a whole house,” he says.

    Depending on the properties desired for a given application, the system could be tuned by adjusting the mixture. For a vehicle-charging road, very fast charging and discharging rates would be needed, while for powering a home “you have the whole day to charge it up,” so slower-charging material could be used, Ulm says.

    “So, it’s really a multifunctional material,” he adds. Besides its ability to store energy in the form of supercapacitors, the same kind of concrete mixture can be used as a heating system, by simply applying electricity to the carbon-laced concrete.

    Ulm sees this as “a new way of looking toward the future of concrete as part of the energy transition.”

    The research team also included postdocs Nicolas Chanut and Damian Stefaniuk at MIT’s Department of Civil and Environmental Engineering, James Weaver at the Wyss Institute, and Yunguang Zhu in MIT’s Department of Mechanical Engineering. The work was supported by the MIT Concrete Sustainability Hub, with sponsorship by the Concrete Advancement Foundation.