More stories

  • Bubble findings could unlock better electrode and electrolyzer designs

    Industrial electrochemical processes that use electrodes to produce fuels and chemical products are hampered by the formation of bubbles that block parts of the electrode surface, reducing the area available for the active reaction. Such blockage reduces the performance of the electrodes by anywhere from 10 to 25 percent.

    But new research reveals a decades-long misunderstanding about the extent of that interference. The findings show exactly how the blocking effect works and could lead to new ways of designing electrode surfaces to minimize inefficiencies in these widely used electrochemical processes.

    It has long been assumed that the entire area of the electrode shadowed by each bubble would be effectively inactivated. But it turns out that a much smaller area — roughly the area where the bubble actually contacts the surface — is blocked from its electrochemical activity. The new insights could lead directly to new ways of patterning the surfaces to minimize the contact area and improve overall efficiency.

    The findings are reported today in the journal Nanoscale, in a paper by recent MIT graduate Jack Lake PhD ’23, graduate student Simon Rufer, professor of mechanical engineering Kripa Varanasi, research scientist Ben Blaiszik, and six others at the University of Chicago and Argonne National Laboratory. The team has made available an open-source, AI-based software tool that engineers and scientists can now use to automatically recognize and quantify bubbles formed on a given surface, as a first step toward controlling the electrode material’s properties.
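
    To get a feel for how different the two pictures are, here is a toy calculation (ours, not the paper’s) comparing the electrode area blocked under the old “full shadow” assumption with the area blocked under the contact-patch picture, for an assumed bubble size and contact-patch angle:

    ```python
    # Toy comparison of the two bubble-blocking pictures described above.
    # All numbers are illustrative assumptions, not values from the Nanoscale paper:
    # each bubble is treated as a sphere of radius R resting on a small circular
    # contact patch that subtends a half-angle phi at the bubble's center.
    import math

    def blocked_fractions(radii_um, patch_half_angle_deg, electrode_area_um2):
        """Return (shadow-model fraction, contact-patch-model fraction) of blocked area."""
        phi = math.radians(patch_half_angle_deg)
        shadow = sum(math.pi * r**2 for r in radii_um)                     # full projected area
        contact = sum(math.pi * (r * math.sin(phi))**2 for r in radii_um)  # contact patch only
        return shadow / electrode_area_um2, contact / electrode_area_um2

    # Ten 50-micron bubbles on a 1 mm x 1 mm electrode, 20-degree contact patches.
    shadow_frac, contact_frac = blocked_fractions([50.0] * 10, 20.0, 1000.0 * 1000.0)
    print(f"'full shadow' model: {shadow_frac:.1%} of the electrode blocked")
    print(f"contact-patch model: {contact_frac:.2%} blocked")
    ```

    Under these assumptions the contact-patch picture blocks roughly a tenth as much area as the shadow picture, which is why the distinction matters so much for electrode design.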

    Gas-evolving electrodes, often with catalytic surfaces that promote chemical reactions, are used in a wide variety of processes, including the production of “green” hydrogen without the use of fossil fuels, carbon-capture processes that can reduce greenhouse gas emissions, aluminum production, and the chlor-alkali process that is used to make widely used chemical products.

    These are very widespread processes. The chlor-alkali process alone accounts for 2 percent of all U.S. electricity usage; aluminum production accounts for 3 percent of global electricity; and both carbon capture and hydrogen production are likely to grow rapidly in coming years as the world strives to meet greenhouse-gas reduction targets. So, the new findings could make a real difference, Varanasi says.

    “Our work demonstrates that engineering the contact and growth of bubbles on electrodes can have dramatic effects” on how bubbles form and how they leave the surface, he says. “The knowledge that the area under bubbles can be significantly active ushers in a new set of design rules for high-performance electrodes to avoid the deleterious effects of bubbles.”

    “The broader literature built over the last couple of decades has suggested that not only that small area of contact but the entire area under the bubble is passivated,” Rufer says. The new study reveals “a significant difference between the two models because it changes how you would develop and design an electrode to minimize these losses.”

    To test and demonstrate the implications of this effect, the team produced different versions of electrode surfaces with patterns of dots that nucleated and trapped bubbles at different sizes and spacings. They were able to show that surfaces with widely spaced dots promoted large bubble sizes but only tiny areas of surface contact, which helped to make clear the difference between the expected and actual effects of bubble coverage.

    Developing the software to detect and quantify bubble formation was necessary for the team’s analysis, Rufer explains. “We wanted to collect a lot of data and look at a lot of different electrodes and different reactions and different bubbles, and they all look slightly different,” he says. Creating a program that could deal with different materials and different lighting and reliably identify and track the bubbles was a tricky process, and machine learning was key to making it work, he says.

    Using that tool, he says, they were able to collect “really significant amounts of data about the bubbles on a surface, where they are, how big they are, how fast they’re growing, all these different things.” The tool is now freely available for anyone to use via the GitHub repository.

    By using that tool to correlate the visual measures of bubble formation and evolution with electrical measurements of the electrode’s performance, the researchers were able to disprove the accepted theory and to show that only the area of direct contact is affected. Videos further proved the point, revealing new bubbles actively evolving directly under parts of a larger bubble.

    The researchers developed a very general methodology that can be applied to characterize and understand the impact of bubbles on any electrode or catalyst surface. They were able to quantify the bubble passivation effects in a new performance metric they call BECSA (bubble-induced electrochemically active surface area), as opposed to the ECSA (electrochemically active surface area) that is used in the field.
    “The BECSA metric was a concept we defined in an earlier study but did not have an effective method to estimate until this work,” says Varanasi.

    The knowledge that the area under bubbles can be significantly active ushers in a new set of design rules for high-performance electrodes. This means that electrode designers should seek to minimize bubble contact area rather than simply bubble coverage, which can be achieved by controlling the morphology and chemistry of the electrodes. Surfaces engineered to control bubbles can not only improve the overall efficiency of the processes and thus reduce energy use, but also save on upfront materials costs. Many of these gas-evolving electrodes are coated with catalysts made of expensive metals like platinum or iridium, and the findings from this work can be used to engineer electrodes to reduce material wasted by reaction-blocking bubbles.

    Varanasi says that “the insights from this work could inspire new electrode architectures that not only reduce the usage of precious materials, but also improve the overall electrolyzer performance,” both of which would provide large-scale environmental benefits.

    The research team included Jim James, Nathan Pruyne, Aristana Scourtas, Marcus Schwarting, Aadit Ambalkar, Ian Foster, and Ben Blaiszik at the University of Chicago and Argonne National Laboratory. The work was supported by the U.S. Department of Energy under the ARPA-E program.

  • Applying risk and reliability analysis across industries

    On Feb. 1, 2003, the space shuttle Columbia disintegrated as it returned to Earth, killing all seven astronauts on board. The tragic incident compelled NASA to step up its risk assessments and safety protocols. They knew whom to call: Curtis Smith PhD ’02, who is now the KEPCO Professor of the Practice of Nuclear Science and Engineering at MIT.

    The nuclear community has always been a leader in probabilistic risk analysis, and Smith’s work in risk-related research had made him an established expert in the field. When NASA came knocking, Smith had been working for the Nuclear Regulatory Commission (NRC) at the Idaho National Laboratory (INL). He pivoted quickly. For the next decade, Smith worked with NASA’s Office of Safety and Mission Assurance supporting its increased use of risk analysis. It was a software tool that Smith helped develop, SAPHIRE, that NASA would adopt to bolster its own risk analysis program.

    At MIT, Smith’s focus is on both sides of system operation: risk and reliability. A research project he has proposed involves evaluating the reliability of 3D-printed components and parts for nuclear reactors.

    Growing up in Idaho

    MIT is a long way from where Smith grew up on the Shoshone-Bannock Native American reservation in Fort Hall, Idaho. His father worked at a chemical manufacturing plant, while his mother and grandmother operated a small restaurant on the reservation.

    Southeast Idaho had a significant population of migrant workers, and Smith grew up with a diverse group of friends, mostly Native American and Hispanic. “It was a largely positive time and set a worldview for me in many wonderful ways,” Smith remembers. When he was a junior in high school, the family moved to Pingree, Idaho, a small town of barely 500. Smith attended Snake River High, a regional school, and remembered the deep impact his teachers had. “I learned a lot in grade school and had great teachers, so my love for education probably started there. I tried to emulate my teachers,” Smith says.

    Smith went to Idaho State University in Pocatello for college, a 45-minute drive from his family. Drawn to science, he decided he wanted to study a subject that would benefit humanity the most: nuclear engineering. Fortunately, Idaho State has a strong nuclear engineering program. Smith completed a master’s degree in the same field at ISU while working for the Federal Bureau of Investigation in the security department during the swing shift — 5 p.m. to 1 a.m. — at the FBI offices in Pocatello. “It was a perfect job while attending grad school,” Smith says.

    His KEPCO Professor of the Practice appointment is the second stint for Smith at MIT: He completed his PhD in the Department of Nuclear Science and Engineering (NSE) under the advisement of Professor George Apostolakis in 2002.

    A career in risk analysis and management

    After a doctorate at MIT, Smith returned to Idaho, conducting research in risk analysis for the NRC. He also taught technical courses and developed risk analysis software. “We did a whole host of work that supported the current fleet of nuclear reactors that we have,” Smith says.

    He was 10 years into his career at INL when NASA recruited him, leaning on his expertise in risk analysis and its translation to space missions. “I didn’t really have a background in aerospace, but I was able to bring all the engineering I knew, conducting risk analysis for nuclear missions.
    It was really exciting and I learned a lot about aerospace,” Smith says.

    Risk analysis uses statistics and data to answer complex questions involving safety. Among his projects: analyzing the risk involved in a Mars rover mission with a radioisotope power source for the rover. Even if the necessary plutonium is encased in really strong material, calculations for risk have to factor in all eventualities, including the rocket blowing up.

    When the Fukushima incident happened in 2011, the Department of Energy (DoE) became more supportive of safety and risk analysis research. Smith found himself in the center of the action again, supporting large DoE research programs. He then moved to become the director of the Nuclear Safety and Regulatory Research Division at the INL. Smith found he loved the role, mentoring and nurturing the careers of a diverse set of scientists. “It turned out to be much more rewarding than I had expected,” Smith says. Under his leadership, the division grew from 45 to almost 90 research staff and won multiple national awards.

    Return to MIT

    MIT NSE came calling in 2022, looking to fill the position of professor of the practice, an offer Smith couldn’t refuse. The department was looking to bulk up its risk and reliability offerings, and Smith was a great fit. The DoE division he had been supervising had grown enough for Smith to seek out something new.

    “Just getting back to Boston is exciting,” Smith says. The last go-around involved bringing the family to the city and included a lot of sleepless nights. Smith’s wife, Jacquie, is also excited about being closer to the New England fan base. The couple has invested in season tickets for the Patriots and looks to attend as many sporting events as possible.

    Smith is most excited about adding to the risk and reliability offerings at MIT at a time when the subject has become especially important for nuclear power. “I’m grateful for the opportunity to bring my knowledge and expertise from the last 30 years to the field,” he says. Being a professor of the practice of NSE carries with it a responsibility to unite theory and practice, something Smith is especially good at. “We always have to answer the question of, ‘How do I take the research and make that practical,’ especially for something important like nuclear power, because we need much more of these ideas in industry,” he says.

    He is particularly excited about developing the next generation of nuclear scientists. “Having the ability to do this at a place like MIT is especially fulfilling and something I have been desiring my whole career,” Smith says.

  • Affordable high-tech windows for comfort and energy savings

    Imagine if the windows of your home didn’t transmit heat. They’d keep the heat indoors in winter and outdoors on a hot summer’s day. Your heating and cooling bills would go down; your energy consumption and carbon emissions would drop; and you’d still be comfortable all year ’round.

    AeroShield, a startup spun out of MIT, is poised to start manufacturing such windows. Building operations make up 36 percent of global carbon dioxide emissions, and today’s windows are a major contributor to energy inefficiency in buildings. To improve building efficiency, AeroShield has developed a window technology that promises to reduce heat loss by up to 65 percent, significantly reducing energy use and carbon emissions in buildings, and the company just announced the opening of a new facility to manufacture its breakthrough energy-efficient windows.

    “Our mission is to decarbonize the built environment,” says Elise Strobach SM ’17, PhD ’20, co-founder and CEO of AeroShield. “The availability of affordable, thermally insulating windows will help us achieve that goal while also reducing homeowners’ heating and cooling bills.” According to the U.S. Department of Energy, for most homeowners, 30 percent of that bill results from window inefficiencies.

    Technology development at MIT

    Research on AeroShield’s window technology began a decade ago in the MIT lab of Evelyn Wang, Ford Professor of Engineering, now on leave to serve as director of the Advanced Research Projects Agency-Energy (ARPA-E). In late 2014, the MIT team received funding from ARPA-E, and other sponsors followed, including the MIT Energy Initiative through the MIT Tata Center for Technology and Design in 2016.

    The work focused on aerogels, remarkable materials that are ultra-porous, lighter than a marshmallow, strong enough to support a brick, and an unparalleled barrier to heat flow. Aerogels were invented in the 1930s and used by NASA and others as thermal insulation. The team at MIT saw the potential for incorporating aerogel sheets into windows to keep heat from escaping or entering buildings. But there was one problem: Nobody had been able to make aerogels transparent.

    An aerogel is made of transparent, loosely connected nanoscale silica particles and is 95 percent air. But an aerogel sheet isn’t transparent because light traveling through it gets scattered by the silica particles.

    After five years of theoretical and experimental work, the MIT team determined that the key to transparency was having the silica particles both small and uniform in size. This allows light to pass directly through, so the aerogel becomes transparent. Indeed, as long as the particle size is small and uniform, increasing the thickness of an aerogel sheet to achieve greater thermal insulation won’t make it less clear.

    Teams in the MIT lab looked at various applications for their super-insulating, transparent aerogels. Some focused on improving solar thermal collectors by making the systems more efficient and less expensive. But to Strobach, increasing the thermal efficiency of windows looked especially promising and potentially significant as a means of reducing climate change.

    The researchers determined that aerogel sheets could be inserted into the gap in double-pane windows, making them more than twice as insulating. The windows could then be manufactured on existing production lines with minor changes, and the resulting windows would be affordable and as wide-ranging in style as the window options available today. Best of all, once purchased and installed, the windows would reduce electricity bills, energy use, and carbon emissions.
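
    As a rough sense of scale, here is a back-of-the-envelope comparison using the steady-state conduction relation Q = U × A × ΔT. The U-values, glazing area, and season length below are assumptions for illustration, not AeroShield specifications:

    ```python
    # Back-of-the-envelope window heat-loss comparison (illustrative assumptions only;
    # U-values here are typical textbook figures, not AeroShield specifications).
    def seasonal_heat_loss_kwh(u_value_w_per_m2k, area_m2, delta_t_k, hours):
        """Steady-state conductive loss Q = U * A * dT, integrated over a heating season."""
        return u_value_w_per_m2k * area_m2 * delta_t_k * hours / 1000.0

    glazing_area = 20.0        # m^2 of windows in a small house (assumption)
    avg_delta_t = 20.0         # average indoor-outdoor temperature difference, K (assumption)
    season_hours = 180 * 24    # a six-month heating season

    double_pane = seasonal_heat_loss_kwh(2.8, glazing_area, avg_delta_t, season_hours)
    aerogel_pane = seasonal_heat_loss_kwh(1.0, glazing_area, avg_delta_t, season_hours)
    print(f"double-pane: {double_pane:.0f} kWh per season")
    print(f"aerogel-filled: {aerogel_pane:.0f} kWh per season")
    print(f"reduction: {1 - aerogel_pane / double_pane:.0%}")
    ```

    With these assumed numbers the reduction works out to roughly 64 percent, in line with the up-to-65-percent figure cited above.
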
    The impact on energy use in buildings could be considerable. “If we only consider winter, windows in the United States lose enough energy to power over 50 million homes,” says Strobach. “That wasted energy generates about 350 million tons of carbon dioxide — more than is emitted by 76 million cars.” Super-insulating windows could help home and building owners reduce carbon dioxide emissions by gigatons while saving billions in heating and cooling costs.

    The AeroShield story

    In 2019, Strobach and her MIT colleagues — Aaron Baskerville-Bridges MBA ’20, SM ’20 and Kyle Wilke PhD ’19 — co-founded AeroShield to further develop and commercialize their aerogel-based technology for windows and other applications. And in the subsequent five years, their hard work has attracted attention, recently leading to two major accomplishments.

    In spring 2024, the company announced the opening of its new pilot manufacturing facility in Waltham, Massachusetts, where the team will be producing, testing, and certifying their first full-size windows and patio doors for initial product launch. The 12,000-square-foot facility will significantly expand the company’s capabilities, with cutting-edge aerogel R&D labs, manufacturing equipment, assembly lines, and testing equipment. Says Strobach, “Our pilot facility will supply window and door manufacturers as we launch our first products and will also serve as our R&D headquarters as we develop the next generation of energy-efficient products using transparent aerogels.”

    Also in spring 2024, AeroShield received a $14.5 million award from ARPA-E’s “Seeding Critical Advances for Leading Energy technologies with Untapped Potential” (SCALEUP) program, which provides new funding to previous ARPA-E awardees that have “demonstrated a viable path to market.” That funding will enable the company to expand its production capacity to tens of thousands, or even hundreds of thousands, of units per year.

    Strobach also cites two less-obvious benefits of the SCALEUP award.

    First, the funding is enabling the company to move more quickly on the scale-up phase of their technology development. “We know from our fundamental studies and lab experiments that we can make large-area aerogel sheets that could go in an entry or patio door,” says Strobach. “The SCALEUP award allows us to go straight for that vision. We don’t have to do all the incremental sizes of aerogels to prove that we can make a big one. The award provides capital for us to buy the big equipment to make the big aerogel.”

    Second, the SCALEUP award confirms the viability of the company to other potential investors and collaborators. Indeed, AeroShield recently announced $5 million of additional funding from existing investors Massachusetts Clean Energy Center and MassVentures, as well as new investor MassMutual Ventures. Strobach notes that the company now has investor, engineering, and customer partners.

    She stresses the importance of partners in achieving AeroShield’s mission. “We know that what we’ve got from a fundamental perspective can change the industry,” she says. “Now we want to go out and do it. With the right partners and at the right pace, we may actually be able to increase the energy efficiency of our buildings early enough to help make a real dent in climate change.”

  • Study of disordered rock salts leads to battery breakthrough

    For the past decade, disordered rock salt has been studied as a potential breakthrough cathode material for use in lithium-ion batteries and a key to creating low-cost, high-energy storage for everything from cell phones to electric vehicles to renewable energy storage.

    A new MIT study is making sure the material fulfills that promise.

    Led by Ju Li, the Tokyo Electric Power Company Professor in Nuclear Engineering and professor of materials science and engineering, a team of researchers describes a new class of partially disordered rock salt cathode, integrated with polyanions — dubbed disordered rock salt-polyanionic spinel, or DRXPS — that delivers high energy density at high voltages with significantly improved cycling stability.

    “There is typically a trade-off in cathode materials between energy density and cycling stability … and with this work we aim to push the envelope by designing new cathode chemistries,” says Yimeng Huang, a postdoc in the Department of Nuclear Science and Engineering and first author of a paper describing the work published today in Nature Energy. “(This) material family has high energy density and good cycling stability because it integrates two major types of cathode materials, rock salt and polyanionic olivine, so it has the benefits of both.”

    Importantly, Li adds, the new material family is primarily composed of manganese, an earth-abundant element that is significantly less expensive than elements like nickel and cobalt, which are typically used in cathodes today.

    “Manganese is at least five times less expensive than nickel, and about 30 times less expensive than cobalt,” Li says. “Manganese is also one of the keys to achieving higher energy densities, so having that material be much more earth-abundant is a tremendous advantage.”

    A possible path to renewable energy infrastructure

    That advantage will be particularly critical, Li and his co-authors wrote, as the world looks to build the renewable energy infrastructure needed for a low- or no-carbon future.

    Batteries are a particularly important part of that picture, not only for their potential to decarbonize transportation with electric cars, buses, and trucks, but also because they will be essential to addressing the intermittency issues of wind and solar power by storing excess energy, then feeding it back into the grid at night or on calm days, when renewable generation drops.

    Given the high cost and relative rarity of materials like cobalt and nickel, they wrote, efforts to rapidly scale up electric storage capacity would likely lead to extreme cost spikes and potentially significant materials shortages.

    “If we want to have true electrification of energy generation, transportation, and more, we need earth-abundant batteries to store intermittent photovoltaic and wind power,” Li says. “I think this is one of the steps toward that dream.”

    That sentiment was shared by Gerbrand Ceder, the Samsung Distinguished Chair in Nanoscience and Nanotechnology Research and a professor of materials science and engineering at the University of California at Berkeley. “Lithium-ion batteries are a critical part of the clean energy transition,” Ceder says.
    “Their continued growth and price decrease depends on the development of inexpensive, high-performance cathode materials made from earth-abundant materials, as presented in this work.”

    Overcoming obstacles in existing materials

    The new study addresses one of the major challenges facing disordered rock salt cathodes — oxygen mobility.

    While the materials have long been recognized for offering very high capacity — as much as 350 milliampere-hours per gram — compared to traditional cathode materials, which typically have capacities of between 190 and 200 milliampere-hours per gram, they are not very stable.

    The high capacity comes in part from oxygen redox, which is activated when the cathode is charged to high voltages. But when that happens, oxygen becomes mobile, leading to reactions with the electrolyte and degradation of the material, eventually leaving it effectively useless after prolonged cycling.

    To overcome those challenges, Huang added another element — phosphorus — that essentially acts like a glue, holding the oxygen in place to mitigate degradation.

    “The main innovation here, and the theory behind the design, is that Yimeng added just the right amount of phosphorus, formed so-called polyanions with its neighboring oxygen atoms, into a cation-deficient rock salt structure that can pin them down,” Li explains. “That allows us to basically stop the percolating oxygen transport due to strong covalent bonding between phosphorus and oxygen … meaning we can both utilize the oxygen-contributed capacity, but also have good stability as well.”

    That ability to charge batteries to higher voltages, Li says, is crucial because it allows for simpler systems to manage the energy they store.

    “You can say the quality of the energy is higher,” he says. “The higher the voltage per cell, then the less you need to connect them in series in the battery pack, and the simpler the battery management system.”

    Pointing the way to future studies

    While the cathode material described in the study could have a transformative impact on lithium-ion battery technology, there are still several avenues for study going forward.

    Among the areas for future study, Huang says, are efforts to explore new ways to fabricate the material, particularly for morphology and scalability considerations.

    “Right now, we are using high-energy ball milling for mechanochemical synthesis, and … the resulting morphology is non-uniform and has small average particle size (about 150 nanometers). This method is also not quite scalable,” he says. “We are trying to achieve a more uniform morphology with larger particle sizes using some alternate synthesis methods, which would allow us to increase the volumetric energy density of the material and may allow us to explore some coating methods … which could further improve the battery performance. The future methods, of course, should be industrially scalable.”

    In addition, he says, the disordered rock salt material by itself is not a particularly good conductor, so significant amounts of carbon — as much as 20 weight percent of the cathode paste — were added to boost its conductivity. If the team can reduce the carbon content in the electrode without sacrificing performance, there will be higher active material content in a battery, leading to an increased practical energy density.
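
    As a rough illustration of that last point (round numbers assumed here, not values from the paper), cutting the conductive-carbon fraction changes the share of the electrode mass that is active material:

    ```python
    # Rough illustration (assumed round numbers, not values from the Nature Energy paper):
    # how cutting conductive-carbon content raises the share of active cathode material.
    def active_material_fraction(carbon_wt_frac, binder_wt_frac=0.05):
        """Electrode assumed to be active material + conductive carbon + binder by weight."""
        return 1.0 - carbon_wt_frac - binder_wt_frac

    with_super_p = active_material_fraction(0.20)    # ~20 wt% conductive carbon
    with_nanotubes = active_material_fraction(0.02)  # ~2 wt% carbon nanotubes
    print(f"active material: {with_super_p:.0%} -> {with_nanotubes:.0%} of electrode mass")
    print(f"relative gain per gram of electrode: {with_nanotubes / with_super_p - 1:.0%}")
    ```

    Under these assumptions, dropping from 20 to 2 weight percent carbon frees up roughly a quarter more active material per gram of electrode.
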
    “In this paper, we just used Super P, a typical conductive carbon consisting of nanospheres, but they’re not very efficient,” Huang says. “We are now exploring using carbon nanotubes, which could reduce the carbon content to just 1 or 2 weight percent, which could allow us to dramatically increase the amount of the active cathode material.”

    Aside from decreasing carbon content, making thick electrodes, he adds, is yet another way to increase the practical energy density of the battery. This is another area of research that the team is working on.

    “This is only the beginning of DRXPS research, since we only explored a few chemistries within its vast compositional space,” he continues. “We can play around with different ratios of lithium, manganese, phosphorus, and oxygen, and with various combinations of other polyanion-forming elements such as boron, silicon, and sulfur.”

    With optimized compositions, more scalable synthesis methods, better morphology that allows for uniform coatings, lower carbon content, and thicker electrodes, he says, the DRXPS cathode family is very promising for applications in electric vehicles and grid storage, and possibly even in consumer electronics, where the volumetric energy density is very important.

    This work was supported with funding from the Honda Research Institute USA Inc. and the Molecular Foundry at Lawrence Berkeley National Laboratory, and used resources of the National Synchrotron Light Source II at Brookhaven National Laboratory and the Advanced Photon Source at Argonne National Laboratory.

  • More durable metals for fusion power reactors

    For many decades, nuclear fusion power has been viewed as the ultimate energy source. A fusion power plant could generate carbon-free energy at a scale needed to address climate change. And it could be fueled by deuterium recovered from an essentially endless source — seawater.

    Decades of work and billions of dollars in research funding have yielded many advances, but challenges remain. To Ju Li, the TEPCO Professor in Nuclear Science and Engineering and a professor of materials science and engineering at MIT, there are still two big challenges. The first is to build a fusion power plant that generates more energy than is put into it; in other words, it produces a net output of power. Researchers worldwide are making progress toward meeting that goal.

    The second challenge that Li cites sounds straightforward: “How do we get the heat out?” But understanding the problem and finding a solution are both far from obvious.

    Research in the MIT Energy Initiative (MITEI) includes development and testing of advanced materials that may help address those challenges, as well as many other challenges of the energy transition. MITEI has multiple corporate members that have been supporting MIT’s efforts to advance technologies required to harness fusion energy.

    The problem: An abundance of helium, a destructive force

    Key to a fusion reactor is a superheated plasma — an ionized gas — that’s reacting inside a vacuum vessel. As light atoms in the plasma combine to form heavier ones, they release fast neutrons with high kinetic energy that shoot through the surrounding vacuum vessel into a coolant. During this process, those fast neutrons gradually lose their energy by causing radiation damage and generating heat. The heat that’s transferred to the coolant is eventually used to raise steam that drives an electricity-generating turbine.

    The problem is finding a material for the vacuum vessel that remains strong enough to keep the reacting plasma and the coolant apart, while allowing the fast neutrons to pass through to the coolant. If one considers only the damage due to neutrons knocking atoms out of position in the metal structure, the vacuum vessel should last a full decade. However, depending on what materials are used in the fabrication of the vacuum vessel, some projections indicate that the vacuum vessel will last only six to 12 months. Why is that? Today’s nuclear fission reactors also generate neutrons, and those reactors last far longer than a year.

    The difference is that fusion neutrons possess much higher kinetic energy than fission neutrons do, and as they penetrate the vacuum vessel walls, some of them interact with the nuclei of atoms in the structural material, giving off particles that rapidly turn into helium atoms. The result is hundreds of times more helium atoms than are present in a fission reactor. Those helium atoms look for somewhere to land — a place with low “embedding energy,” a measure that indicates how much energy it takes for a helium atom to be absorbed. As Li explains, “The helium atoms like to go to places with low helium embedding energy.” And in the metals used in fusion vacuum vessels, there are places with relatively low helium embedding energy — namely, naturally occurring openings called grain boundaries.

    Metals are made up of individual grains inside which atoms are lined up in an orderly fashion. Where the grains come together there are gaps where the atoms don’t line up as well.
    That open space has relatively low helium embedding energy, so the helium atoms congregate there. Worse still, helium atoms have a repellent interaction with other atoms, so the helium atoms basically push open the grain boundary. Over time, the opening grows into a continuous crack, and the vacuum vessel breaks.

    That congregation of helium atoms explains why the structure fails much sooner than expected based just on the number of helium atoms that are present. Li offers an analogy to illustrate. “Babylon is a city of a million people. But the claim is that 100 bad persons can destroy the whole city — if all those bad persons work at the city hall.” The solution? Give those bad persons other, more attractive places to go, ideally in their own villages.

    To Li, the problem and possible solution are the same in a fusion reactor. If many helium atoms go to the grain boundary at once, they can destroy the metal wall. The solution? Add a small amount of a material that has a helium embedding energy even lower than that of the grain boundary. And over the past two years, Li and his team have demonstrated — both theoretically and experimentally — that their diversionary tactic works. By adding nanoscale particles of a carefully selected second material to the metal wall, they’ve found they can keep the helium atoms that form from congregating in the structurally vulnerable grain boundaries in the metal.

    Looking for helium-absorbing compounds

    To test their idea, So Yeon Kim ScD ’23 of the Department of Materials Science and Engineering and Haowei Xu PhD ’23 of the Department of Nuclear Science and Engineering acquired a sample composed of two materials, or “phases,” one with a lower helium embedding energy than the other. They and their collaborators then implanted helium ions into the sample at a temperature similar to that in a fusion reactor and watched as bubbles of helium formed. Transmission electron microscope images confirmed that the helium bubbles occurred predominantly in the phase with the lower helium embedding energy. As Li notes, “All the damage is in that phase — evidence that it protected the phase with the higher embedding energy.”

    Having confirmed their approach, the researchers were ready to search for helium-absorbing compounds that would work well with iron, which is often the principal metal in vacuum vessel walls. “But calculating helium embedding energy for all sorts of different materials would be computationally demanding and expensive,” says Kim. “We wanted to find a metric that is easy to compute and a reliable indicator of helium embedding energy.”

    They found such a metric: the “atomic-scale free volume,” which is basically the maximum size of the internal vacant space available for helium atoms to potentially settle. “This is just the radius of the largest sphere that can fit into a given crystal structure,” explains Kim. “It is a simple calculation.” Examination of a series of possible helium-absorbing ceramic materials confirmed that atomic free volume correlates well with helium embedding energy. Moreover, many of the ceramics they investigated have higher free volume, thus lower embedding energy, than the grain boundaries do.
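
    As an illustration of the “largest sphere that can fit” idea, here is a textbook-style rigid-sphere estimate for body-centered-cubic iron (a simplified stand-in for the screening calculation, not the study’s code):

    ```python
    # Idealized rigid-sphere estimate of the "largest sphere that fits" in BCC iron.
    # This is a textbook-style illustration of the free-volume idea, not the
    # screening method used in the study.
    import math

    a = 2.866                           # BCC iron lattice constant, angstroms (approximate)
    r_atom = math.sqrt(3.0) / 4.0 * a   # atoms touch along the body diagonal

    # Octahedral hole at (1/2, 1/2, 0): nearest atom is the body-center atom, a/2 away.
    r_oct = a / 2.0 - r_atom
    # Tetrahedral hole at (1/2, 1/4, 0): nearest atoms are sqrt(5)/4 * a away.
    r_tet = math.sqrt(5.0) / 4.0 * a - r_atom

    print(f"iron atomic radius: {r_atom:.3f} A")
    print(f"largest sphere in octahedral hole: {r_oct:.3f} A")
    print(f"largest sphere in tetrahedral hole: {r_tet:.3f} A")
    ```

    Both interstitial radii come out far smaller than a helium atom’s van der Waals radius of roughly 1.4 angstroms, which is why helium prefers more open sites such as grain boundaries or, in this work, a deliberately added ceramic phase with larger free volume.
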
    However, in order to identify options for the nuclear fusion application, the screening needed to include some other factors. For example, in addition to the atomic free volume, a good second phase must be mechanically robust (able to sustain a load); it must not get very radioactive with neutron exposure; and it must be compatible — but not too cozy — with the surrounding metal, so it disperses well but does not dissolve into the metal. “We want to disperse the ceramic phase uniformly in the bulk metal to ensure that all grain boundary regions are close to the dispersed ceramic phase so it can provide protection to those regions,” says Li. “The two phases need to coexist, so the ceramic won’t either clump together or totally dissolve in the iron.”

    Using their analytical tools, Kim and Xu examined about 50,000 compounds and identified 750 potential candidates. Of those, a good option for inclusion in a vacuum vessel wall made mainly of iron was iron silicate.

    Experimental testing

    The researchers were ready to examine samples in the lab. To make the composite material for proof-of-concept demonstrations, Kim and collaborators dispersed nanoscale particles of iron silicate into iron and implanted helium into that composite material. She took X-ray diffraction (XRD) images before and after implanting the helium and also computed the XRD patterns. The ratio between the implanted helium and the dispersed iron silicate was carefully controlled to allow a direct comparison between the experimental and computed XRD patterns. The measured XRD intensity changed with the helium implantation exactly as the calculations had predicted. “That agreement confirms that atomic helium is being stored within the bulk lattice of the iron silicate,” says Kim.

    To follow up, Kim directly counted the number of helium bubbles in the composite. In iron samples without the iron silicate added, grain boundaries were flanked by many helium bubbles. In contrast, in the iron samples with the iron silicate ceramic phase added, helium bubbles were spread throughout the material, with many fewer occurring along the grain boundaries. Thus, the iron silicate had provided sites with low helium-embedding energy that lured the helium atoms away from the grain boundaries, protecting those vulnerable openings and preventing cracks from opening up and causing the vacuum vessel to fail catastrophically.

    The researchers conclude that adding just 1 percent (by volume) of iron silicate to the iron walls of the vacuum vessel will cut the number of helium bubbles in half and also reduce their diameter by 20 percent — “and having a lot of small bubbles is OK if they’re not in the grain boundaries,” explains Li.

    Next steps

    Thus far, Li and his team have gone from computational studies of the problem and a possible solution to experimental demonstrations that confirm their approach. And they’re well on their way to commercial fabrication of components. “We’ve made powders that are compatible with existing commercial 3D printers and are preloaded with helium-absorbing ceramics,” says Li. The helium-absorbing nanoparticles are well dispersed and should provide sufficient helium uptake to protect the vulnerable grain boundaries in the structural metals of the vessel walls.
    While Li confirms that there’s more scientific and engineering work to be done, he, along with Alexander O’Brien PhD ’23 of the Department of Nuclear Science and Engineering and Kang Pyo So, a former postdoc in the same department, has already developed a startup company that’s ready to 3D print structural materials that can meet all the challenges faced by the vacuum vessel inside a fusion reactor.

    This research was supported by Eni S.p.A. through the MIT Energy Initiative. Additional support was provided by a Kwajeong Scholarship; the U.S. Department of Energy (DOE) Laboratory Directed Research and Development program at Idaho National Laboratory; U.S. DOE Lawrence Livermore National Laboratory; and the Creative Materials Discovery Program through the National Research Foundation of Korea.

  • MIT engineers design tiny batteries for powering cell-sized robots

    A tiny battery designed by MIT engineers could enable the deployment of cell-sized, autonomous robots for drug delivery within the human body, as well as other applications such as locating leaks in gas pipelines.

    The new battery, which is 0.1 millimeters long and 0.002 millimeters thick — roughly the thickness of a human hair — can capture oxygen from air and use it to oxidize zinc, creating a current with a potential of up to 1 volt. That is enough to power a small circuit, sensor, or actuator, the researchers showed.

    “We think this is going to be very enabling for robotics,” says Michael Strano, the Carbon P. Dubbs Professor of Chemical Engineering at MIT and the senior author of the study. “We’re building robotic functions onto the battery and starting to put these components together into devices.”

    Ge Zhang PhD ’22 and Sungyun Yang, an MIT graduate student, are the lead authors of the paper, which appears in Science Robotics.

    Powered by batteries

    For several years, Strano’s lab has been working on tiny robots that can sense and respond to stimuli in their environment. One of the major challenges in developing such tiny robots is making sure that they have enough power.

    Other researchers have shown that they can power microscale devices using solar power, but the limitation to that approach is that the robots must have a laser or another light source pointed at them at all times. Such devices are known as “marionettes” because they are controlled by an external power source. Putting a power source such as a battery inside these tiny devices could free them to roam much farther.

    “The marionette systems don’t really need a battery because they’re getting all the energy they need from outside,” Strano says. “But if you want a small robot to be able to get into spaces that you couldn’t access otherwise, it needs to have a greater level of autonomy. A battery is essential for something that’s not going to be tethered to the outside world.”

    To create robots that could become more autonomous, Strano’s lab decided to use a type of battery known as a zinc-air battery. These batteries, which have a longer lifespan than many other types of batteries due to their high energy density, are often used in hearing aids.

    The battery that they designed consists of a zinc electrode connected to a platinum electrode, embedded into a strip of a polymer called SU-8, which is commonly used for microelectronics. When these electrodes interact with oxygen molecules from the air, the zinc becomes oxidized and releases electrons that flow to the platinum electrode, creating a current.

    In this study, the researchers showed that this battery could provide enough energy to power an actuator — in this case, a robotic arm that can be raised and lowered. The battery could also power a memristor, an electrical component that can store memories of events by changing its electrical resistance, and a clock circuit, which allows robotic devices to keep track of time.
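
    For a sense of the energy budget involved, here is a back-of-the-envelope estimate. The article gives only the battery’s length and thickness, so the width and the zinc fill fraction below are assumptions, and the result is only an order-of-magnitude figure:

    ```python
    # Rough energy-budget estimate for a zinc-air microbattery of this scale.
    # The article gives only length (0.1 mm) and thickness (0.002 mm); the width
    # and the fraction of the device volume occupied by zinc are assumptions.
    ZN_DENSITY_G_CM3 = 7.14
    ZN_CAPACITY_MAH_G = 820.0      # theoretical capacity of zinc (2 electrons per atom)

    length_cm, width_cm, thickness_cm = 0.01, 0.001, 0.0002   # 0.1 mm x 0.01 mm x 0.002 mm
    zinc_fill = 0.5                                           # assumed fraction of volume that is zinc

    zinc_mass_g = length_cm * width_cm * thickness_cm * zinc_fill * ZN_DENSITY_G_CM3
    charge_c = ZN_CAPACITY_MAH_G * zinc_mass_g * 3.6          # 1 mAh = 3.6 C
    energy_uj = charge_c * 1.0 * 1e6                          # at ~1 V, in microjoules

    print(f"zinc mass: {zinc_mass_g:.2e} g, stored charge: {charge_c:.2e} C")
    print(f"stored energy at ~1 V: {energy_uj:.1f} microjoules")
    ```

    Tens of microjoules is tiny by everyday standards, but at this scale it is enough to run the kinds of low-power circuits, sensors, and actuators described above.
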
    The battery also provides enough power to run two different types of sensors that change their electrical resistance when they encounter chemicals in the environment. One of the sensors is made from atomically thin molybdenum disulfide and the other from carbon nanotubes.

    “We’re making the basic building blocks in order to build up functions at the cellular level,” Strano says.

    Robotic swarms

    In this study, the researchers used a wire to connect their battery to an external device, but in future work they plan to build robots in which the battery is incorporated into a device.

    “This is going to form the core of a lot of our robotic efforts,” Strano says. “You can build a robot around an energy source, sort of like you can build an electric car around the battery.”

    One of those efforts revolves around designing tiny robots that could be injected into the human body, where they could seek out a target site and then release a drug such as insulin. For use in the human body, the researchers envision that the devices would be made of biocompatible materials that would break apart once they were no longer needed.

    The researchers are also working on increasing the voltage of the battery, which may enable additional applications.

    The research was funded by the U.S. Army Research Office, the U.S. Department of Energy, the National Science Foundation, and a MathWorks Engineering Fellowship.

  • H2 underground

    In 1987 in a village in Mali, workers were digging a water well when they felt a rush of air. One of the workers was smoking a cigarette, and the air caught fire, burning with a clear blue flame. The well was capped at the time, but in 2012, it was tapped to provide energy for the village, powering a generator for nine years.

    The fuel source: geologic hydrogen.

    For decades, hydrogen has been discussed as a potentially revolutionary fuel. But efforts to produce “green” hydrogen (splitting water into hydrogen and oxygen using renewable electricity), “grey” hydrogen (making hydrogen from methane and releasing the byproduct carbon dioxide (CO2) into the atmosphere), “brown” hydrogen (produced through the gasification of coal), and “blue” hydrogen (making hydrogen from methane but capturing the CO2) have thus far proven expensive, energy-intensive, or both.

    Enter geologic hydrogen. Also known as “orange,” “gold,” “white,” “natural,” and even “clear” hydrogen, geologic hydrogen is generated by natural geochemical processes in the Earth’s crust. While there is still much to learn, a growing number of researchers and industry leaders are hopeful that it may turn out to be an abundant and affordable resource lying right beneath our feet.

    “There’s a tremendous amount of uncertainty about this,” noted Robert Stoner, the founding director of the MIT Tata Center for Technology and Design, in his opening remarks at the MIT Energy Initiative (MITEI) Spring Symposium. “But the prospect of readily producible clean hydrogen showing up all over the world is a potential near-term game changer.”

    A new hope for hydrogen

    This April, MITEI gathered researchers, industry leaders, and academic experts from around MIT and the world to discuss the challenges and opportunities posed by geologic hydrogen in a daylong symposium entitled “Geologic hydrogen: Are orange and gold the new green?” The field is so new that, until a year ago, the U.S. Department of Energy (DOE)’s website incorrectly claimed that hydrogen only occurs naturally on Earth in compound forms, chemically bonded to other elements.

    “There’s a common misconception that hydrogen doesn’t occur naturally on Earth,” said Geoffrey Ellis, a research geologist with the U.S. Geological Survey. He noted that natural hydrogen production tends to occur in different locations from where oil and natural gas are likely to be discovered, which explains why geologic hydrogen discoveries have been relatively rare, at least until recently.

    “Petroleum exploration is not targeting hydrogen,” Ellis said. “Companies are simply not really looking for it, they’re not interested in it, and oftentimes they don’t measure for it. The energy industry spends billions of dollars every year on exploration with very sophisticated technology, and still they drill dry holes all the time. So I think it’s naive to think that we would suddenly be finding hydrogen all the time when we’re not looking for it.”

    In fact, the number of researchers and startup energy companies with targeted efforts to characterize geologic hydrogen has increased over the past several years — and these searches have uncovered new prospects, said Mary Haas, a venture partner at Breakthrough Energy Ventures. “We’ve seen a dramatic uptick in exploratory activity, now that there is a focused effort by a small community worldwide. At Breakthrough Energy, we are excited about the potential of this space, as well as our role in accelerating its progress,” she said.
    Haas noted that if geologic hydrogen could be produced at $1 per kilogram, this would be consistent with the DOE’s targeted “liftoff” point for the energy source. “If that happens,” she said, “it would be transformative.”

    Haas noted that only a small portion of identified hydrogen sites are currently under commercial exploration, and she cautioned that it’s not yet clear how large a role the resource might play in the transition to green energy. But, she said, “It’s worthwhile and important to find out.”

    Inventing a new energy subsector

    Geologic hydrogen is produced when water reacts with iron-rich minerals in rock. Researchers and industry are exploring how to stimulate this natural production by pumping water into promising deposits.

    In any new exploration area, teams must ask a series of questions to qualify the site, said Avon McIntyre, the executive director of HyTerra Ltd., an Australian company focused on the exploration and production of geologic hydrogen. These questions include: Is the geology favorable? Does local legislation allow for exploration and production? Does the site offer a clear path to value? And what are the carbon implications of producing hydrogen at the site?

    “We have to be humble,” McIntyre said. “We can’t be too prescriptive and think that we’ll leap straight into success. We have a unique opportunity to stop and think about what this industry will look like, how it will work, and how we can bring together various disciplines.” This was a theme that arose multiple times over the course of the symposium: the idea that many different stakeholders — including those from academia, industry, and government — will need to work together to explore the viability of geologic hydrogen and bring it to market at scale.

    In addition to the potential for hydrogen production to give rise to greenhouse gas emissions (in cases, for instance, where hydrogen deposits are contaminated with natural gas), researchers and industry must also consider landscape deformation and even potential seismic implications, said Bradford Hager, the Cecil and Ida Green Professor of Earth Sciences in the MIT Department of Earth, Atmospheric and Planetary Sciences.

    The surface impacts of hydrogen exploration and production will likely be similar to those caused by the hydraulic fracturing (“fracking”) process used in oil and natural gas extraction, Hager said.

    “There will be unavoidable surface deformation. In most places, you don’t want this if there’s infrastructure around,” Hager said. “Seismicity in the stimulated zone itself should not be a problem, because the areas are tested first. But we need to avoid stressing surrounding brittle rocks.”

    McIntyre noted that the commercial case for hydrogen remains a challenge to quantify, without even a “spot” price that companies can use to make economic calculations. Early on, he said, capturing helium at hydrogen exploration sites could be a path to early cash flow, but that may ultimately serve as a “distraction” as teams attempt to scale up to the primary goal of hydrogen production. He also noted that it is not even yet clear whether hard rock, soft rock, or underwater environments hold the most potential for geologic hydrogen, but all show promise.

    “If you stack all of these things together,” McIntyre said, “what we end up doing may look very different from what we think we’re going to do right now.”

    The path ahead

    While the long-term prospects for geologic hydrogen are shrouded in uncertainty, most speakers at the symposium struck a tone of optimism.
    Ellis noted that the DOE has dedicated $20 million in funding to a stimulated hydrogen program. Paris Smalls, the co-founder and CEO of Eden GeoPower Inc., said “we think there is a path” to producing geologic hydrogen below the $1 per kilogram threshold. And Iwnetim Abate, an assistant professor in the MIT Department of Materials Science and Engineering, said that geologic hydrogen opens up the idea of Earth as a “factory to produce clean fuels,” utilizing the subsurface heat and pressure instead of relying on burning fossil fuels or natural gas for the same purpose.

    “Earth has had 4.6 billion years to do these experiments,” said Oliver Jagoutz, a professor of geology in the MIT Department of Earth, Atmospheric and Planetary Sciences. “So there is probably a very good solution out there.”

    Alexis Templeton, a professor of geological sciences at the University of Colorado at Boulder, made the case for moving quickly. “Let’s go to pilot, faster than you might think,” she said. “Why? Because we do have some systems that we understand. We could test the engineering approaches and make sure that we are doing the right tool development, the right technology development, the right experiments in the lab. To do that, we desperately need data from the field.”

    “This is growing so fast,” Templeton added. “The momentum and the development of geologic hydrogen is really quite substantial. We need to start getting data at scale. And then, I think, more people will jump off the sidelines very quickly.”

  • Seizing solar’s bright future

    Consider the dizzying ascent of solar energy in the United States: In the past decade, solar capacity increased nearly 900 percent, with electricity production eight times greater in 2023 than in 2014. The jump from 2022 to 2023 alone was 51 percent, with a record 32 gigawatts (GW) of solar installations coming online. In the past four years, more solar has been added to the grid than any other form of generation. Installed solar now tops 179 GW, enough to power nearly 33 million homes. The U.S. Department of Energy (DOE) is so bullish on the sun that its decarbonization plans envision solar satisfying 45 percent of the nation’s electricity demands by 2050.

    But the continued rapid expansion of solar requires advances in technology, notably to improve the efficiency and durability of solar photovoltaic (PV) materials and manufacturing. That’s where Optigon, a three-year-old MIT spinout company, comes in.

    “Our goal is to build tools for research and industry that can accelerate the energy transition,” says Dane deQuilettes, the company’s co-founder and chief science officer. “The technology we have developed for solar will enable measurements and analysis of materials as they are being made both in lab and on the manufacturing line, dramatically speeding up the optimization of PV.”

    With roots in MIT’s vibrant solar research community, Optigon is poised for a 2024 rollout of technology it believes will drastically pick up the pace of solar power and other clean energy projects.

    Beyond silicon

    Silicon, the material mainstay of most PV, is limited by the laws of physics in the efficiencies it can achieve converting photons from the sun into electrical energy. Silicon-based solar cells can theoretically reach power conversion levels of just 30 percent, and real-world efficiency levels hover in the low 20s. But beyond the physical limitations of silicon, there is another issue at play for many researchers and the solar industry in the United States and elsewhere: China dominates the silicon PV market, from supply chains to manufacturing.

    Scientists are eagerly pursuing alternative materials, either for enhancing silicon’s solar conversion capacity or for replacing silicon altogether.

    In the past decade, a family of crystal-structured semiconductors known as perovskites has risen to the fore as a next-generation PV material candidate. Perovskite devices lend themselves to a novel manufacturing process using printing technology that could circumvent the supply chain juggernaut China has built for silicon. Perovskite solar cells can be stacked on each other or layered atop silicon PV, to achieve higher conversion efficiencies. Because perovskite technology is flexible and lightweight, modules can be used on roofs and other structures that cannot support heavier silicon PV, lowering costs and enabling a wider range of building-integrated solar devices.

    But these new materials require testing, both during R&D and then on assembly lines, where missing or defective optical, electrical, or dimensional properties in the nano-sized crystal structures can negatively impact the end product.
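
    To make the idea of in-line feedback concrete, here is a hypothetical sketch of the kind of check such metrology enables on a coating line. The measurement names, spec limits, and data are invented for illustration and are not Optigon’s software or API:

    ```python
    # Hypothetical sketch of an in-line quality check on a perovskite coating line.
    # Measurement names, spec limits, and the simulated data are invented for
    # illustration; this is not Optigon's software.
    from dataclasses import dataclass

    @dataclass
    class FilmMeasurement:
        position_m: float          # position along the roll of film
        thickness_nm: float        # e.g., from an optical measurement
        pl_intensity: float        # photoluminescence signal, arbitrary units

    SPEC = {"thickness_nm": (480.0, 520.0), "pl_intensity": (0.8, float("inf"))}

    def out_of_spec(m: FilmMeasurement) -> list[str]:
        """Return the names of any measured properties outside their spec window."""
        failures = []
        for name, (lo, hi) in SPEC.items():
            value = getattr(m, name)
            if not (lo <= value <= hi):
                failures.append(name)
        return failures

    # Simulated stream of measurements taken every millimeter of film.
    stream = [FilmMeasurement(0.001 * i, 500.0 - 0.5 * i, 1.0 - 0.01 * i) for i in range(60)]
    for m in stream:
        bad = out_of_spec(m)
        if bad:
            print(f"flag at {m.position_m:.3f} m: {', '.join(bad)} out of spec")
            break
    ```

    The point of millisecond-scale measurements is that a check like this can run continuously as material moves, flagging drift before an entire roll is wasted.
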
    “The actual measurement and data analysis processes have been really, really slow, because you have to use a bunch of separate tools that are all very manual,” says Optigon co-founder and chief executive officer Anthony Troupe ’21. “We wanted to come up with tools for automating detection of a material’s properties, for determining whether it could make a good or bad solar cell, and then for optimizing it.”

    “Our approach packed several non-contact, optical measurements using different types of light sources and detectors into a single system, which together provide a holistic, cross-sectional view of the material,” says Brandon Motes ’21, ME ’22, co-founder and chief technical officer.

    “This breakthrough in achieving millisecond timescales for data collection and analysis means we can take research-quality tools and actually put them on a full production system, getting extremely detailed information about products being built at massive, gigawatt scale in real-time,” says Troupe.

    This streamlined system takes measurements “in the snap of the fingers, unlike the traditional tools,” says Joseph Berry, director of the US Manufacturing of Advanced Perovskites Consortium and a senior research scientist at the National Renewable Energy Laboratory. “Optigon’s techniques are high precision and allow high throughput, which means they can be used in a lot of contexts where you want rapid feedback and the ability to develop materials very, very quickly.”

    According to Berry, Optigon’s technology may give the solar industry not just better materials, but the ability to pump out high-quality PV products at a brisker clip than is currently possible. “If Optigon is successful in deploying their technology, then we can more rapidly develop the materials that we need, manufacturing with the requisite precision again and again,” he says. “This could lead to the next generation of PV modules at a much, much lower cost.”

    Measuring makes the difference

    With Small Business Innovation Research funding from DOE to commercialize its products and a grant from the Massachusetts Clean Energy Center, Optigon has settled into a space at the climate technology incubator Greentown Labs in Somerville, Massachusetts. Here, the team is preparing for this spring’s launch of its first commercial product, whose genesis lies in MIT’s GridEdge Solar Research Program.

    Led by Vladimir Bulović, a professor of electrical engineering and the director of MIT.nano, the GridEdge program was established with funding from the Tata Trusts to develop lightweight, flexible, and inexpensive solar cells for distribution to rural communities around the globe. When deQuilettes joined the group in 2017 as a postdoc, he was tasked with directing the program and building the infrastructure to study and make perovskite solar modules.

    “We were trying to understand once we made the material whether or not it was good,” he recalls. “There were no good commercial metrology [the science of measurements] tools for materials beyond silicon, so we started to build our own.” Recognizing the group’s need for greater expertise on the problem, especially in the areas of electrical, software, and mechanical engineering, deQuilettes put a call out for undergraduate researchers to help build metrology tools for new solar materials.

    “Forty people inquired, but when I met Brandon and Anthony, something clicked; it was clear we had a complementary skill set,” says deQuilettes. “We started working together, with Anthony coming up with beautiful designs to integrate multiple measurements, and Brandon creating boards to control all of the hardware, including different types of lasers.
    We started filing multiple patents and that was when we saw it all coming together.”

    “We knew from the start that metrology could vastly improve not just materials, but production yields,” says Troupe. Adds deQuilettes, “Our goal was getting to the highest performance orders of magnitude faster than it would ordinarily take, so we developed tools that would not just be useful for research labs but for manufacturing lines to give live feedback on quality.”

    The device Optigon designed for industry is the size of a football, “with sensor packages crammed into a tiny form factor, taking measurements as material flows directly underneath,” says Motes. “We have also thought carefully about ways to make interaction with this tool as seamless and, dare I say, as enjoyable as possible, streaming data to both a dashboard an operator can watch and to a custom database.”

    Photovoltaics is just the start

    The company may have already found its market niche. “A research group paid us to use our in-house prototype because they have such a burning need to get these sorts of measurements,” says Troupe, and according to Motes, “Potential customers ask us if they can buy the system now.” deQuilettes says, “Our hope is that we become the de facto company for doing any sort of characterization metrology in the United States and beyond.”

    Challenges lie ahead for Optigon: product launches, full-scale manufacturing, technical assistance, and sales. Greentown Labs offers support, as does MIT’s own rich community of solar researchers and entrepreneurs. But the founders are already thinking about next phases.

    “We are not limiting ourselves to the photovoltaics area,” says deQuilettes. “We’re planning on working in other clean energy materials such as batteries and fuel cells.”

    That’s because the team wants to make the maximum impact on the climate challenge. “We’ve thought a lot about the potential our tools will have on reducing carbon emissions, and we’ve done a really in-depth analysis looking at how our system can increase production yields of solar panels and other energy technologies, reducing materials and energy wasted in conventional optimization,” deQuilettes says. “If we look across all these sectors, we can expect to offset about 1,000 million metric tons of CO2 [carbon dioxide] per year in the not-too-distant future.”

    The team has written scale into its business plan. “We want to be the key enabler for bringing these new energy technologies to market,” says Motes. “We envision being deployed on every manufacturing line making these types of materials. It’s our goal to walk around and know that if we see a solar panel deployed, there’s a pretty high likelihood that it will be one we measured at some point.”