More stories

  • Explained: Why perovskites could take solar cells to new heights

    Perovskites hold promise for creating solar panels that could be easily deposited onto most surfaces, including flexible and textured ones. These materials would also be lightweight, cheap to produce, and as efficient as today’s leading photovoltaic materials, which are mainly silicon. They’re the subject of increasing research and investment, but companies looking to harness their potential do have to address some remaining hurdles before perovskite-based solar cells can be commercially competitive.

    Unlike silicon or cadmium telluride, other leading contenders in the photovoltaic realm, the term perovskite refers not to a single specific material but to a whole family of compounds. The perovskite family of solar materials is named for its structural similarity to a mineral called perovskite, which was discovered in 1839 and named after Russian mineralogist L.A. Perovski.

    The original mineral perovskite, which is calcium titanium oxide (CaTiO3), has a distinctive crystal configuration. It has a three-part structure, whose components have come to be labeled A, B and X, in which lattices of the different components are interlaced. The family of perovskites consists of the many possible combinations of elements or molecules that can occupy each of the three components and form a structure similar to that of the original perovskite itself. (Some researchers even bend the rules a little by naming other crystal structures with similar elements “perovskites,” although this is frowned upon by crystallographers.)

    “You can mix and match atoms and molecules into the structure, with some limits. For instance, if you try to stuff a molecule that’s too big into the structure, you’ll distort it. Eventually you might cause the 3D crystal to separate into a 2D layered structure, or lose ordered structure entirely,” says Tonio Buonassisi, professor of mechanical engineering at MIT and director of the Photovoltaics Research Laboratory. “Perovskites are highly tunable, like a build-your-own-adventure type of crystal structure,” he says.

    That structure of interlaced lattices consists of ions or charged molecules, two of them (A and B) positively charged and the other one (X) negatively charged. The A and B ions are typically of quite different sizes, with the A being larger. 
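
    In shorthand, that arrangement is usually summarized with a single general formula. The expression below is a minimal notational sketch using the A, B, and X labels defined above, with the original mineral as the reference case:

    ```latex
    % General perovskite stoichiometry: one A-site cation, one B-site cation, three X-site anions
    \mathrm{ABX_3} \qquad \text{e.g. } \mathrm{CaTiO_3}\ \ (A = \mathrm{Ca^{2+}},\; B = \mathrm{Ti^{4+}},\; X = \mathrm{O^{2-}})
    % In the lead halide perovskites discussed below, A is typically an organic or alkali
    % cation, B is Pb^{2+}, and X is a halide ion such as iodide or bromide.
    ```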

    Within the overall category of perovskites, there are a number of types, including metal oxide perovskites, which have found applications in catalysis and in energy storage and conversion, such as in fuel cells and metal-air batteries. But a main focus of research activity for more than a decade has been on lead halide perovskites, according to Buonassisi.

    Within that category, there is still a legion of possibilities, and labs around the world are racing through the tedious work of trying to find the variations that show the best performance in efficiency, cost, and durability — which has so far been the most challenging of the three.

    Many teams have also focused on variations that eliminate the use of lead, to avoid its environmental impact. Buonassisi notes, however, that “consistently over time, the lead-based devices continue to improve in their performance, and none of the other compositions got close in terms of electronic performance.” Work continues on exploring alternatives, but for now none can compete with the lead halide versions.

    One of the great advantages perovskites offer is their high tolerance of defects in the structure, he says. Unlike silicon, which requires extremely high purity to function well in electronic devices, perovskites can function well even with numerous imperfections and impurities.

    Searching for promising new candidate compositions for perovskites is a bit like looking for a needle in a haystack, but recently researchers have come up with a machine-learning system that can greatly streamline this process. This new approach could lead to a much faster development of new alternatives, says Buonassisi, who was a co-author of that research.

    While perovskites continue to show great promise, and several companies are already gearing up to begin some commercial production, durability remains the biggest obstacle they face. While silicon solar panels retain up to 90 percent of their power output after 25 years, perovskites degrade much faster. Great progress has been made — initial samples lasted only a few hours, then weeks or months, but newer formulations have usable lifetimes of up to a few years, suitable for some applications where longevity is not essential.

    From a research perspective, Buonassisi says, one advantage of perovskites is that they are relatively easy to make in the lab — the chemical constituents assemble readily. But that’s also their downside: “The material goes together very easily at room temperature,” he says, “but it also comes apart very easily at room temperature. Easy come, easy go!”

    To deal with that issue, most researchers are focused on using various kinds of protective materials to encapsulate the perovskite, protecting it from exposure to air and moisture. But others are studying the exact mechanisms that lead to that degradation, in hopes of finding formulations or treatments that are more inherently robust. A key finding is that a process called autocatalysis is largely to blame for the breakdown.

    In autocatalysis, as soon as one part of the material starts to degrade, its reaction products act as catalysts to start degrading the neighboring parts of the structure, and a runaway reaction gets underway. A similar problem existed in the early research on some other electronic materials, such as organic light-emitting diodes (OLEDs), and was eventually solved by adding additional purification steps to the raw materials, so a similar solution may be found in the case of perovskites, Buonassisi suggests.

    Buonassisi and his co-researchers recently completed a study showing that once perovskites reach a usable lifetime of at least a decade, their much lower initial cost would be sufficient to make them economically viable as a substitute for silicon in large, utility-scale solar farms.
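
    The underlying tradeoff is easy to sketch in back-of-envelope terms. The snippet below is a deliberately simplified, hypothetical comparison (the dollar figures and lifetimes are placeholders, not values from the study) illustrating why a much cheaper module can tolerate a shorter lifetime and still compete on cost per year of service:

    ```python
    # Illustrative only: a toy cost-per-year comparison, not the figures from the MIT study.
    # All inputs are made-up placeholders; a real comparison would include financing,
    # degradation, balance-of-system costs, and replacement labor.

    def cost_per_year(upfront_cost_per_watt, lifetime_years):
        """Annualized module cost in $/W per year of service (naive straight-line version)."""
        return upfront_cost_per_watt / lifetime_years

    silicon = cost_per_year(upfront_cost_per_watt=0.30, lifetime_years=25)      # hypothetical
    perovskite = cost_per_year(upfront_cost_per_watt=0.10, lifetime_years=10)   # hypothetical

    print(f"silicon:    {silicon:.3f} $/W per year")
    print(f"perovskite: {perovskite:.3f} $/W per year")
    ```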

    Overall, progress in the development of perovskites has been impressive and encouraging, he says. With just a few years of work, it has already achieved efficiencies comparable to levels that cadmium telluride (CdTe), “which has been around for much longer, is still struggling to achieve,” he says. “The ease with which these higher performances are reached in this new material is almost stupefying.” Comparing the amount of research time spent to achieve a 1 percent improvement in efficiency, he says, the progress on perovskites has been somewhere between 100 and 1,000 times faster than that on CdTe. “That’s one of the reasons it’s so exciting,” he says.

  • MIT engineers design surfaces that make water boil more efficiently

    The boiling of water or other fluids is an energy-intensive step at the heart of a wide range of industrial processes, including most electricity generating plants, many chemical production systems, and even cooling systems for electronics.

    Improving the efficiency of systems that heat and evaporate water could significantly reduce their energy use. Now, researchers at MIT have found a way to do just that, with a specially tailored surface treatment for the materials used in these systems.

    The improved efficiency comes from a combination of three different kinds of surface modifications, at different size scales. The new findings are described in the journal Advanced Materials in a paper by recent MIT graduate Youngsup Song PhD ’21, Ford Professor of Engineering Evelyn Wang, and four others at MIT. The researchers note that this initial finding is still at a laboratory scale, and more work is needed to develop a practical, industrial-scale process.

    There are two key parameters that describe the boiling process: the heat transfer coefficient (HTC) and the critical heat flux (CHF). In materials design, there’s generally a tradeoff between the two, so anything that improves one of these parameters tends to make the other worse. But both are important for the efficiency of the system, and now, after years of work, the team has achieved a way of significantly improving both properties at the same time, through their combination of different textures added to a material’s surface.

    “Both parameters are important,” Song says, “but enhancing both parameters together is kind of tricky because they have intrinsic trade off.” The reason for that, he explains, is “because if we have lots of bubbles on the boiling surface, that means boiling is very efficient, but if we have too many bubbles on the surface, they can coalesce together, which can form a vapor film over the boiling surface.” That film introduces resistance to the heat transfer from the hot surface to the water. “If we have vapor in between the surface and water, that prevents the heat transfer efficiency and lowers the CHF value,” he says.
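
    In standard boiling notation, the tradeoff Song describes can be written compactly. The relations below are the generic textbook definitions, included only to fix terms, not results specific to this paper:

    ```latex
    % Boiling heat flux q'' driven by the wall superheat, with h the heat transfer coefficient (HTC)
    q'' = h \,\bigl(T_w - T_{\mathrm{sat}}\bigr)
    % The critical heat flux (CHF) is the maximum q'' the surface can sustain before coalescing
    % bubbles blanket it with an insulating vapor film and heat transfer abruptly degrades.
    q''_{\mathrm{CHF}} = \max q''
    ```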

    Song, who is now a postdoc at Lawrence Berkeley National Laboratory, carried out much of the research as part of his doctoral thesis work at MIT. While the various components of the new surface treatment he developed had been previously studied, the researchers say this work is the first to show that these methods could be combined to overcome the tradeoff between the two competing parameters.

    Adding a series of microscale cavities, or dents, to a surface is a way of controlling the way bubbles form on that surface, keeping them effectively pinned to the locations of the dents and preventing them from spreading out into a heat-resisting film. In this work, the researchers created an array of 10-micrometer-wide dents separated by about 2 millimeters to prevent film formation. But that separation also reduces the concentration of bubbles at the surface, which can reduce the boiling efficiency. To compensate for that, the team introduced a much smaller-scale surface treatment, creating tiny bumps and ridges at the nanometer scale, which increases the surface area and promotes the rate of evaporation under the bubbles.

    In these experiments, the cavities were made in the centers of a series of pillars on the material’s surface. These pillars, combined with nanostructures, promote wicking of liquid from the base to their tops, and this enhances the boiling process by providing more surface area exposed to the water. In combination, the three “tiers” of the surface texture — the cavity separation, the posts, and the nanoscale texturing — provide a greatly enhanced efficiency for the boiling process, Song says.

    “Those micro cavities define the position where bubbles come up,” he says. “But by separating those cavities by 2 millimeters, we separate the bubbles and minimize the coalescence of bubbles.” At the same time, the nanostructures promote evaporation under the bubbles, and the capillary action induced by the pillars supplies liquid to the bubble base. That maintains a layer of liquid water between the boiling surface and the bubbles of vapor, which enhances the maximum heat flux.

    Although their work has confirmed that the combination of these kinds of surface treatments can work and achieve the desired effects, this work was done under small-scale laboratory conditions that could not easily be scaled up to practical devices, Wang says. “These kinds of structures we’re making are not meant to be scaled in its current form,” she says, but rather were used to prove that such a system can work. One next step will be to find alternative ways of creating these kinds of surface textures so these methods could more easily be scaled up to practical dimensions.

    “Showing that we can control the surface in this way to get enhancement is a first step,” she says. “Then the next step is to think about more scalable approaches.” For example, though the pillars on the surface in these experiments were created using clean-room methods commonly used to produce semiconductor chips, there are other, less demanding ways of creating such structures, such as electrodeposition. There are also a number of different ways to produce the surface nanostructure textures, some of which may be more easily scalable.

    There may be some significant small-scale applications that could use this process in its present form, such as the thermal management of electronic devices, an area that is becoming more critical as semiconductor devices get smaller and managing their heat output becomes ever more important. “There’s definitely a space there where this is really important,” Wang says.

    Even those kinds of applications will take some time to develop because typically thermal management systems for electronics use liquids other than water, known as dielectric liquids. These liquids have different surface tension and other properties than water, so the dimensions of the surface features would have to be adjusted accordingly. Work on these differences is one of the next steps for the ongoing research, Wang says.

    This same multiscale structuring technique could also be applied to different liquids, Song says, by adjusting the dimensions to account for the different properties of the liquids. “Those kinds of details can be changed, and that can be our next step,” he says.

    The team also included Carlos Diaz-Martin, Lenan Zhang, Hyeongyun Cha, and Yajing Zhao, all at MIT. The work was supported by the Advanced Research Projects Agency-Energy (ARPA-E), the Air Force Office of Scientific Research, and the Singapore-MIT Alliance for Research and Technology, and made use of the MIT.nano facilities.

  • Getting the carbon out of India’s heavy industries

    The world’s third largest carbon emitter after China and the United States, India ranks seventh in a major climate risk index. Unless India, along with the nearly 200 other signatory nations of the Paris Agreement, takes aggressive action to keep global warming well below 2 degrees Celsius relative to preindustrial levels, physical and financial losses from floods, droughts, and cyclones could become more severe than they are today. So, too, could health impacts associated with the hazardous air pollution levels now affecting more than 90 percent of its population.  

    To address both climate and air pollution risks and meet its population’s escalating demand for energy, India will need to dramatically decarbonize its energy system in the coming decades. To that end, its initial Paris Agreement climate policy pledge calls for a reduction in carbon dioxide intensity of GDP by 33-35 percent by 2030 from 2005 levels, and an increase in non-fossil-fuel-based power to about 40 percent of cumulative installed capacity in 2030. At the COP26 international climate change conference, India announced more aggressive targets, including the goal of achieving net-zero emissions by 2070.

    Meeting its climate targets will require emissions reductions in every economic sector, including those where emissions are particularly difficult to abate. In such sectors, which involve energy-intensive industrial processes (production of iron and steel; nonferrous metals such as copper, aluminum, and zinc; cement; and chemicals), decarbonization options are limited and more expensive than in other sectors. Whereas replacing coal and natural gas with solar and wind could lower carbon dioxide emissions in electric power generation and transportation, no easy substitutes can be deployed in many heavy industrial processes that release CO2 into the air as a byproduct.

    However, other methods could be used to lower the emissions associated with these processes, which draw upon roughly 50 percent of India’s natural gas, 25 percent of its coal, and 20 percent of its oil. Evaluating the potential effectiveness of such methods in the next 30 years, a new study in the journal Energy Economics led by researchers at the MIT Joint Program on the Science and Policy of Global Change is the first to explicitly explore emissions-reduction pathways for India’s hard-to-abate sectors.

    Using an enhanced version of the MIT Economic Projection and Policy Analysis (EPPA) model, the study assesses existing emissions levels in these sectors and projects how much they can be reduced by 2030 and 2050 under different policy scenarios. Aimed at decarbonizing industrial processes, the scenarios include the use of subsidies to increase electricity use, incentives to replace coal with natural gas, measures to improve industrial resource efficiency, policies to put a price on carbon, carbon capture and storage (CCS) technology, and hydrogen in steel production.

    The researchers find that India’s 2030 Paris Agreement pledge may still drive up fossil fuel use and associated greenhouse gas emissions, with projected carbon dioxide emissions from hard-to-abate sectors rising by about 2.6 times from 2020 to 2050. But scenarios that also promote electrification, natural gas support, and resource efficiency in hard-to-abate sectors can lower their CO2 emissions by 15-20 percent.

    While appearing to move the needle in the right direction, those reductions are ultimately canceled out by increased demand for the products that emerge from these sectors. So what’s the best path forward?

    The researchers conclude that only the incentive of carbon pricing or the advance of disruptive technology can move hard-to-abate sector emissions below their current levels. To achieve significant emissions reductions, they maintain, the price of carbon must be high enough to make CCS economically viable. In that case, reductions of 80 percent below current levels could be achieved by 2050.
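
    A toy back-of-envelope calculation helps keep these percentages straight. The 2020 baseline below is an arbitrary index value; only the 2.6x growth factor, the 15-20 percent reduction, and the 80 percent figure come from the study findings summarized above:

    ```python
    # Toy illustration of the scenario arithmetic; the baseline is a made-up index value.

    baseline_2020 = 100.0                            # hypothetical index of hard-to-abate CO2 in 2020

    reference_2050 = baseline_2020 * 2.6             # pledge-only pathway: emissions keep rising
    efficiency_2050 = reference_2050 * (1 - 0.175)   # electrification, gas, and efficiency: ~15-20% below reference
    ccs_pricing_2050 = baseline_2020 * (1 - 0.80)    # carbon price high enough to make CCS viable

    print(f"2050, pledge only:              {reference_2050:.0f}")
    print(f"2050, plus efficiency measures: {efficiency_2050:.0f}  (still above the 2020 level)")
    print(f"2050, carbon pricing with CCS:  {ccs_pricing_2050:.0f}  (80% below current levels)")
    ```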

    “Absent major support from the government, India will be unable to reduce carbon emissions in its hard-to-abate sectors in alignment with its climate targets,” says MIT Joint Program deputy director Sergey Paltsev, the study’s lead author. “A comprehensive government policy could provide robust incentives for the private sector in India and generate favorable conditions for foreign investments and technology advances. We encourage decision-makers to use our findings to design efficient pathways to reduce emissions in those sectors, and thereby help lower India’s climate and air pollution-related health risks.”

  • Better living through multicellular life cycles

    Cooperation is a core part of life for many organisms, ranging from microbes to complex multicellular life. It emerges when individuals share resources or partition a task in such a way that each derives a greater benefit when acting together than they could on their own. For example, birds and fish flock to evade predators, slime mold swarms to hunt for food and reproduce, and bacteria form biofilms to resist stress.

    Individuals must live in the same “neighborhood” to cooperate. For bacteria, this neighborhood can be as small as tens of microns. But in environments like the ocean, it’s rare for cells with the same genetic makeup to co-occur in the same neighborhood on their own. And this necessity poses a puzzle to scientists: In environments where survival hinges on cooperation, how do bacteria build their neighborhood?

    To study this problem, MIT professor Otto X. Cordero and colleagues took inspiration from nature: They developed a model system around a common coastal seawater bacterium that requires cooperation to eat sugars from brown algae. In the system, single cells were initially suspended in seawater too far away from other cells to cooperate. To share resources and grow, the cells had to find a mechanism of creating a neighborhood. “Surprisingly, each cell was able to divide and create its own neighborhood of clones by forming tightly packed clusters,” says Cordero, associate professor in the Department of Civil and Environmental Engineering.

    A new paper, published today in Current Biology, demonstrates how an algae-eating bacterium solves the engineering challenge of creating local cell density starting from a single-celled state.

    “A key discovery was the importance of phenotypic heterogeneity in supporting this surprising mechanism of clonal cooperation,” says Cordero, lead author of the new paper.

    Using a combination of microscopy, transcriptomics, and labeling experiments to profile a cellular metabolic state, the researchers found that cells phenotypically differentiate into a sticky “shell” population and a motile, carbon-storing “core.” The researchers propose that shell cells create the cellular neighborhood needed to sustain cooperation while core cells accumulate stores of carbon that support further clonal reproduction when the multicellular structure ruptures.

    This work addresses a key piece in the bigger challenge of understanding the bacterial processes that shape our earth, such as the cycling of carbon from dead organic matter back into food webs and the atmosphere. “Bacteria are fundamentally single cells, but often what they accomplish in nature is done through cooperation. We have much to uncover about what bacteria can accomplish together and how that differs from their capacity as individuals,” adds Cordero.

    Co-authors include Julia Schwartzman and Ali Ebrahimi, former postdocs in the Cordero Lab. Other co-authors are Gray Chadwick, a former graduate student at Caltech; Yuya Sato, a senior researcher at Japan’s National Institute of Advanced Industrial Science and Technology; Benjamin Roller, a current postdoc at the University of Vienna; and Victoria Orphan of Caltech.

    Funding was provided by the Simons Foundation. Individual authors received support from the Swiss National Science Foundation, Japan Society for the Promotion of Science, the U.S. National Science Foundation, the Kavli Institute of Theoretical Physics, and the National Institutes of Health.

  • Kerry Emanuel: A climate scientist and meteorologist in the eye of the storm

    Kerry Emanuel once joked that whenever he retired, he would start a “hurricane safari” so other people could experience what it’s like to fly into the eye of a hurricane.

    “All of a sudden, the turbulence stops, the sun comes out, bright sunshine, and it’s amazingly calm. And you’re in this grand stadium [of clouds miles high],” he says. “It’s quite an experience.”

    While the hurricane safari is unlikely to come to fruition — “You can’t just conjure up a hurricane,” he explains — Emanuel, a world-leading expert on links between hurricanes and climate change, is retiring from teaching in the Department of Earth, Atmospheric and Planetary Sciences (EAPS) at MIT after a more than 40-year career.

    Best known for his foundational contributions to the science of tropical cyclones, climate, and links between them, Emanuel has also been a prominent voice in public debates on climate change, and what we should do about it.

    “Kerry has had an enormous effect on the world through the students and junior scientists he has trained,” says William Boos PhD ’08, an atmospheric scientist at the University of California at Berkeley. “He’s a brilliant enough scientist and theoretician that he didn’t need any of us to accomplish what he has, but he genuinely cares about educating new generations of scientists and helping to launch their careers.”

    In recognition of Emanuel’s teaching career and contributions to science, a symposium was held in his honor at MIT on June 21 and 22, organized by several of his former students and collaborators, including Boos. Research presented at the symposium focused on the many fields influenced by Emanuel’s more than 200 published research papers — on everything from forecasting the risks posed by tropical cyclones to understanding how rainfall is produced by continent-sized patterns of atmospheric circulation.

    Emanuel’s career observing perturbations of Earth’s atmosphere started earlier than he can remember. “According to my older brother, from the age of 2, I would crawl to the window whenever there was a thunderstorm,” he says. At first, those were the rolling thunderheads of the Midwest where he grew up, then it was the edges of hurricanes during a few teenage years in Florida. Eventually, he would find himself watching from the very eye of the storm, both physically and mathematically.

    Emanuel attended MIT both as an undergraduate studying Earth and planetary sciences, and for his PhD in meteorology, writing a dissertation on thunderstorms that form ahead of cold fronts. Within the department, he worked with some of the central figures of modern meteorology such as Jule Charney, Fred Sanders, and Edward Lorenz — the founder of chaos theory.

    After receiving his PhD in 1978, Emanuel joined the faculty of the University of California at Los Angeles. During this period, he also took a semester sabbatical to film the wind speeds of tornadoes in Texas and Oklahoma. After three years, he returned to MIT and joined the Department of Meteorology in 1981. Two years later, the department merged with Earth and Planetary Sciences to form EAPS as it is known today, and where Emanuel has remained ever since.

    At MIT, he shifted scales. The thunderstorms and tornadoes that had been the focus of Emanuel’s research up to then were local atmospheric phenomena, or “mesoscale” in the language of meteorologists. The larger “synoptic scale” storms that are hurricanes blew into Emanuel’s research when, as a young faculty member, he was asked to teach a class in tropical meteorology; in prepping for the class, Emanuel found his notes on hurricanes from graduate school no longer made sense.

    “I realized I didn’t understand them because they couldn’t have been correct,” he says. “And so I set out to try to find a much better theoretical formulation for hurricanes.”

    He soon made two important contributions. In 1986, his paper “An Air-Sea Interaction Theory for Tropical Cyclones. Part 1: Steady-State Maintenance” developed a new theory for upper limits of hurricane intensity given atmospheric conditions. This work in turn led to even larger-scale questions to address. “That upper bound had to be dependent on climate, and it was likely to go up if we were to warm the climate,” Emanuel says — a phenomenon he explored in another paper, “The Dependence of Hurricane Intensity on Climate,” which showed how warming sea surface temperatures and changing atmospheric conditions from a warming climate would make hurricanes more destructive.

    “In my view, this is among the most remarkable achievements in theoretical geophysics,” says Adam Sobel PhD ’98, an atmospheric scientist at Columbia University who got to know Emanuel after he graduated and became interested in tropical meteorology. “From first principles, using only pencil-and-paper analysis and physical reasoning, he derives a quantitative bound on hurricane intensity that has held up well over decades of comparison to observations” and underpins current methods of predicting hurricane intensity and how it changes with climate.
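
    The bound Sobel refers to is usually quoted in a compact closed form. The expression below is a commonly cited textbook statement of the potential intensity result, shown as an illustrative sketch rather than a reproduction of the 1986 paper’s derivation:

    ```latex
    % Potential (maximum) hurricane wind speed V_p from air-sea interaction theory
    V_p^{2} \;=\; \frac{C_k}{C_D}\,\frac{T_s - T_o}{T_o}\,\bigl(k_0^{*} - k\bigr)
    % C_k, C_D : surface exchange coefficients for enthalpy and momentum
    % T_s, T_o : sea surface temperature and outflow temperature
    % k_0^*, k : saturation enthalpy at the sea surface and enthalpy of boundary-layer air
    ```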

    This and diverse subsequent work led to numerous honors, including membership to the American Philosophical Society, the National Academy of Sciences, and the American Academy of Arts and Sciences.

    Emanuel’s research was never confined to academic circles, however; when politicians and industry leaders voiced loud opposition to the idea that human-caused climate change posed a threat, he spoke up.

    “I felt kind of a duty to try to counter that,” says Emanuel. “I thought it was an interesting challenge to see if you could go out and convince what some people call climate deniers, skeptics, that this was a serious risk and we had to treat it as such.”

    In addition to many public lectures and media appearances discussing climate change, Emanuel penned a book for general audiences titled “What We Know About Climate Change,” as well as a widely read primer on climate change and risk assessment designed to influence business leaders.

    “Kerry has an unmatched physical understanding of tropical climate phenomena,” says Emanuel’s colleague, Susan Solomon, the Lee and Geraldine Martin Professor of Environmental Studies at EAPS. “But he’s also a great communicator and has generously given his time to public outreach. His book ‘What We Know About Climate Change’ is a beautiful piece of work that is readily understandable and has captivated many a non-expert reader.”

    Along with a number of other prominent climate scientists, Emanuel also began advocating for expanding nuclear power as the most rapid path to decarbonizing the world’s energy systems.

    “I think the impediment to nuclear is largely irrational in the United States,” he says. “So, I’ve been trying to fight that just like I’ve been trying to fight climate denial.”

    One lesson Emanuel has taken from his public work on climate change is that skeptical audiences often respond better to issues framed in positive terms than to doom and gloom; he’s found emphasizing the potential benefits rather than the sacrifices involved in the energy transition can engage otherwise wary audiences.

    “It’s really not opposition to science, per se,” he says. “It’s fear of the societal changes they think are required to do something about it.”

    He has also worked to raise awareness about how insurance companies significantly underestimate climate risks in their policies, in particular by basing hurricane risk on unreliable historical data. One recent practical result has been a project by the First Street Foundation to assess the true flood risk of every property in the United States using hurricane models Emanuel developed.

    “I think it’s transformative,” Emanuel says of the project with First Street. “That may prove to be the most substantive research I’ve done.”

    Though Emanuel is retiring from teaching, he has no plans to stop working. “When I say ‘retire’ it’s in quotes,” he says. In 2011, Emanuel and Professor of Geophysics Daniel Rothman founded the Lorenz Center, a climate research center at MIT in honor of Emanuel’s mentor and friend Edward Lorenz. Emanuel will continue to participate in work at the center, which aims to counter what Emanuel describes as a trend away from “curiosity-driven” work in climate science.

    “Even if there were no such thing as global warming, [climate science] would still be a really, really exciting field,” says Emanuel. “There’s so much to understand about climate, about the climates of the past, about the climates of other planets.”

    In addition to work with the Lorenz Center, he’s become interested once again in tornadoes and severe local storms, and understanding whether climate also controls such local phenomena. He’s also involved in two of MIT’s Climate Grand Challenges projects focused on translating climate hazards to explicit financial and health risks — what will bring the dangers of climate change home to people, he says, is for the public to understand more concrete risks, like agricultural failure, water shortages, electricity shortages, and severe weather events. Capturing that will drive the next few years of his work.

    “I’m going to be stepping up research in some respects,” he says, now living full-time at his home in Maine.

    Of course, “retiring” does mean a bit more free time for new pursuits, like learning a language or an instrument, and “rediscovering the art of sailing,” says Emanuel. He’s looking forward to those days on the water, whatever storms are to come.

  • Making hydrogen power a reality

    For decades, government and industry have looked to hydrogen as a potentially game-changing tool in the quest for clean energy. As far back as the early days of the Clinton administration, energy sector observers and public policy experts have extolled the virtues of hydrogen — to the point that some people have joked that hydrogen is the energy of the future, “and always will be.”

    Even as wind and solar power have become commonplace in recent years, hydrogen has been held back by high costs and other challenges. But the fuel may finally be poised to have its moment. At the MIT Energy Initiative Spring Symposium — entitled “Hydrogen’s role in a decarbonized energy system” — experts discussed hydrogen production routes, hydrogen consumption markets, the path to a robust hydrogen infrastructure, and policy changes needed to achieve a “hydrogen future.”

    During one panel, “Options for producing low-carbon hydrogen at scale,” four experts laid out existing and planned efforts to leverage hydrogen for decarbonization. 

    “The race is on”

    Huyen N. Dinh, a senior scientist and group manager at the National Renewable Energy Laboratory (NREL), is the director of HydroGEN, a consortium of several U.S. Department of Energy (DOE) national laboratories that accelerates research and development of innovative and advanced water splitting materials and technologies for clean, sustainable, and low-cost hydrogen production.

    For the past 14 years, Dinh has worked on fuel cells and hydrogen production for NREL. “We think that the 2020s is the decade of hydrogen,” she said. Dinh believes that the energy carrier is poised to come into its own over the next few years, pointing to several domestic and international activities surrounding the fuel and citing a Hydrogen Council report that projected the future impacts of hydrogen — including 30 million jobs and $2.5 trillion in global revenue by 2050.

    “Now is the time for hydrogen, and the global race is on,” she said.

    Dinh also explained the parameters of the Hydrogen Shot — the first of the DOE’s “Energy Earthshots” aimed at accelerating breakthroughs for affordable and reliable clean energy solutions. Hydrogen fuel currently costs around $5 per kilogram to produce, and the Hydrogen Shot’s stated goal is to bring that down by 80 percent to $1 per kilogram within a decade.

    The Hydrogen Shot will be facilitated by $9.5 billion in funding for at least four clean hydrogen hubs located in different parts of the United States, as well as extensive research and development, manufacturing, and recycling from last year’s bipartisan infrastructure law. Still, Dinh noted that it took more than 40 years for solar and wind power to become cost competitive, and now industry, government, national lab, and academic leaders are hoping to achieve similar reductions in hydrogen fuel costs over a much shorter time frame. In the near term, she said, stakeholders will need to improve the efficiency, durability, and affordability of hydrogen production through electrolysis (using electricity to split water) using today’s renewable and nuclear power sources. Over the long term, the focus may shift to splitting water more directly through heat or solar energy, she said.
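
    A rough sense of why the $1-per-kilogram target is demanding comes from the energy requirement of electrolysis itself. The figures below are approximate round numbers chosen for illustration, not values presented at the symposium:

    ```python
    # Back-of-envelope electrolysis economics with assumed round numbers.
    # Real electrolyzer efficiencies, capital costs, and power prices vary widely.

    LHV_H2_KWH_PER_KG = 33.3        # lower heating value of hydrogen, roughly 33.3 kWh per kg
    system_efficiency = 0.65        # assumed electrolyzer system efficiency (LHV basis)

    electricity_per_kg = LHV_H2_KWH_PER_KG / system_efficiency   # ~51 kWh of electricity per kg H2

    for price_per_kwh in (0.05, 0.03, 0.02):                      # assumed electricity prices, $/kWh
        cost = electricity_per_kg * price_per_kwh
        print(f"at ${price_per_kwh:.2f}/kWh: ~${cost:.2f} per kg of hydrogen, for electricity alone")
    ```

    Even at the cheapest of these assumed power prices, the electricity bill alone approaches the $1 target, which is why the effort pairs inexpensive renewable and nuclear power with improvements in electrolyzer efficiency, durability, and cost.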

    “The time frame is short, the competition is intense, and a coordinated effort is critical for domestic competitiveness,” Dinh said.

    Hydrogen across continents

    Wambui Mutoru, principal engineer for international commercial development, exploration, and production international at the Norwegian global energy company Equinor, said that hydrogen is an important component in the company’s ambitions to be carbon-neutral by 2050. The company, in collaboration with partners, has several hydrogen projects in the works, and Mutoru laid out the company’s Hydrogen to Humber project in Northern England. Currently, the Humber region emits more carbon dioxide than any other industrial cluster in the United Kingdom — 50 percent more, in fact, than the next-largest carbon emitter.   

    “The ambition here is for us to deploy the world’s first at-scale hydrogen value chain to decarbonize the Humber industrial cluster,” Mutoru said.

    The project consists of three components: a clean hydrogen production facility, an onshore hydrogen and carbon dioxide transmission network, and offshore carbon dioxide transportation and storage operations. Mutoru highlighted the importance of carbon capture and storage in hydrogen production. Equinor, she said, has captured and sequestered carbon offshore for more than 25 years, storing more than 25 million tons of carbon dioxide during that time.

    Mutoru also touched on Equinor’s efforts to build a decarbonized energy hub in the Appalachian region of the United States, covering territory in Ohio, West Virginia, and Pennsylvania. By 2040, she said, the company’s ambition is to produce about 1.5 million tons of clean hydrogen per year in the region — roughly equivalent to 6.8 gigawatts of electricity — while also storing 30 million tons of carbon dioxide.

    Mutoru acknowledged that the biggest challenge facing potential hydrogen producers is the current lack of viable business models. “Resolving that challenge requires cross-industry collaboration, and supportive policy frameworks so that the market for hydrogen can be built and sustained over the long term,” she said.

    Confronting barriers

    Gretchen Baier, executive external strategy and communications leader for Dow, noted that the company already produces hydrogen in multiple ways. For one, Dow operates the world’s largest ethane cracker, in Texas. An ethane cracker heats ethane to break apart molecular bonds to form ethylene, with hydrogen one of the byproducts of the process. Also, Baier showed a slide of the 1891 patent for the electrolysis of brine water, which also produces hydrogen. The company still engages in this practice, but Dow does not have an effective way of utilizing the resulting hydrogen for their own fuel.

    “Just take a moment to think about that,” Baier said. “We’ve been talking about hydrogen production and the cost of it, and this is basically free hydrogen. And it’s still too much of a barrier to somewhat recycle that and use it for ourselves. The environment is clearly changing, and we do have plans for that, but I think that kind of sets some of the challenges that face industry here.”

    However, Baier said, hydrogen is expected to play a significant role in Dow’s future as the company attempts to decarbonize by 2050. The company, she said, plans to optimize hydrogen allocation and production, retrofit turbines for hydrogen fueling, and purchase clean hydrogen. By 2040, Dow expects more than 60 percent of its sites to be hydrogen-ready.

    Baier noted that hydrogen fuel is not a “panacea,” but rather one among many potential contributors as industry attempts to reduce or eliminate carbon emissions in the coming decades. “Hydrogen has an important role, but it’s not the only answer,” she said.

    “This is real”

    Colleen Wright is vice president of corporate strategy for Constellation, which recently separated from Exelon Corporation. (Exelon now owns the former company’s regulated utilities, such as Commonwealth Edison and Baltimore Gas and Electric, while Constellation owns the competitive generation and supply portions of the business.) Wright stressed the advantages of nuclear power in hydrogen production, which she said include superior economics, low barriers to implementation, and scalability.

    “A quarter of emissions in the world are currently from hard-to-decarbonize sectors — the industrial sector, steel making, heavy-duty transportation, aviation,” she said. “These are really challenging decarbonization sectors, and as we continue to expand and electrify, we’re going to need more supply. We’re also going to need to produce clean hydrogen using emissions-free power.”

    “The scale of nuclear power plants is uniquely suited to be able to scale hydrogen production,” Wright added. She mentioned Constellation’s Nine Mile Point site in the State of New York, which received a DOE grant for a pilot program that will see a proton exchange membrane electrolyzer installed at the site.

    “We’re very excited to see hydrogen go from a [research and development] conversation to a commercial conversation,” she said. “We’ve been calling it a little bit of a ‘middle-school dance.’ Everybody is standing around the circle, waiting to see who’s willing to put something at stake. But this is real. We’re not dancing around the edges. There are a lot of people who are big players, who are willing to put skin in the game today.”

  • Evan Leppink: Seeking a way to better stabilize the fusion environment

    “Fusion energy was always one of those kind-of sci-fi technologies that you read about,” says nuclear science and engineering PhD candidate Evan Leppink. He’s recalling the time before fusion became a part of his daily hands-on experience at MIT’s Plasma Science and Fusion Center, where he is studying a unique way to drive current in a tokamak plasma using radiofrequency (RF) waves. 

    Now, an award from the U.S. Department of Energy’s (DOE) Office of Science Graduate Student Research (SCGSR) Program will support his work with a 12-month residency at the DIII-D National Fusion Facility in San Diego, California.

    Like all tokamaks, DIII-D generates hot plasma inside a doughnut-shaped vacuum chamber wrapped with magnets. Because plasma will follow magnetic field lines, tokamaks are able to contain the turbulent plasma fuel as it gets hotter and denser, keeping it away from the edges of the chamber where it could damage the wall materials. A key part of the tokamak concept is that part of the magnetic field is created by electrical currents in the plasma itself, which helps to confine and stabilize the configuration. Researchers often launch high-power RF waves into tokamaks to drive that current.

    Leppink will be contributing to research, led by his MIT advisor Steve Wukitch, that pursues launching RF waves in DIII-D using a unique compact antenna placed on the tokamak center column. Typically, antennas are placed inside the tokamak on the outer edge of the doughnut, farthest from the central hole (or column), primarily because access and installation are easier there. This is known as the “low-field side,” because the magnetic field is lower there than at the central column, the “high-field side.” This MIT-led experiment, for the first time, will mount an antenna on the high-field side. There is some theoretical evidence that placing the wave launcher there could improve power penetration and current drive efficiency. And because the plasma environment is less harsh on this side, the antenna will survive longer, a factor important for any future power-producing tokamak.

    Leppink’s work on DIII-D focuses specifically on measuring the density of plasmas generated in the tokamak, for which he developed a “reflectometer.” This small antenna launches microwaves into the plasma, which reflect back to the antenna to be measured. The time that it takes for these microwaves to traverse the plasma provides information about the plasma density, allowing researchers to build up detailed density profiles, data critical for injecting RF power into the plasma.
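
    The measurement relies on a density-dependent cutoff for microwaves in a plasma: a wave of a given frequency is reflected at the layer where the local plasma frequency matches it. The sketch below shows the standard ordinary-mode (O-mode) cutoff relation used in reflectometry generally; it is an illustrative calculation, not a description of the specific DIII-D instrument:

    ```python
    import math

    # O-mode reflectometry: a wave of frequency f reflects at the "cutoff" layer where the
    # electron density reaches n_c(f). Sweeping f in time maps out the density profile.

    EPS0 = 8.854e-12    # vacuum permittivity, F/m
    M_E = 9.109e-31     # electron mass, kg
    Q_E = 1.602e-19     # elementary charge, C

    def cutoff_density(f_hz):
        """Electron density (m^-3) at which a wave of frequency f_hz is reflected (O-mode)."""
        return 4 * math.pi**2 * EPS0 * M_E * f_hz**2 / Q_E**2

    for f_ghz in (30, 60, 90):   # representative launch frequencies for such diagnostics
        print(f"{f_ghz} GHz wave reflects near n_e ~ {cutoff_density(f_ghz * 1e9):.2e} m^-3")
    ```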

    “Research shows that when we try to inject these waves into the plasma to drive the current, they can lose power as they travel through the edge region of the tokamak, and can even have problems entering the core of the plasma, where we would most like to direct them,” says Leppink. “My diagnostic will measure that edge region on the high-field side near the launcher in great detail, which provides us a way to directly verify calculations or compare actual results with simulation results.”

    Although focused on his own research, Leppink has excelled at priming other students for success in their studies and research. In 2021 he received the NSE Outstanding Teaching Assistant and Mentorship Award.

    “The highlights of TA’ing for me were the times when I could watch students go from struggling with a difficult topic to fully understanding it, often with just a nudge in the right direction and then allowing them to follow their own intuition the rest of the way,” he says.

    The right direction for Leppink points toward San Diego and RF current drive experiments on DIII-D. He is grateful for the support from the SCGSR, a program created to prepare graduate students like him for science, technology, engineering, or mathematics careers important to the DOE Office of Science mission. It provides graduate thesis research opportunities through extended residency at DOE national laboratories. He has already made several trips to DIII-D, in part to install his reflectometer, and has been impressed with the size of the operation.

    “It takes a little while to kind of compartmentalize everything and say, ‘OK, well, here’s my part of the machine. This is what I’m doing.’ It can definitely be overwhelming at times. But I’m blessed to be able to work on what has been the workhorse tokamak of the United States for the past few decades.”

  • Could used beer yeast be the solution to heavy metal contamination in water?

    A new analysis by researchers at MIT’s Center for Bits and Atoms (CBA) has found that inactive yeast could be effective as an inexpensive, abundant, and simple material for removing lead contamination from drinking water supplies. The study shows that this approach can be efficient and economic, even down to part-per-billion levels of contamination. Serious damage to human health is known to occur even at these low levels.

    The method is so efficient that the team has calculated that the waste yeast discarded from a single brewery in Boston would be enough to treat the city’s entire water supply. Such a fully sustainable system would not only purify the water but also divert what would otherwise be a waste stream needing disposal.

    The findings are detailed today in the journal Nature Communications Earth and Environment, in a paper by MIT Research Scientist Patritsia Stathatou; Brown University postdoc and MIT Visiting Scholar Christos Athanasiou; MIT Professor Neil Gershenfeld, the director of CBA; and nine others at MIT, Brown, Wellesley College, Nanyang Technological University, and National Technical University of Athens.

    Lead and other heavy metals in water are a significant global problem that continues to grow because of electronic waste and discharges from mining operations. In the U.S. alone, more than 12,000 miles of waterways are impacted by acidic mine-drainage-water rich in heavy metals, the country’s leading source of water pollution. And unlike organic pollutants, most of which can be eventually broken down, heavy metals don’t biodegrade, but persist indefinitely and bioaccumulate. They are either impossible or very expensive to completely remove by conventional methods such as chemical precipitation or membrane filtration.

    Lead is highly toxic, even at tiny concentrations, especially affecting children as they grow. The European Union has reduced its standard for allowable lead in drinking water from 10 parts per billion to 5 parts per billion. In the U.S., the Environmental Protection Agency has declared that no level at all in water supplies is safe. And average levels in bodies of surface water globally are 10 times higher than they were 50 years ago, ranging from 10 parts per billion in Europe to hundreds of parts per billion in South America.

    “We don’t just need to minimize the existence of lead; we need to eliminate it in drinking water,” says Stathatou. “And the fact is that the conventional treatment processes are not doing this effectively when the initial concentrations they have to remove are low, in the parts-per-billion scale and below. They either fail to completely remove these trace amounts, or in order to do so they consume a lot of energy and they produce toxic byproducts.”

    The solution studied by the MIT team is not a new one — a process called biosorption, in which inactive biological material is used to remove heavy metals from water, has been known for a few decades. But the process has been studied and characterized only at much higher concentrations, at more than one part-per-million levels. “Our study demonstrates that the process can indeed work efficiently at the much lower concentrations of typical real-world water supplies, and investigates in detail the mechanisms involved in the process,” Athanasiou says.

    The team studied the use of a type of yeast widely used in brewing and in industrial processes, called S. cerevisiae, on pure water spiked with trace amounts of lead. They demonstrated that a single gram of the inactive, dried yeast cells can remove up to 12 milligrams of lead in aqueous solutions with initial lead concentrations below 1 part per million. They also showed that the process is very rapid, taking less than five minutes to complete.

    Because the yeast cells used in the process are inactive and desiccated, they require no particular care, unlike other processes that rely on living biomass to perform such functions which require nutrients and sunlight to keep the materials active. What’s more, yeast is abundantly available already, as a waste product from beer brewing and from various other fermentation-based industrial processes.

    Stathatou has estimated that cleaning the water supply for a city the size of Boston, which uses about 200 million gallons a day, would require about 20 tons of yeast per day, or about 7,000 tons per year. By comparison, one single brewery, the Boston Beer Company, generates 20,000 tons a year of surplus yeast that is no longer useful for fermentation.
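
    The scale comparison is easy to check with the figures quoted above; the short sketch below takes the per-day yeast requirement as given rather than re-deriving it from the uptake capacity:

    ```python
    # Sanity check of the supply-versus-demand comparison, using only the figures quoted above.

    yeast_tons_per_day = 20                           # estimated need for Boston's ~200 million gallons/day
    yeast_tons_per_year = yeast_tons_per_day * 365    # ~7,300 tons/year, consistent with the ~7,000 quoted

    brewery_surplus_tons_per_year = 20_000            # surplus yeast from the Boston Beer Company alone

    print(f"Yeast needed per year:  ~{yeast_tons_per_year:,} tons")
    print(f"One brewery's surplus:   {brewery_surplus_tons_per_year:,} tons")
    print(f"Coverage: {brewery_surplus_tons_per_year / yeast_tons_per_year:.1f}x the estimated need")
    ```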

    The researchers also performed a series of tests to determine that the yeast cells are responsible for biosorption. Athanasiou says that “exploring biosorption mechanisms at such challenging concentrations is a tough problem. We were the first to use a mechanics perspective to unravel biosorption mechanisms, and we discovered that the mechanical properties of the yeast cells change significantly after lead uptake. This provides fundamentally new insights for the process.”

    Devising a practical system for processing the water and retrieving the yeast, which could then be separated from the lead for reuse, is the next stage of the team’s research, they say.

    “To scale up the process and actually put it in place, you need to embed these cells in a kind of filter, and this is the work that’s currently ongoing,” Stathatou says. They are also looking at ways of recovering both the cells and the lead. “We need to conduct further experiments, but there is the option to get both back,” she says.

    The same material can potentially be used to remove other heavy metals, such as cadmium and copper, but that will require further research to quantify the effective rates for those processes, the researchers say.

    “This research revealed a very promising, inexpensive, and environmentally friendly solution for lead removal,” says Sivan Zamir, vice president of Xylem Innovation Labs, a water technology research firm, who was not associated with this research. “It also deepened our understanding of the biosorption process, paving the way for the development of materials tailored to removal of other heavy metals.”

    The team also included Marios Tsezos at the National Technical University of Athens, in Greece; John Gross at Wellesley College; Camron Blackburn, Filippos Tourlomousis, and Andreas Mershin at MIT’s CBA; Brian Sheldon, Nitin Padture, and Eric Darling at Brown University; and Huajian Gao at Brown University and Nanyang Technological University, in Singapore.