More stories

  • Explained: Why perovskites could take solar cells to new heights

    Perovskites hold promise for creating solar panels that could be easily deposited onto most surfaces, including flexible and textured ones. These materials would also be lightweight, cheap to produce, and as efficient as today’s leading photovoltaic materials, which are mainly silicon. They’re the subject of increasing research and investment, but companies looking to harness their potential do have to address some remaining hurdles before perovskite-based solar cells can be commercially competitive.

    The term perovskite refers not to a specific material, the way silicon or cadmium telluride (other leading contenders in the photovoltaic realm) do, but to a whole family of compounds. The perovskite family of solar materials is named for its structural similarity to a mineral called perovskite, which was discovered in 1839 and named after Russian mineralogist L.A. Perovski.

    The original mineral perovskite, which is calcium titanium oxide (CaTiO3), has a distinctive crystal configuration. It has a three-part structure, whose components have come to be labeled A, B and X, in which lattices of the different components are interlaced. The family of perovskites consists of the many possible combinations of elements or molecules that can occupy each of the three components and form a structure similar to that of the original perovskite itself. (Some researchers even bend the rules a little by naming other crystal structures with similar elements “perovskites,” although this is frowned upon by crystallographers.)

    “You can mix and match atoms and molecules into the structure, with some limits. For instance, if you try to stuff a molecule that’s too big into the structure, you’ll distort it. Eventually you might cause the 3D crystal to separate into a 2D layered structure, or lose ordered structure entirely,” says Tonio Buonassisi, professor of mechanical engineering at MIT and director of the Photovoltaics Research Laboratory. “Perovskites are highly tunable, like a build-your-own-adventure type of crystal structure,” he says.

    That structure of interlaced lattices consists of ions or charged molecules, two of them (A and B) positively charged and the other one (X) negatively charged. The A and B ions are typically of quite different sizes, with the A being larger. 
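    The ion-size constraints described above are often summarized by the Goldschmidt tolerance factor, a standard back-of-the-envelope check (not mentioned in the article) for whether a given A-B-X combination can form a stable perovskite lattice. Below is a minimal sketch using approximate, illustrative ionic radii for methylammonium lead iodide, a common lead halide perovskite:

    ```python
    import math

    def tolerance_factor(r_a, r_b, r_x):
        """Goldschmidt tolerance factor t = (r_A + r_X) / (sqrt(2) * (r_B + r_X)).

        Values of roughly 0.8-1.0 suggest a stable 3D perovskite; a much larger
        A ion (t > 1) distorts the lattice, consistent with the "too big a
        molecule" problem described above.
        """
        return (r_a + r_x) / (math.sqrt(2) * (r_b + r_x))

    # Approximate effective ionic radii in angstroms (illustrative values):
    # methylammonium (A) ~2.17, Pb2+ (B) ~1.19, I- (X) ~2.20
    t = tolerance_factor(2.17, 1.19, 2.20)
    print(f"t = {t:.2f}")  # falls in the ~0.8-1.0 window for a stable perovskite
    ```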

    Within the overall category of perovskites, there are a number of types, including metal oxide perovskites, which have found applications in catalysis and in energy storage and conversion, such as in fuel cells and metal-air batteries. But a main focus of research activity for more than a decade has been on lead halide perovskites, according to Buonassisi.

    Within that category, there is still a legion of possibilities, and labs around the world are racing through the tedious work of trying to find the variations that show the best performance in efficiency, cost, and durability — which has so far been the most challenging of the three.

    Many teams have also focused on variations that eliminate the use of lead, to avoid its environmental impact. Buonassisi notes, however, that “consistently over time, the lead-based devices continue to improve in their performance, and none of the other compositions got close in terms of electronic performance.” Work continues on exploring alternatives, but for now none can compete with the lead halide versions.

    One of the great advantages perovskites offer is their high tolerance of defects in the structure, he says. Unlike silicon, which requires extremely high purity to function well in electronic devices, perovskites can function well even with numerous imperfections and impurities.

    Searching for promising new candidate compositions for perovskites is a bit like looking for a needle in a haystack, but recently researchers have come up with a machine-learning system that can greatly streamline this process. This new approach could lead to much faster development of new alternatives, says Buonassisi, who was a co-author of that research.

    While perovskites continue to show great promise, and several companies are already gearing up to begin some commercial production, durability remains the biggest obstacle they face. While silicon solar panels retain up to 90 percent of their power output after 25 years, perovskites degrade much faster. Great progress has been made — initial samples lasted only a few hours, then weeks or months, but newer formulations have usable lifetimes of up to a few years, suitable for some applications where longevity is not essential.

    From a research perspective, Buonassisi says, one advantage of perovskites is that they are relatively easy to make in the lab — the chemical constituents assemble readily. But that’s also their downside: “The material goes together very easily at room temperature,” he says, “but it also comes apart very easily at room temperature. Easy come, easy go!”

    To deal with that issue, most researchers are focused on using various kinds of protective materials to encapsulate the perovskite, protecting it from exposure to air and moisture. But others are studying the exact mechanisms that lead to that degradation, in hopes of finding formulations or treatments that are more inherently robust. A key finding is that a process called autocatalysis is largely to blame for the breakdown.

    In autocatalysis, as soon as one part of the material starts to degrade, its reaction products act as catalysts to start degrading the neighboring parts of the structure, and a runaway reaction gets underway. A similar problem existed in the early research on some other electronic materials, such as organic light-emitting diodes (OLEDs), and was eventually solved by adding additional purification steps to the raw materials, so a similar solution may be found in the case of perovskites, Buonassisi suggests.

    Buonassisi and his co-researchers recently completed a study showing that once perovskites reach a usable lifetime of at least a decade, their much lower initial cost would be sufficient to make them economically viable as a substitute for silicon in large, utility-scale solar farms.
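    The economics at play here can be illustrated with a naive levelized-cost calculation. All numbers below are invented for illustration and are not figures from the study; they simply show how a cheaper module with a shorter life can match a pricier, longer-lived one once its lifetime crosses a threshold:

    ```python
    def cost_per_kwh(module_cost, lifetime_years, annual_kwh):
        """Naive levelized cost: upfront module cost spread over lifetime output.
        Ignores degradation, financing, and balance-of-system costs."""
        return module_cost / (lifetime_years * annual_kwh)

    # Illustrative assumptions (not from the study):
    silicon = cost_per_kwh(module_cost=250, lifetime_years=25, annual_kwh=400)
    perovskite_short = cost_per_kwh(module_cost=100, lifetime_years=3, annual_kwh=400)
    perovskite_decade = cost_per_kwh(module_cost=100, lifetime_years=10, annual_kwh=400)

    print(f"silicon:          {silicon*100:.2f} cents/kWh")           # 2.50
    print(f"3-yr perovskite:  {perovskite_short*100:.2f} cents/kWh")  # 8.33
    print(f"10-yr perovskite: {perovskite_decade*100:.2f} cents/kWh") # 2.50
    ```

    Under these made-up numbers, a three-year module is far more expensive per kilowatt-hour despite its low sticker price, while a ten-year lifetime brings it level with silicon.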

    Overall, progress in the development of perovskites has been impressive and encouraging, he says. With just a few years of work, researchers have already achieved efficiencies comparable to levels that cadmium telluride (CdTe), “which has been around for much longer, is still struggling to achieve,” he says. “The ease with which these higher performances are reached in this new material is almost stupefying.” Comparing the amount of research time spent to achieve a 1 percent improvement in efficiency, he says, the progress on perovskites has been somewhere between 100 and 1000 times faster than that on CdTe. “That’s one of the reasons it’s so exciting,” he says.

  • MIT engineers design surfaces that make water boil more efficiently

    The boiling of water or other fluids is an energy-intensive step at the heart of a wide range of industrial processes, including most electricity generating plants, many chemical production systems, and even cooling systems for electronics.

    Improving the efficiency of systems that heat and evaporate water could significantly reduce their energy use. Now, researchers at MIT have found a way to do just that, with a specially tailored surface treatment for the materials used in these systems.

    The improved efficiency comes from a combination of three different kinds of surface modifications, at different size scales. The new findings are described in the journal Advanced Materials in a paper by recent MIT graduate Youngsup Song PhD ’21, Ford Professor of Engineering Evelyn Wang, and four others at MIT. The researchers note that this initial finding is still at a laboratory scale, and more work is needed to develop a practical, industrial-scale process.

    There are two key parameters that describe the boiling process: the heat transfer coefficient (HTC) and the critical heat flux (CHF). In materials design, there’s generally a tradeoff between the two, so anything that improves one of these parameters tends to make the other worse. But both are important for the efficiency of the system, and now, after years of work, the team has found a way of significantly improving both properties at the same time, through their combination of different textures added to a material’s surface.

    “Both parameters are important,” Song says, “but enhancing both parameters together is kind of tricky because they have intrinsic trade off.” The reason for that, he explains, is “because if we have lots of bubbles on the boiling surface, that means boiling is very efficient, but if we have too many bubbles on the surface, they can coalesce together, which can form a vapor film over the boiling surface.” That film introduces resistance to the heat transfer from the hot surface to the water. “If we have vapor in between the surface and water, that prevents the heat transfer efficiency and lowers the CHF value,” he says.
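    In rough terms, the two parameters combine as follows. The numbers here are illustrative assumptions, not values from the paper:

    ```python
    def boiling_heat_flux(htc_w_per_m2k, superheat_k):
        """Heat flux q'' = h * (T_wall - T_sat). Valid only below the critical
        heat flux; beyond CHF a vapor film forms and heat transfer collapses."""
        return htc_w_per_m2k * superheat_k

    # Illustrative values: h = 50 kW/m^2-K, 10 K of wall superheat
    q = boiling_heat_flux(50_000, 10)   # 500 kW/m^2
    chf = 1_200_000                     # assumed CHF, in W/m^2

    print(f"operating flux: {q/1e3:.0f} kW/m^2, margin to CHF: {(chf - q)/1e3:.0f} kW/m^2")
    # A higher HTC moves the same flux at lower superheat, but if bubbles
    # coalesce into a film the effective CHF drops and the margin vanishes.
    ```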

    Song, who is now a postdoc at Lawrence Berkeley National Laboratory, carried out much of the research as part of his doctoral thesis work at MIT. While the various components of the new surface treatment he developed had been previously studied, the researchers say this work is the first to show that these methods could be combined to overcome the tradeoff between the two competing parameters.

    Adding a series of microscale cavities, or dents, to a surface is a way of controlling the way bubbles form on that surface, keeping them effectively pinned to the locations of the dents and preventing them from spreading out into a heat-resisting film. In this work, the researchers created an array of 10-micrometer-wide dents separated by about 2 millimeters to prevent film formation. But that separation also reduces the concentration of bubbles at the surface, which can reduce the boiling efficiency. To compensate for that, the team introduced a much smaller-scale surface treatment, creating tiny bumps and ridges at the nanometer scale, which increases the surface area and promotes the rate of evaporation under the bubbles.

    In these experiments, the cavities were made in the centers of a series of pillars on the material’s surface. These pillars, combined with nanostructures, promote wicking of liquid from the base to their tops, and this enhances the boiling process by providing more surface area exposed to the water. In combination, the three “tiers” of the surface texture — the cavity separation, the posts, and the nanoscale texturing — provide a greatly enhanced efficiency for the boiling process, Song says.

    “Those micro cavities define the position where bubbles come up,” he says. “But by separating those cavities by 2 millimeters, we separate the bubbles and minimize the coalescence of bubbles.” At the same time, the nanostructures promote evaporation under the bubbles, and the capillary action induced by the pillars supplies liquid to the bubble base. That maintains a layer of liquid water between the boiling surface and the bubbles of vapor, which enhances the maximum heat flux.

    Although their work has confirmed that the combination of these kinds of surface treatments can work and achieve the desired effects, this work was done under small-scale laboratory conditions that could not easily be scaled up to practical devices, Wang says. “These kinds of structures we’re making are not meant to be scaled in its current form,” she says, but rather were used to prove that such a system can work. One next step will be to find alternative ways of creating these kinds of surface textures so these methods could more easily be scaled up to practical dimensions.

    “Showing that we can control the surface in this way to get enhancement is a first step,” she says. “Then the next step is to think about more scalable approaches.” For example, though the pillars on the surface in these experiments were created using clean-room methods commonly used to produce semiconductor chips, there are other, less demanding ways of creating such structures, such as electrodeposition. There are also a number of different ways to produce the surface nanostructure textures, some of which may be more easily scalable.

    There may be some significant small-scale applications that could use this process in its present form, such as the thermal management of electronic devices, an area that is becoming more important as semiconductor devices get smaller and managing their heat output becomes ever more important. “There’s definitely a space there where this is really important,” Wang says.

    Even those kinds of applications will take some time to develop because typically thermal management systems for electronics use liquids other than water, known as dielectric liquids. These liquids have different surface tension and other properties than water, so the dimensions of the surface features would have to be adjusted accordingly. Work on these differences is one of the next steps for the ongoing research, Wang says.

    This same multiscale structuring technique could also be applied to different liquids, Song says, by adjusting the dimensions to account for the different properties of the liquids. “Those kinds of details can be changed, and that can be our next step,” he says.

    The team also included Carlos Diaz-Martin, Lenan Zhang, Hyeongyun Cha, and Yajing Zhao, all at MIT. The work was supported by the Advanced Research Projects Agency-Energy (ARPA-E), the Air Force Office of Scientific Research, and the Singapore-MIT Alliance for Research and Technology, and made use of the MIT.nano facilities.

  • Pursuing progress at the nanoscale

    Last fall, a team of five senior undergraduate nuclear engineering students met once a week for dinners where they took turns cooking and debated how to tackle a particularly daunting challenge set forth in their program’s capstone course, 22.033 (Nuclear Systems Design Project).

    In past semesters, students had free rein to identify any real-world problem that interested them and to solve it through team-driven prototyping and design. This past fall worked a little differently: the team still tackled a daunting problem, but this time the assignment was to explore a particular design challenge on MIT’s campus. Rising to the challenge, the team spent the semester seeking a feasible way to introduce a highly coveted technology at MIT.

    Housed inside a big blue dome is the MIT Nuclear Reactor Laboratory (NRL). The reactor is used to conduct a wide range of science experiments, but in recent years, there have been multiple attempts to implement an instrument at the reactor that could probe the structure of materials, molecules, and devices. With this technology, researchers could model the structure of a wide range of materials and complex liquids made of polymers or containing nanoscale inhomogeneities that differ from the larger mass. On campus, researchers for the first time could conduct experiments to better understand the properties and functions of anything placed in front of a neutron beam emanating from the reactor core.

    The impact of this would be immense. If the reactor could be adapted to conduct this advanced technique, known as small-angle neutron scattering (SANS), it would open up a whole new world of research at MIT.

    “It’s essentially using the nuclear reactor as an incredibly high-performance camera that researchers from all over MIT would be very interested in using, including nuclear science and engineering, chemical engineering, biological engineering, and materials science, who currently use this tool at other institutions,” says Zachary Hartwig, Nuclear Systems Design Project professor and the MIT Robert N. Noyce Career Development Professor.

    SANS instruments have been installed at fewer than 20 facilities worldwide, and MIT researchers have previously considered implementing the capability at the reactor to help MIT expand community-wide access to SANS. Last fall, this mission went from long-time campus dream to potential reality as it became the design challenge that Hartwig’s students confronted. Despite having no experience with SANS, the team embraced the challenge, taking the first steps to figure out how to bring this technology to campus.

    “I really loved the idea that what we were doing could have a very real impact,” says Zoe Fisher, Nuclear Systems Design Project team member and now graduate nuclear engineering student.

    Each fall, Hartwig uses the course to introduce students to real-world challenges with strict constraints on solutions, and last fall’s project came with plenty of thorny design questions for students to tackle. First was the size limitation posed by the space available at MIT’s reactor. In SANS facilities around the world, the average length of the instrument is 30 meters, but at NRL, the space available is approximately 7.5 meters. Second, these instruments can cost up to $30 million, which is far outside NRL’s proposed budget of $3 million. That meant not only did students need to design an instrument that would work in a smaller space, but also one that could be built for a tenth of the typical cost.
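    The length constraint matters because, in small-angle scattering, the largest structure an instrument can resolve grows with the neutron flight path. A hypothetical sketch (the wavelength and beamstop radius below are assumed round numbers, not figures from the article):

    ```python
    import math

    def d_max_angstrom(detector_dist_m, wavelength_a=6.0, beamstop_radius_m=0.005):
        """Largest resolvable length scale d_max = 2*pi / q_min, where
        q_min ~ (2*pi / lambda) * theta_min and theta_min = r_beamstop / L.
        Uses the small-angle approximation sin(theta) ~ theta, which
        simplifies to d_max = lambda * L / r_beamstop."""
        theta_min = beamstop_radius_m / detector_dist_m
        q_min = (2 * math.pi / wavelength_a) * theta_min   # inverse angstroms
        return 2 * math.pi / q_min

    # Halving the flight path halves the largest probe-able structure:
    print(f"15.0 m flight path: d_max ~ {d_max_angstrom(15.0):,.0f} A")
    print(f"3.75 m flight path: d_max ~ {d_max_angstrom(3.75):,.0f} A")
    ```

    Under these assumptions, squeezing the instrument into a quarter of the space cuts the maximum accessible length scale by the same factor, which is the kind of tradeoff the students had to design around.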

    “The challenge was not just implementing one of these instruments,” Hartwig says. “It was whether the students could significantly innovate beyond the ‘traditional’ approach to doing SANS to meet the daunting constraints that we have at the MIT Reactor.”

    Because NRL actually wants to pursue this project, the students had to get creative, and their creative potential was precisely why the idea arose to get them involved, says Jacopo Buongiorno, the director of science and technology at NRL and Tokyo Electric Power Company Professor in Nuclear Engineering. “Involvement in real-world projects that answer questions about feasibility and cost of new technology and capabilities is a key element of a successful undergraduate education at MIT,” Buongiorno says.

    Students say it would have been impossible to tackle the problem without the help of co-instructor Boris Khaykovich, a research scientist at NRL who specializes in neutron instrumentation.

    Over the past two decades, Khaykovich has watched SANS become the most popular technique for analyzing material structure. As competition for beam time at the few existing facilities intensified, access declined; today only the experiments that pass the most stringent review get beam time. What Khaykovich hopes to bring to MIT is improved access to SANS, through an instrument suitable for the majority of run-of-the-mill experiments, even if it is not as powerful as the state-of-the-art national SANS facilities. Such an instrument could still serve a wide range of researchers who currently have few opportunities to pursue SANS experiments.

    “In the U.S., we don’t have a simple, small, day-to-day SANS instrument,” Khaykovich says.

    With Khaykovich’s help, nuclear engineering undergraduate student Liam Hines says his team was able to go much further with their assessment than they would have starting from scratch, with no background in SANS. This project was unlike anything they’d ever been asked to do as MIT students, and for students like Hines, who contributed to NRL research his entire time on campus, it was a project that hit close to home. “We were imagining this thing that might be designed at MIT,” Hines says.

    Fisher and Hines were joined by undergraduate nuclear engineering student team members Francisco Arellano, Jovier Jimenez, and Brendan Vaughan. Together, they devised a design that surprised both Khaykovich and Hartwig, identifying creative solutions that overcame all limitations and significantly reduced cost.

    Their team’s final project featured an adaptation of a conical design that was recently experimentally tested in Japan but is not yet in general use. The conical design allowed them to maximize precision while working within the other constraints, resulting in an instrument design that exceeded Hartwig’s expectations. The students also showed the feasibility of using an alternative type of low-cost, glass-based neutron detector to collect the scattering data. By avoiding the need for a traditional detector based on helium-3, which is increasingly scarce and exorbitantly expensive, such a detector would dramatically reduce cost and increase availability. Their final presentation indicated the day-to-day SANS instrument could be built at only 4.5 meters long and at an estimated cost of less than $1 million.

    Khaykovich credited the students for their enthusiasm, bouncing ideas off each other and exploring as much terrain as possible by interviewing experts who implemented SANS at other facilities. “They showed quite a perseverance and an ability to go deep into a very unfamiliar territory for them,” Khaykovich says.

    Hines says that Hartwig emphasized the importance of fielding expert opinions to more quickly discover optimal solutions. Fisher says that based on their research, if their design is funded, it would make SANS “more accessible to research for the sake of knowledge,” rather than dominated by industry research.

    Hartwig and Khaykovich agreed the students’ final project results showed a baseline of how MIT could pursue SANS technology cheaply, and when NRL proceeds with its own design process, Hartwig says, “The students’ work might actually change the cost of the feasibility of this at MIT in a way that if we hadn’t run the class, we would never have thought about doing.”

    Buongiorno says as they move forward with the project, NRL staff will consult students’ findings.

    “Indeed, the students developed original technical approaches, which are now being further explored by the NRL staff and may ultimately lead to the deployment of this new important capability on the MIT campus,” Buongiorno says.

    Hartwig says it’s a goal of the Nuclear Systems Design Project course to empower students to learn how to lead teams and embrace challenges, so they can be effective leaders advancing novel solutions in research and industry. “I think it helps teach people to be agile, to be flexible, to have confidence that they can actually go off and learn what they don’t know and solve problems they may think are bigger than themselves,” he says.

    It’s common for past classes of Nuclear Systems Design Project students to continue working on ideas beyond the course, and some students have even launched companies from their project research. What’s less common is for Hartwig’s students to actively serve as engineers pointed to a particular campus problem that’s expected to be resolved in the next few years.

    “In this case, they’re actually working on something real,” Hartwig says. “Their ideas are going to very much influence what we hope will be a facility that gets built at the reactor.”

    For students, it was exciting to inform a major instrument proposal that will soon be submitted to federal funding agencies, and for Hines, it became a chance to make his mark at NRL.

    “This is a lab I’ve been contributing to my entire time at MIT, and then through this project, I finished my time at MIT contributing in a much larger sense,” Hines says.

  • Getting the carbon out of India’s heavy industries

    The world’s third largest carbon emitter after China and the United States, India ranks seventh in a major climate risk index. Unless India, along with the nearly 200 other signatory nations of the Paris Agreement, takes aggressive action to keep global warming well below 2 degrees Celsius relative to preindustrial levels, physical and financial losses from floods, droughts, and cyclones could become more severe than they are today. So, too, could health impacts associated with the hazardous air pollution levels now affecting more than 90 percent of its population.  

    To address both climate and air pollution risks and meet its population’s escalating demand for energy, India will need to dramatically decarbonize its energy system in the coming decades. To that end, its initial Paris Agreement climate policy pledge calls for a reduction in carbon dioxide intensity of GDP by 33-35 percent by 2030 from 2005 levels, and an increase in non-fossil-fuel-based power to about 40 percent of cumulative installed capacity in 2030. At the COP26 international climate change conference, India announced more aggressive targets, including the goal of achieving net-zero emissions by 2070.

    Meeting its climate targets will require emissions reductions in every economic sector, including those where emissions are particularly difficult to abate. In such sectors, which involve energy-intensive industrial processes (production of iron and steel; nonferrous metals such as copper, aluminum, and zinc; cement; and chemicals), decarbonization options are limited and more expensive than in other sectors. Whereas replacing coal and natural gas with solar and wind could lower carbon dioxide emissions in electric power generation and transportation, no easy substitutes can be deployed in many heavy industrial processes that release CO2 into the air as a byproduct.

    However, other methods could be used to lower the emissions associated with these processes, which draw upon roughly 50 percent of India’s natural gas, 25 percent of its coal, and 20 percent of its oil. Evaluating the potential effectiveness of such methods in the next 30 years, a new study in the journal Energy Economics led by researchers at the MIT Joint Program on the Science and Policy of Global Change is the first to explicitly explore emissions-reduction pathways for India’s hard-to-abate sectors.

    Using an enhanced version of the MIT Economic Projection and Policy Analysis (EPPA) model, the study assesses existing emissions levels in these sectors and projects how much they can be reduced by 2030 and 2050 under different policy scenarios. Aimed at decarbonizing industrial processes, the scenarios include the use of subsidies to increase electricity use, incentives to replace coal with natural gas, measures to improve industrial resource efficiency, policies to put a price on carbon, carbon capture and storage (CCS) technology, and hydrogen in steel production.

    The researchers find that India’s 2030 Paris Agreement pledge may still drive up fossil fuel use and associated greenhouse gas emissions, with projected carbon dioxide emissions from hard-to-abate sectors rising by about 2.6 times from 2020 to 2050. But scenarios that also promote electrification, natural gas support, and resource efficiency in hard-to-abate sectors can lower their CO2 emissions by 15-20 percent.
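    Normalizing 2020 hard-to-abate emissions to 1.0 makes the arithmetic behind these scenario figures concrete, and shows that even with those measures, projected 2050 emissions remain more than double today's:

    ```python
    def projected_emissions(base, growth_factor, reduction_frac):
        """Illustrative arithmetic for the scenario figures quoted above:
        demand growth scales the baseline, policy measures trim a fraction."""
        return base * growth_factor * (1 - reduction_frac)

    # 2020 hard-to-abate emissions normalized to 1.0; ~2.6x growth by 2050
    business_as_usual = projected_emissions(1.0, 2.6, 0.0)    # 2.6x
    with_measures_low = projected_emissions(1.0, 2.6, 0.15)   # ~2.21x
    with_measures_high = projected_emissions(1.0, 2.6, 0.20)  # ~2.08x
    print(business_as_usual, with_measures_low, with_measures_high)
    ```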

    While appearing to move the needle in the right direction, those reductions are ultimately canceled out by increased demand for the products that emerge from these sectors. So what’s the best path forward?

    The researchers conclude that only the incentive of carbon pricing or the advance of disruptive technology can move hard-to-abate sector emissions below their current levels. To achieve significant emissions reductions, they maintain, the price of carbon must be high enough to make CCS economically viable. In that case, reductions of 80 percent below current levels could be achieved by 2050.

    “Absent major support from the government, India will be unable to reduce carbon emissions in its hard-to-abate sectors in alignment with its climate targets,” says MIT Joint Program deputy director Sergey Paltsev, the study’s lead author. “A comprehensive government policy could provide robust incentives for the private sector in India and generate favorable conditions for foreign investments and technology advances. We encourage decision-makers to use our findings to design efficient pathways to reduce emissions in those sectors, and thereby help lower India’s climate and air pollution-related health risks.”

  • Tapping into the million-year energy source below our feet

    There’s an abandoned coal power plant in upstate New York that most people regard as a useless relic. But MIT’s Paul Woskov sees things differently.

    Woskov, a research engineer in MIT’s Plasma Science and Fusion Center, notes the plant’s power turbine is still intact and the transmission lines still run to the grid. Using an approach he’s been working on for the last 14 years, he’s hoping it will be back online, completely carbon-free, within the decade.

    In fact, Quaise Energy, the company commercializing Woskov’s work, believes if it can retrofit one power plant, the same process will work on virtually every coal and gas power plant in the world.

    Quaise is hoping to accomplish those lofty goals by tapping into the energy source below our feet. The company plans to vaporize enough rock to create the world’s deepest holes and harvest geothermal energy at a scale that could satisfy human energy consumption for millions of years. They haven’t yet solved all the related engineering challenges, but Quaise’s founders have set an ambitious timeline to begin harvesting energy from a pilot well by 2026.

    The plan would be easier to dismiss as unrealistic if it were based on a new and unproven technology. But Quaise’s drilling systems center around a microwave-emitting device called a gyrotron that has been used in research and manufacturing for decades.

    “This will happen quickly once we solve the immediate engineering problems of transmitting a clean beam and having it operate at a high energy density without breakdown,” explains Woskov, who is not formally affiliated with Quaise but serves as an advisor. “It’ll go fast because the underlying technology, gyrotrons, are commercially available. You could place an order with a company and have a system delivered right now — granted, these beam sources have never been used 24/7, but they are engineered to be operational for long time periods. In five or six years, I think we’ll have a plant running if we solve these engineering problems. I’m very optimistic.”

    Woskov and many other researchers have been using gyrotrons to heat material in nuclear fusion experiments for decades. It wasn’t until 2008, however, after the MIT Energy Initiative (MITEI) published a request for proposals on new geothermal drilling technologies, that Woskov thought of using gyrotrons for a new application.

    “[Gyrotrons] haven’t been well-publicized in the general science community, but those of us in fusion research understood they were very powerful beam sources — like lasers, but in a different frequency range,” Woskov says. “I thought, why not direct these high-power beams, instead of into fusion plasma, down into rock and vaporize the hole?”

    As power from other renewable energy sources has exploded in recent decades, geothermal energy has plateaued, mainly because geothermal plants only exist in places where natural conditions allow for energy extraction at relatively shallow depths of up to 400 feet beneath the Earth’s surface. At a certain point, conventional drilling becomes impractical because deeper crust is both hotter and harder, which wears down mechanical drill bits.
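    The depth barrier can be sketched with a typical continental geothermal gradient. The figures below are rough assumptions for illustration, not from the article:

    ```python
    def depth_km_for_temp(target_c, surface_c=15.0, gradient_c_per_km=25.0):
        """Depth needed to reach a target rock temperature, assuming a uniform
        geothermal gradient (~25 C/km is a commonly cited continental average;
        real gradients vary widely by location)."""
        return (target_c - surface_c) / gradient_c_per_km

    # Reaching 500 C rock, the temperature range deep geothermal targets:
    print(f"{depth_km_for_temp(500):.1f} km")  # ~19.4 km under these assumptions
    ```

    That is roughly an order of magnitude deeper than typical geothermal wells, which is why vaporizing rock rather than grinding through it becomes attractive.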

    Woskov’s idea to use gyrotron beams to vaporize rock sent him on a research journey that has never really stopped. With some funding from MITEI, he began running tests, quickly filling his office with small rock formations he’d blasted with millimeter waves from a small gyrotron in MIT’s Plasma Science and Fusion Center.

    Woskov displaying samples in his lab in 2016.

    Photo: Paul Rivenberg


    Around 2018, Woskov’s rocks got the attention of Carlos Araque ’01, SM ’02, who had spent his career in the oil and gas industry and was the technical director of MIT’s investment fund The Engine at the time.

    That year, Araque and Matt Houde, who’d been working with geothermal company AltaRock Energy, founded Quaise. Quaise was soon given a grant by the Department of Energy to scale up Woskov’s experiments using a larger gyrotron.

    With the larger machine, the team hopes to vaporize a hole 10 times the depth of Woskov’s lab experiments. That is expected to be accomplished by the end of this year. After that, the team will vaporize a hole 10 times the depth of the previous one — what Houde calls a 100-to-1 hole.

    “That’s something [the DOE] is particularly interested in, because they want to address the challenges posed by material removal over those greater lengths — in other words, can we show we’re fully flushing out the rock vapors?” Houde explains. “We believe the 100-to-1 test also gives us the confidence to go out and mobilize a prototype gyrotron drilling rig in the field for the first field demonstrations.”

    Tests on the 100-to-1 hole are expected to be completed sometime next year. Quaise is also hoping to begin vaporizing rock in field tests late next year. The short timeline reflects the progress Woskov has already made in his lab.

    Although more engineering research is needed, ultimately, the team expects to be able to drill and operate these geothermal wells safely. “We believe, because of Paul’s work at MIT over the past decade, that most if not all of the core physics questions have been answered and addressed,” Houde says. “It’s really engineering challenges we have to answer, which doesn’t mean they’re easy to solve, but we’re not working against the laws of physics, to which there is no answer. It’s more a matter of overcoming some of the more technical and cost considerations to making this work at a large scale.”

    The company plans to begin harvesting energy by 2026 from pilot geothermal wells that reach rock temperatures of up to 500 C. From there, the team hopes to begin repurposing coal and natural gas plants using its system.

    “We believe, if we can drill down to 20 kilometers, we can access these super-hot temperatures in greater than 90 percent of locations across the globe,” Houde says.
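As a rough plausibility check on those numbers: with a typical continental geothermal gradient of roughly 25 to 30 C per kilometer (an assumed average for illustration; real gradients vary widely by location, and this is not a figure from the article), temperature at depth can be sketched as:

```python
def temperature_at_depth(depth_km, surface_c=15.0, gradient_c_per_km=25.0):
    """Crude linear estimate of crustal temperature at a given depth.

    Assumes a constant geothermal gradient, which is only a rough
    approximation of real crustal conditions.
    """
    return surface_c + gradient_c_per_km * depth_km

# At 20 km, a 25 C/km gradient gives about 515 C, consistent with the
# "super-hot" ~500 C temperatures Quaise is targeting.
print(temperature_at_depth(20))  # 515.0
```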

    Quaise’s work with the DOE is addressing what it sees as the biggest remaining questions about drilling at unprecedented depths and pressures, such as removing material and determining the best casing to keep the hole stable and open. For the latter problem of well stability, Houde believes additional computer modeling is needed and expects to complete that modeling by the end of 2024.

    By drilling the holes at existing power plants, Quaise will be able to move faster than if it had to get permits to build new plants and transmission lines. And by making its millimeter-wave drilling equipment compatible with the existing global fleet of drilling rigs, the company will also be able to tap into the oil and gas industry’s global workforce.

    “At these high temperatures [we’re accessing], we’re producing steam very close to, if not exceeding, the temperature that today’s coal and gas-fired power plants operate at,” Houde says. “So, we can go to existing power plants and say, ‘We can replace 95 to 100 percent of your coal use by developing a geothermal field and producing steam from the Earth, at the same temperature you’re burning coal to run your turbine, directly replacing carbon emissions.’”

    Transforming the world’s energy systems in such a short timeframe is something the founders see as critical to help avoid the most catastrophic global warming scenarios.

    “There have been tremendous gains in renewables over the last decade, but the big picture today is we’re not going nearly fast enough to hit the milestones we need for limiting the worst impacts of climate change,” Houde says. “[Deep geothermal] is a power resource that can scale anywhere and has the ability to tap into a large workforce in the energy industry to readily repackage their skills for a totally carbon free energy source.”

  • in

    Making hydrogen power a reality

    For decades, government and industry have looked to hydrogen as a potentially game-changing tool in the quest for clean energy. As far back as the early days of the Clinton administration, energy sector observers and public policy experts have extolled the virtues of hydrogen — to the point that some people have joked that hydrogen is the energy of the future, “and always will be.”

    Even as wind and solar power have become commonplace in recent years, hydrogen has been held back by high costs and other challenges. But the fuel may finally be poised to have its moment. At the MIT Energy Initiative Spring Symposium — entitled “Hydrogen’s role in a decarbonized energy system” — experts discussed hydrogen production routes, hydrogen consumption markets, the path to a robust hydrogen infrastructure, and policy changes needed to achieve a “hydrogen future.”

    During one panel, “Options for producing low-carbon hydrogen at scale,” four experts laid out existing and planned efforts to leverage hydrogen for decarbonization. 

    “The race is on”

    Huyen N. Dinh, a senior scientist and group manager at the National Renewable Energy Laboratory (NREL), is the director of HydroGEN, a consortium of several U.S. Department of Energy (DOE) national laboratories that accelerates research and development of innovative and advanced water splitting materials and technologies for clean, sustainable, and low-cost hydrogen production.

    For the past 14 years, Dinh has worked on fuel cells and hydrogen production for NREL. “We think that the 2020s is the decade of hydrogen,” she said. Dinh believes that the energy carrier is poised to come into its own over the next few years, pointing to several domestic and international activities surrounding the fuel and citing a Hydrogen Council report that projected the future impacts of hydrogen — including 30 million jobs and $2.5 trillion in global revenue by 2050.

    “Now is the time for hydrogen, and the global race is on,” she said.

    Dinh also explained the parameters of the Hydrogen Shot — the first of the DOE’s “Energy Earthshots” aimed at accelerating breakthroughs for affordable and reliable clean energy solutions. Hydrogen fuel currently costs around $5 per kilogram to produce, and the Hydrogen Shot’s stated goal is to bring that down by 80 percent to $1 per kilogram within a decade.

    The Hydrogen Shot will be facilitated by $9.5 billion in funding for at least four clean hydrogen hubs located in different parts of the United States, as well as extensive research and development, manufacturing, and recycling from last year’s bipartisan infrastructure law. Still, Dinh noted that it took more than 40 years for solar and wind power to become cost competitive, and now industry, government, national lab, and academic leaders are hoping to achieve similar reductions in hydrogen fuel costs over a much shorter time frame. In the near term, she said, stakeholders will need to improve the efficiency, durability, and affordability of hydrogen production through electrolysis (using electricity to split water) using today’s renewable and nuclear power sources. Over the long term, the focus may shift to splitting water more directly through heat or solar energy, she said.
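A back-of-envelope calculation shows why the $1/kg target is so demanding for electrolysis. Assuming roughly 50 kWh of electricity per kilogram of hydrogen, a commonly cited figure for today's electrolyzers (an assumption here, not a number from the symposium), the electricity bill alone constrains the achievable price:

```python
# Electricity cost share of electrolytic hydrogen, under the assumed
# (typical, but not article-sourced) figure of ~50 kWh per kg of H2.
KWH_PER_KG_H2 = 50.0

def hydrogen_electricity_cost(price_per_kwh):
    """Electricity cost (USD) to produce 1 kg of H2 via electrolysis."""
    return KWH_PER_KG_H2 * price_per_kwh

# At $0.05/kWh, electricity alone costs $2.50/kg, already well above
# the Hydrogen Shot's $1/kg goal before any capital or O&M costs.
print(hydrogen_electricity_cost(0.05))  # 2.5

# Implied electricity price ceiling to hit $1/kg on energy cost alone:
print(1.0 / KWH_PER_KG_H2)  # 0.02, i.e. 2 cents per kWh
```

Under this assumption, hitting $1/kg requires both very cheap electricity and substantial reductions in electrolyzer capital cost, which is consistent with Dinh's emphasis on efficiency, durability, and affordability.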

    “The time frame is short, the competition is intense, and a coordinated effort is critical for domestic competitiveness,” Dinh said.

    Hydrogen across continents

    Wambui Mutoru, principal engineer for international commercial development, exploration, and production at the Norwegian global energy company Equinor, said that hydrogen is an important component in the company’s ambitions to be carbon-neutral by 2050. The company, in collaboration with partners, has several hydrogen projects in the works, and Mutoru laid out the company’s Hydrogen to Humber project in Northern England. Currently, the Humber region emits more carbon dioxide than any other industrial cluster in the United Kingdom — 50 percent more, in fact, than the next-largest carbon emitter.

    “The ambition here is for us to deploy the world’s first at-scale hydrogen value chain to decarbonize the Humber industrial cluster,” Mutoru said.

    The project consists of three components: a clean hydrogen production facility, an onshore hydrogen and carbon dioxide transmission network, and offshore carbon dioxide transportation and storage operations. Mutoru highlighted the importance of carbon capture and storage in hydrogen production. Equinor, she said, has captured and sequestered carbon offshore for more than 25 years, storing more than 25 million tons of carbon dioxide during that time.

    Mutoru also touched on Equinor’s efforts to build a decarbonized energy hub in the Appalachian region of the United States, covering territory in Ohio, West Virginia, and Pennsylvania. By 2040, she said, the company’s ambition is to produce about 1.5 million tons of clean hydrogen per year in the region — roughly equivalent to 6.8 gigawatts of electricity — while also storing 30 million tons of carbon dioxide.
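The quoted equivalence checks out as a continuous-power conversion. Using hydrogen's higher heating value of about 142 MJ/kg (an assumed standard figure, not stated in the article):

```python
# Express 1.5 million tonnes of H2 per year as continuous power,
# using hydrogen's higher heating value (~142 MJ/kg, an assumption
# supplied here rather than taken from the article).
H2_HHV_J_PER_KG = 142e6
SECONDS_PER_YEAR = 365.25 * 24 * 3600

mass_per_year_kg = 1.5e9  # 1.5 million tonnes
power_w = mass_per_year_kg * H2_HHV_J_PER_KG / SECONDS_PER_YEAR
print(power_w / 1e9)  # ~6.75 GW, consistent with the ~6.8 GW quoted
```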

    Mutoru acknowledged that the biggest challenge facing potential hydrogen producers is the current lack of viable business models. “Resolving that challenge requires cross-industry collaboration, and supportive policy frameworks so that the market for hydrogen can be built and sustained over the long term,” she said.

    Confronting barriers

    Gretchen Baier, executive external strategy and communications leader for Dow, noted that the company already produces hydrogen in multiple ways. For one, Dow operates the world’s largest ethane cracker, in Texas. An ethane cracker heats ethane to break apart molecular bonds to form ethylene, with hydrogen one of the byproducts of the process. Baier also showed a slide of the 1891 patent for the electrolysis of brine water, which likewise produces hydrogen. The company still engages in this practice, but Dow does not have an effective way of using the resulting hydrogen as fuel for its own operations.

    “Just take a moment to think about that,” Baier said. “We’ve been talking about hydrogen production and the cost of it, and this is basically free hydrogen. And it’s still too much of a barrier to somewhat recycle that and use it for ourselves. The environment is clearly changing, and we do have plans for that, but I think that kind of sets some of the challenges that face industry here.”

    However, Baier said, hydrogen is expected to play a significant role in Dow’s future as the company attempts to decarbonize by 2050. The company, she said, plans to optimize hydrogen allocation and production, retrofit turbines for hydrogen fueling, and purchase clean hydrogen. By 2040, Dow expects more than 60 percent of its sites to be hydrogen-ready.

    Baier noted that hydrogen fuel is not a “panacea,” but rather one among many potential contributors as industry attempts to reduce or eliminate carbon emissions in the coming decades. “Hydrogen has an important role, but it’s not the only answer,” she said.

    “This is real”

    Colleen Wright is vice president of corporate strategy for Constellation, which recently separated from Exelon Corporation. (Exelon now owns the former company’s regulated utilities, such as Commonwealth Edison and Baltimore Gas and Electric, while Constellation owns the competitive generation and supply portions of the business.) Wright stressed the advantages of nuclear power in hydrogen production, which she said include superior economics, low barriers to implementation, and scalability.

    “A quarter of emissions in the world are currently from hard-to-decarbonize sectors — the industrial sector, steel making, heavy-duty transportation, aviation,” she said. “These are really challenging decarbonization sectors, and as we continue to expand and electrify, we’re going to need more supply. We’re also going to need to produce clean hydrogen using emissions-free power.”

    “The scale of nuclear power plants is uniquely suited to be able to scale hydrogen production,” Wright added. She mentioned Constellation’s Nine Mile Point site in the State of New York, which received a DOE grant for a pilot program that will see a proton exchange membrane electrolyzer installed at the site.

    “We’re very excited to see hydrogen go from a [research and development] conversation to a commercial conversation,” she said. “We’ve been calling it a little bit of a ‘middle-school dance.’ Everybody is standing around the circle, waiting to see who’s willing to put something at stake. But this is real. We’re not dancing around the edges. There are a lot of people who are big players, who are willing to put skin in the game today.”

  • in

    Donald Sadoway wins European Inventor Award for liquid metal batteries

    MIT Professor Donald Sadoway has won the 2022 European Inventor Award, in the category for Non-European Patent Office Countries, for his work on liquid metal batteries that could enable the long-term storage of renewable energy.

    Sadoway is the John F. Elliott Professor of Materials Chemistry in MIT’s Department of Materials Science and Engineering, and a longtime supporter and friend of the Materials Research Laboratory.

    “By enabling the large-scale storage of renewable energy, Donald Sadoway’s invention is a huge step towards the deployment of carbon-free electricity generation,” says António Campinos, president of the European Patent Office. “He has spent his career studying electrochemistry and has transformed this expertise into an invention that represents a huge step forward in the transition to green energy.”

    Sadoway was honored at the 2022 European Inventor Award ceremony on June 21. The award is one of Europe’s most prestigious innovation prizes and is presented annually to outstanding inventors from Europe and beyond who have made an exceptional contribution to society, technological progress, and economic growth.

    When accepting the award in Munich, Sadoway told the audience:

    “I am astonished. When I look at all the patented technologies that are represented at this event I see an abundance of excellence, all of them solutions to pressing problems. I wonder if the judges are assessing not only degrees of excellence but degrees of urgency. The liquid metal battery addresses an existential threat to the health of our atmosphere which is related to climate change.

    “By hosting this event the EPO celebrates invention. The thread that connects all the inventors is their efforts to make the world a better place. In my judgment there is no nobler pursuit. So perhaps this is a celebration of nobility.”

    Sadoway’s liquid metal batteries consist of three liquid layers of different densities, which naturally separate in the same way as oil and vinegar do in a salad dressing. The top and bottom layers are made from molten metals, with a middle layer of molten salt.

    To keep the metals liquid, the batteries need to operate at extremely high temperatures, so Sadoway designed a system that is self-heating and insulated, requiring no external heating or cooling. They have a lifespan of more than 20 years, can maintain 99 percent of their capacity over 5,000 charging cycles, and have no combustible materials, meaning there is no fire risk.
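To put the cycling figure in perspective, retaining 99 percent of capacity over 5,000 cycles implies an extraordinarily small loss per cycle. Assuming uniform multiplicative fade (an illustrative model, not Ambri's own degradation metric):

```python
# Per-cycle capacity fade implied by "99 percent over 5,000 cycles",
# under an assumed uniform multiplicative fade model.
retained = 0.99
cycles = 5000
per_cycle_retention = retained ** (1 / cycles)
per_cycle_loss = 1 - per_cycle_retention
print(per_cycle_loss)  # ~2e-6, i.e. about 0.0002% capacity lost per cycle
```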

    In 2010, with a patent for his invention and support from Bill Gates, Sadoway co-founded Ambri, based in Marlborough, Massachusetts, just outside Boston, to develop a commercial product. The company will soon install a unit on a 3,700-acre development for a data center in Nevada. This battery will store energy from a reported 500 megawatts of on-site renewable generation, the same output as a natural gas power plant.

    Born in 1950 into a family of Ukrainian immigrants in Canada, Sadoway studied chemical metallurgy specializing in what he calls “extreme electrochemistry” — chemical reactions in molten salts and liquid metals that have been heated to over 500 degrees Celsius. After earning his BASc, MASc, and PhD, all from the University of Toronto, he joined the faculty at MIT in 1978.

  • in

    Evan Leppink: Seeking a way to better stabilize the fusion environment

    “Fusion energy was always one of those kind-of sci-fi technologies that you read about,” says nuclear science and engineering PhD candidate Evan Leppink. He’s recalling the time before fusion became a part of his daily hands-on experience at MIT’s Plasma Science and Fusion Center, where he is studying a unique way to drive current in a tokamak plasma using radiofrequency (RF) waves. 

    Now, an award from the U.S. Department of Energy’s (DOE) Office of Science Graduate Student Research (SCGSR) Program will support his work with a 12-month residency at the DIII-D National Fusion Facility in San Diego, California.

    Like all tokamaks, DIII-D generates hot plasma inside a doughnut-shaped vacuum chamber wrapped with magnets. Because plasma will follow magnetic field lines, tokamaks are able to contain the turbulent plasma fuel as it gets hotter and denser, keeping it away from the edges of the chamber where it could damage the wall materials. A key part of the tokamak concept is that part of the magnetic field is created by electrical currents in the plasma itself, which helps to confine and stabilize the configuration. Researchers often launch high-power RF waves into tokamaks to drive that current.

    Leppink will be contributing to research, led by his MIT advisor Steve Wukitch, that pursues launching RF waves in DIII-D using a unique compact antenna placed on the tokamak center column. Typically, antennas are placed inside the tokamak on the outer edge of the doughnut, farthest from the central hole (or column), primarily because access and installation are easier there. This is known as the “low-field side,” because the magnetic field is lower there than at the central column, the “high-field side.” This MIT-led experiment, for the first time, will mount an antenna on the high-field side. There is some theoretical evidence that placing the wave launcher there could improve power penetration and current drive efficiency. And because the plasma environment is less harsh on this side, the antenna will survive longer, a factor important for any future power-producing tokamak.

    Leppink’s work on DIII-D focuses specifically on measuring the density of plasmas generated in the tokamak, for which he developed a “reflectometer.” This small antenna launches microwaves into the plasma, which reflect back to the antenna to be measured. The time that it takes for these microwaves to traverse the plasma provides information about the plasma density, allowing researchers to build up detailed density profiles, data critical for injecting RF power into the plasma.

    “Research shows that when we try to inject these waves into the plasma to drive the current, they can lose power as they travel through the edge region of the tokamak, and can even have problems entering the core of the plasma, where we would most like to direct them,” says Leppink. “My diagnostic will measure that edge region on the high-field side near the launcher in great detail, which provides us a way to directly verify calculations or compare actual results with simulation results.”
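The physics behind this diagnostic is that a probing microwave reflects at the "cutoff" layer, where the local plasma frequency equals the wave frequency, so each frequency probes a specific density. This standard O-mode cutoff relation is background plasma physics, not a detail from the article, and the sketch below is illustrative rather than Leppink's actual analysis code:

```python
import math

# O-mode reflectometry: a wave of frequency f reflects where the
# electron plasma frequency equals f, i.e. at the cutoff density
#   n_c = eps0 * m_e * (2*pi*f)^2 / e^2
EPS0 = 8.854e-12   # vacuum permittivity, F/m
M_E  = 9.109e-31   # electron mass, kg
Q_E  = 1.602e-19   # elementary charge, C

def cutoff_density(f_hz):
    """Electron density (m^-3) at which the plasma frequency equals f_hz."""
    omega = 2 * math.pi * f_hz
    return EPS0 * M_E * omega**2 / Q_E**2

# Sweeping the probing frequency maps successive density layers; e.g.
# a 60 GHz wave reflects at roughly 4.5e19 electrons per cubic meter.
print(cutoff_density(60e9))
```

Combining the measured round-trip delay at each frequency with this relation is what lets the diagnostic build up the detailed edge density profiles the article describes.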

    Although focused on his own research, Leppink has excelled at priming other students for success in their studies and research. In 2021 he received the NSE Outstanding Teaching Assistant and Mentorship Award.

    “The highlights of TA’ing for me were the times when I could watch students go from struggling with a difficult topic to fully understanding it, often with just a nudge in the right direction and then allowing them to follow their own intuition the rest of the way,” he says.

    The right direction for Leppink points toward San Diego and RF current drive experiments on DIII-D. He is grateful for the support from the SCGSR, a program created to prepare graduate students like him for science, technology, engineering, or mathematics careers important to the DOE Office of Science mission. It provides graduate thesis research opportunities through extended residency at DOE national laboratories. He has already made several trips to DIII-D, in part to install his reflectometer, and has been impressed with the size of the operation.

    “It takes a little while to kind of compartmentalize everything and say, ‘OK, well, here’s my part of the machine. This is what I’m doing.’ It can definitely be overwhelming at times. But I’m blessed to be able to work on what has been the workhorse tokamak of the United States for the past few decades.”