More stories


    Fikile Brushett is looking for new ways to store energy

    Fikile Brushett, an MIT associate professor of chemical engineering, had an unusual source of inspiration for his career in the chemical sciences: the character played by Nicolas Cage in the 1996 movie “The Rock.” In the film, Cage portrays an FBI chemist who hunts down a group of rogue U.S. soldiers who have commandeered chemical weapons and taken over the island of Alcatraz.
    “For a really long time, I really wanted to be a chemist and work for the FBI with chemical warfare agents. That was the goal: to be Nick Cage,” recalls Brushett, who first saw the movie as a high school student living in Silver Spring, Maryland, a suburb of Washington.
    Though he did not end up joining the FBI or working with chemical weapons — which he says is probably for the best — Brushett did pursue his love of chemistry. In his lab at MIT, Brushett leads a group dedicated to developing more efficient and sustainable ways to store energy, including batteries that could be used to store the electricity generated by wind and solar power. He is also exploring new ways to convert carbon dioxide to useful fuels.
    “The backbone of our global energy economy is based upon liquid fossil fuels right now, and energy demand is increasing,” he says. “The challenge we’re facing is that carbon emissions are tied very tightly to this increasing energy demand, and carbon emissions are linked to climate volatility, as well as pollution and health effects. To me, this is an incredibly urgent, important, and inspiring problem to go after.”
    “A body of knowledge”
    Brushett’s parents immigrated to the United States in the early 1980s, before he was born. His mother, an English as a second language teacher, is from South Africa, and his father, an economist, is from the United Kingdom. Brushett grew up mostly in the Washington area, with the exception of four years spent living in Zimbabwe, due to his father’s work at the World Bank.
    Brushett remembers this as an idyllic time, saying, “School ended at 1 p.m., so you almost had the whole afternoon to do sports at school, or you could go home and just play in the garden.”
    His family returned to the Washington area while he was in sixth grade, and in high school, he started to get interested in chemistry, as well as other scientific subjects and math.
    At the University of Pennsylvania, he decided to major in chemical engineering because someone had advised him that if he liked chemistry and math, chemical engineering would be a good fit. While he enjoyed some of his chemical engineering classes, he struggled with others at first.
    “I remember really having a hard time with chemE for a while, and I was fortunate enough to have a really good academic advisor who said, ‘Listen, chemE is hard for some people. Some people get it immediately, for some people it takes a little while for it to sink in,’” he says. Around his junior year, concepts started to fall into place, he recalls. “Rather than looking at courses as self-contained units, the units started coming together and flowing into a body of knowledge. I was able to see the interconnections between courses.”
    While he was originally most interested in molecular biotechnology — the field of engineering proteins and other biological molecules — he ended up working in a reaction engineering lab with his academic advisor, John Vohs. There, he studied how catalytic surfaces influence chemical reactions. At Vohs’ recommendation, he applied to the University of Illinois at Urbana-Champaign for graduate school, where he worked on electrochemistry projects. With his PhD advisor, Paul Kenis, he developed microfluidic fuel cells that could run on a variety of different fuels as portable power sources.
    During his third year of graduate school, he began applying for faculty positions and was offered a job at MIT, which he accepted but deferred for two years so he could do a postdoc at Argonne National Laboratory. There, he worked with scientists and engineers doing a wide range of research on electrochemical energy storage, and became interested in flow batteries, which is now one of the major focus areas of his lab at MIT.
    Modeling new technology
    Unlike the rechargeable lithium-ion batteries that power our cell phones and laptops, flow batteries use large tanks of liquid to store energy. Such batteries have traditionally been prohibitively expensive because they rely on pricey electroactive metal salts. Brushett is working on alternative approaches that use less expensive electroactive materials derived from organic compounds.
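    The cost sensitivity to electroactive materials comes down to simple scaling: the energy a flow battery stores grows with the concentration of active species in the tank, the tank volume, and the cell voltage. A minimal sketch of that relationship, with all parameter values assumed for illustration (none are from the article):

```python
# Illustrative sketch: theoretical energy capacity of a flow-battery tank.
# All parameter values below are assumptions for illustration only.

F = 96485  # Faraday constant, coulombs per mole of electrons

def tank_energy_kwh(concentration_mol_per_l, volume_l, cell_voltage, n_electrons=1):
    """Theoretical energy (kWh) stored in one tank of electrolyte."""
    charge_coulombs = concentration_mol_per_l * volume_l * n_electrons * F
    return charge_coulombs * cell_voltage / 3.6e6  # joules -> kWh

# A hypothetical organic electrolyte: 1 M active species, 1,000 L tank, 1.2 V cell
energy = tank_energy_kwh(1.0, 1000, 1.2)
print(f"{energy:.1f} kWh")
```

    Because stored energy scales linearly with concentration and volume, the dollar cost of the electroactive species per mole sets a floor on cost per kilowatt-hour, which is why cheaper organic compounds are attractive.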
    Such batteries could be used to store the power intermittently produced by wind turbines and solar panels, making them a more reliable, efficient, and cost-effective source of energy. His lab also works on new processes for converting carbon dioxide, a waste product and greenhouse gas, into useful fuels.
    In a related area of research, Brushett’s lab performs “techno-economic” modeling of potential new technologies, to help them assess what aspects of the technology need the most improvement to make them economically feasible.
    “With techno-economic modeling, we can devise targets for basic science,” he says. “We’re always looking for the rate-limiting step. What is it that’s preventing us from moving forward? In some cases it could be a catalyst, in other cases it could be a membrane. In other cases it could be the architecture for the device.”
    Once those targets are identified, researchers working in those areas have a better idea of what they need to focus on to make a particular technology work, Brushett says.
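    The rate-limiting-step idea can be expressed as a toy cost screen: break a device's projected cost into component contributions and flag the largest as the target for basic science. The component names and dollar figures below are hypothetical, not from Brushett's models:

```python
# Toy techno-economic screen: find the component that limits cost reduction.
# Component names and per-kW costs are hypothetical, for illustration only.

def rate_limiting_component(cost_breakdown):
    """Return the (component, cost) pair contributing most to total cost."""
    return max(cost_breakdown.items(), key=lambda kv: kv[1])

device_costs = {  # assumed $/kW contributions
    "catalyst": 120,
    "membrane": 310,
    "cell architecture": 95,
    "balance of plant": 180,
}

component, cost = rate_limiting_component(device_costs)
total = sum(device_costs.values())
print(f"Target: {component} (${cost}/kW, {100 * cost / total:.0f}% of total)")
```

    Real techno-economic models are far richer, but the logic is the same: whichever term dominates the cost stack becomes the research target.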
    “That’s the thing I’ve been most proud of from our research — hopefully opening up or demystifying the field and allowing a more diverse set of researchers to enter and to add value, which I think is important in terms of growing the science and developing new ideas,” he says.


    Researchers decipher structure of promising battery materials

    A class of materials called metal organic frameworks, or MOFs, has attracted considerable interest over the last several years for a variety of potential energy-related applications — especially since researchers discovered that these typically insulating materials could also be made electrically conductive.
    Thanks to MOFs’ extraordinary combination of porosity and conductivity, this finding opened the possibility of new applications in batteries, fuel cells, supercapacitors, electrocatalysts, and specialized chemical sensors. But the process of developing specific MOF materials that possess the desired characteristics has been slow. That’s largely because it’s been hard to figure out their exact molecular structure and how it influences the material’s properties.
    Now, researchers at MIT and other institutions have found a way to control the growth of crystals of several kinds of MOFs. This made it possible to produce crystals large enough to be probed by a battery of tests, enabling the team to finally decode the structure of these materials, which resemble the two-dimensional hexagonal lattices of materials like graphene.
    The findings are described today in the journal Nature Materials, in a paper by a team of 20 at MIT and other universities in the U.S., China, and Sweden, led by W. M. Keck Professor of Energy Mircea Dincă from MIT’s Department of Chemistry.
    Since conductive MOFs were first discovered a few years ago, Dincă says, many teams have been working to develop versions for many different applications, “but nobody had been able to get a structure of the material with so much detail.” Understanding those structures in detail, he says, “helps you design better materials, and much faster. And that’s what we’ve done here: We provided the first detailed crystal structure at atomic resolution.”
    The difficulty in growing crystals that were large enough for such studies, he says, lies in the chemical bonds within the MOFs. These materials consist of a lattice of metal atoms and organic molecules that tend to form into crooked needle- or thread-like crystals, because the chemical bonds that connect the atoms in the plane of their hexagonal lattice are harder to form and harder to break. In contrast, the bonds in the vertical direction are much weaker and so keep breaking and reforming at a faster rate, causing the structures to rise faster than they can spread out. The resulting spindly crystals were far too small to be characterized by most available tools.
    The team solved that problem by changing the molecular structure of one of the organic compounds in the MOF so that it changed the balance of electron density and the way it interacts with the metal. This reversed the imbalance in the bond strengths and growth rates, thus allowing much larger crystal sheets to form. These larger crystals were then analyzed using a battery of high-resolution diffraction-based imaging techniques.
    As was the case with graphene, finding ways to produce larger sheets of the material could be a key to unlocking the potential of this type of MOFs, Dincă says. Initially graphene could only be produced by using sticky tape to peel off single-atom-thick layers from a block of graphite, but over time methods have been developed to directly produce sheets large enough to be useful. The hope is that the techniques developed in this study could help pave the way to similar advances for MOFs, Dincă says.
    “This is basically providing a basis and a blueprint for making large crystals of two-dimensional MOFs,” he says.
    As with graphene, but unlike most other conductive materials, the conductive MOFs have a strong directionality to their electrical conductivity: They conduct much more freely along the plane of the sheet of material than in the perpendicular direction.
    This property, combined with the material’s very high porosity, could make it a strong candidate to be used as an electrode material for batteries, fuel cells, or supercapacitors. And when its organic components have certain groups of atoms attached to them that bond to particular other compounds, they could be used as very sensitive chemical detectors.
    Graphene and the handful of other known 2D materials have opened up a wide swath of research into potential applications in electronics and other fields, but those materials have essentially fixed properties. Because MOFs share many of those materials’ characteristics, but form a broad family of possible variations with varying properties, they should allow researchers to design the specific kinds of materials needed for a particular use, Dincă says.
    For fuel cells, for example, “you want something that has a lot of active sites” for reactivity on the large surface area provided by the structure with its open latticework, he says. Or for a sensor to monitor levels of a particular gas such as carbon dioxide, “you want something that is specific and doesn’t give false positives.” These kinds of properties can be engineered in through the selection of the organic compounds used to make the MOFs, he says.
    The team included researchers from MIT’s departments of Chemistry, Biology, and Electrical Engineering and Computer Science; Peking University and the Shanghai Advanced Research University in China; Stockholm University in Sweden; the University of Oregon; and Purdue University. The work was supported by the U.S. Army Research Office.


    Powering through the coming energy transition

    Aiming to avoid the worst effects of climate change, from severe droughts to extreme coastal flooding, the nearly 200 nations that signed the 2015 Paris Agreement set a long-term goal of keeping global warming well below 2 degrees Celsius. Achieving that goal will require dramatic reductions in greenhouse gas emissions, primarily through a global transition to low-carbon energy technologies. In the power sector, these include solar, wind, biomass, nuclear, and carbon capture and storage (CCS). According to more than half of the models cited in the Intergovernmental Panel on Climate Change’s (IPCC) Fifth Assessment Report, CCS will be required to realize the Paris goal, but to what extent will it need to be deployed to ensure that outcome?
    A new study in Climate Change Economics, led by the MIT Joint Program on the Science and Policy of Global Change, projects the likely role of CCS in the power sector in a portfolio of low-carbon technologies. Using the Joint Program’s multi-region, multi-sector energy-economic modeling framework to quantify the economic and technological competition among low-carbon technologies as well as the impact of technology transfers between countries, the study assessed the potential of CCS and its competitors in mitigating carbon emissions in the power sector under a policy scenario aligned with the 2 C Paris goal.
    The researchers found that under this scenario and the model’s baseline estimates of technology costs and performance, CCS will likely be incorporated in nearly 40 percent of global electricity production by 2100 — one-third in coal-fired power plants, and two-thirds in those run on natural gas.
    “Our projections show that CCS can play a major role in the second half of this century in mitigating carbon emissions in the power sector,” says Jennifer Morris, an MIT Joint Program research scientist and the lead author of the study. “But in order for CCS to be well-positioned to provide stable and reliable power during that time frame, research and development will need to be scaled up.”
    That would require a considerable expansion of today’s nearly four dozen commercial-scale carbon capture projects around the globe, about half of which are in development.
    The study also found that the extent of CCS deployment, especially coal CCS, depends on the assumed fraction of carbon captured in CCS power plants. Under a stringent climate policy with high carbon prices, the penalty on uncaptured emissions can make CCS technologies uneconomical and hinder their expansion. Adding options for higher capture rates or offsetting uncaptured emissions (e.g., by co-firing with biomass, which has already captured carbon through its cultivation and so would produce net negative emissions when combusted) can lead to greater deployment of CCS.
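    The capture-rate effect can be illustrated with a toy levelized-cost comparison: a carbon price penalizes whatever CO2 slips past the capture system, so raising the capture fraction lowers the effective cost of the plant's power. All plant costs, emissions intensities, and prices below are illustrative assumptions, not figures from the study:

```python
# Toy model: effective cost of CCS power including a penalty on uncaptured CO2.
# Plant cost, emissions intensity, and carbon price are assumed for illustration.

def effective_cost(base_cost, emissions_intensity, capture_fraction, carbon_price):
    """Cost in $/MWh: generation cost plus carbon price on uncaptured emissions.

    emissions_intensity: tCO2 per MWh before capture
    capture_fraction:    share of CO2 captured (0 to 1)
    """
    uncaptured = emissions_intensity * (1 - capture_fraction)
    return base_cost + uncaptured * carbon_price

# Hypothetical coal-CCS plant: $80/MWh, 0.9 tCO2/MWh, $150/tCO2 carbon price
for capture in (0.90, 0.99):
    print(f"{capture:.0%} capture: ${effective_cost(80, 0.9, capture, 150):.2f}/MWh")
```

    Under a high carbon price, the residual-emissions term can dominate, which is why higher capture rates or biomass co-firing (which offsets the uncaptured CO2) expand CCS deployment in the model.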
    According to the study, CCS deployment will likely vary on a regional basis, with the United States and Europe depending primarily on gas CCS, China on coal CCS, and India embracing both options. Comparing projections of demands for CCS to an assessment of the planet’s capacity to store CO2, the authors found that CO2 storage potential is larger than storage demand at both global and regional scales.
    Finally, in evaluating the comparative costs of competing low-carbon technologies, the study found that nuclear generation, if public acceptance and economic issues are resolved, could substitute for CCS in providing clean dispatchable power. Renewables could also outcompete CCS, depending on how the costs of intermittency (i.e., systems that keep the lights on when the sun doesn’t shine or the wind doesn’t blow) are defined. Progress in resolving technical and economic challenges related to intermittency could reduce the need for accelerated CCS deployment.
    Ultimately, the authors determined that the power sector will continue to rely on a mix of technological options, and the conditions that favor a particular mix of technologies differ by region.
    “This suggests that policymakers should not pick a winner, but rather create an environment where all technologies compete on an economic basis,” says Sergey Paltsev, deputy director of the MIT Joint Program and a co-author of the study. “CCS has great potential to be a competitive option, and that potential can increase with additional research and development related to capture rates, CO2 transport and storage, and applications of CCS technologies to areas outside of power generation.”
    To that end, MIT Joint Program researchers are pursuing an in-depth analysis of the options and costs for the transportation and long-term storage of CO2 emissions captured by CCS technology. They are also assessing the potential of CCS in hard-to-abate economic sectors such as cement, iron and steel, and fertilizer production.
    This research was supported by sponsors of the MIT Joint Program and by ExxonMobil through its membership in the MIT Energy Initiative.


    Study identifies reasons for soaring nuclear plant cost overruns in the U.S.

    A new analysis by MIT researchers details many of the underlying issues that have caused cost overruns on new nuclear power plants in the U.S., which have soared ever higher over the last five decades. The new findings may help the designers of new plants build in resilience to the factors that tend to cause these overruns, thus helping to bring down the costs of such plants.
    Many analysts believe nuclear power will play an essential part in reducing global emissions of greenhouse gases, and finding ways to curb these rising costs could be an important step toward encouraging the construction of new plants, the researchers say. The findings are being published today in the journal Joule, in a paper by MIT professors Jessika Trancik and Jacopo Buongiorno, along with former students Philip Eash-Gates SM ’19, Magdalena Klemun PhD ’20, Goksin Kavlak PhD ’18, and Research Scientist James McNerney.
    Among the surprising findings in the study, which covered 50 years of U.S. nuclear power plant construction data, was that, contrary to expectations, building subsequent plants based on an existing design actually costs more, not less, than building the initial plant.
    The authors also found that while changes in safety regulations could account for some of the excess costs, that was only one of numerous factors contributing to the overages.
    “It’s a known fact that costs have been rising in the U.S. and in a number of other locations, but what was not known is why and what to do about it,” says Trancik, who is an associate professor of energy studies in MIT’s Institute for Data, Systems, and Society. The main lesson to be learned, she says, is that “we need to be rethinking our approach to engineering design.”
    Part of that rethinking, she says, is to pay close attention to the details of what has caused past plant construction costs to spiral out of control, and to design plants in a way that minimizes the likelihood of such factors arising. This requires new methods and theories of technological innovation and change, which the team has been advancing over the past two decades.
    For example, many of the excess costs were associated with delays caused by the need to make last-minute design changes based on particular conditions at the construction site or other local circumstances, so if more components of the plant, or even the entire plant, could be built offsite under controlled factory conditions, such extra costs could be substantially cut.
    Specific design changes to the containment buildings surrounding the reactor could also help to reduce costs significantly, Trancik says. For example, substituting some new kinds of concrete in the massive structures could reduce the overall amount of the material needed, and thus slash the onsite construction time as well as the material costs.
    Many of the reasons behind the cost increases, Trancik says, “suggest that there’s a lack of resilience, in the process of constructing these plants, to variable construction conditions.” Those variations can come from safety regulations that are changing over time, but there are other reasons as well. “All of this points to the fact that there is a path forward to increasing resilience that involves understanding the mechanisms behind why costs increased in the first place.”
    Say overall construction costs are very sensitive to upfront design costs, for example: “If you’re having to go back and redo the design because of something about a particular site or a changing safety regulation, then if you build into your design that you have all of these different possibilities based on these things that could happen,” that can protect against the need for such last-minute redesign work.
    “These are soft costs contributions,” Trancik says, which have not tended to be prioritized in the typical design process. “They’re not hardware costs, they are changes to the environment in which the construction is happening. … If you build that in to your engineering models and your engineering design process, then you may be able to avoid the cost increases in the future.”
    One approach, which would involve designing nuclear plants that could be built in factories and trucked to the site, has been advocated by many nuclear engineers for years. For example, rather than today’s huge nuclear plants, modular and smaller reactors could be completely self-contained and delivered to their final site with the nuclear fuel already installed. Numerous such plants could be ganged together to provide output comparable to that of larger plants, or they could be distributed more widely to reduce the need for long-distance transmission of the power. Alternatively, a larger plant could be designed to be assembled on site from an array of smaller factory-built subassemblies.
    “This relationship between the hardware design and the soft costs really needs to be brought into the engineering design process,” she says, “but it’s not going to happen without a concerted effort, and without being informed by modeling that accounts for these potential ballooning soft costs.”
    Trancik says that while some of the steps to control costs involve increased use of automated processes, these need to be considered in a societal context. “Many of these involve human jobs and it is important, especially in this time, where there’s such a need to create high-quality sustained jobs for people, this should also factor into the engineering design process. So it’s not that you need to look only at costs.” But the kind of analysis the team used, she says, can still be useful. “You can also look at the benefit of a technology in terms of jobs, and this approach to mechanistic modeling can allow you to do that.”
    The methodology the team used to analyze the causes of cost overruns could potentially also be applied to other large, capital-intensive construction projects, Trancik says, where similar kinds of cost overruns often occur.
    “One way to think about it is that you’re bringing more of the entire construction process into manufacturing plants, which can be much more standardized.” That kind of increased standardization is part of what has led, for example, to a 95 percent cost reduction in solar panels and in lithium-ion batteries over the last few decades, she says. “We can think of it as making these larger projects more similar to those manufacturing processes.”
    Buongiorno adds that “only by reducing the cost of new plants can we expect nuclear energy to play a pivotal role in the upcoming energy transformation.”
    The work was supported by the David and Lucile Packard Foundation and the MIT Energy Initiative.


    Commercializing next-generation nuclear energy technology

    All of the nuclear power plants operating in the U.S. today were built using the same general formula. For one thing, companies made their reactors big, with power capacities measured in the hundreds of megawatts. They also relied heavily on funding from the federal government, which through large grants and lengthy application processes has dictated many aspects of nuclear plant design and development.
    That landscape has had varying degrees of success over the years, but it’s never been particularly inviting for new companies interested in deploying unique technologies.
    Now the startup Oklo is forging a new path to building innovative nuclear power plants that meet federal safety regulations. Earlier this year, the company became the first to get its application for an advanced nuclear reactor accepted by the U.S. Nuclear Regulatory Commission (NRC). The acceptance was the culmination of a novel application process that set a number of milestones in the industry, and it has positioned Oklo to build an advanced reactor that differs in several important ways from the nuclear power plants currently operating in the country.
    Conventional reactors use moderators like water to slow neutrons down before they split, or fission, uranium and plutonium atoms. Oklo’s reactors won’t use moderators, allowing neutrons to move faster and enabling the construction of much smaller plants.
    Faster-moving neutrons can sustain nuclear fission with a different type of fuel. Compared to traditional reactors, Oklo’s fuel source will be enriched with a much higher concentration of the uranium-235 isotope, which fissions more easily than the more common uranium-238. The added proportion of uranium-235 allows Oklo’s reactor to run for longer time periods without having to refuel.
    As a result of these differences, Oklo’s powerhouses will bear little resemblance to conventional nuclear plants. The company’s first reactor, dubbed the Aurora, is housed in an unassuming A-frame building that is hundreds of times smaller than traditional reactors, and it will run on used fuel recovered from an experimental reactor at the Idaho National Laboratory that was shut down in 1994. Oklo says the plant will run for 20 years without having to refuel.
    But perhaps the most distinctive aspect of Oklo is its approach to commercialization. In many ways, the Silicon Valley-based company has cultivated a startup mindset, eschewing government grants to raise smaller, venture capital-backed funding rounds and iterating on its designs as it moves through the application process much more quickly than its predecessors.
    “Newness was favorable because it shed some of the legacy inertia around how things have been done in the past, and I thought that was an important way of modernizing the commercial approach,” says Oklo CEO Jacob DeWitte SM ’11, PhD ’14, who co-founded the company with Caroline Cochran SM ’10.
    Now Oklo is hoping its progress will encourage others to pursue new approaches in the nuclear power industry.
    “If we can modernize the way we meet these regulations and take advantage of the benefits and characteristics of these next-gen designs, we can start to paint a whole new picture here,” DeWitte says.
    Charting a new path
    DeWitte came to MIT in 2008 and studied advanced reactors during work for his master’s degree. For his PhD, he considered ways to extend the lifetime and power output of the large reactors already in use around the world.
    But while DeWitte studied the big reactors of today, he was increasingly drawn to the idea of commercializing the small reactors of tomorrow.
    “At MIT, through the projects and extracurriculars, I learned more about how the energy ecosystem works, how the startup model works, how the venture finance model works, and with all these different pieces I started to formulate the idea that became the seed for Oklo,” DeWitte says.
    What DeWitte learned about the nuclear power landscape was not particularly encouraging for startups. The industry is plagued with stories of plant construction taking a decade or more, with cost overruns in the billions.
    In the U.S., the Nuclear Regulatory Commission sets design standards for reactors and issues guidance for meeting those standards. But the guidance was created for the large reactors that have been the norm in the industry for more than 50 years, making it poorly suited to help companies interested in building smaller reactors based on different technology.
    DeWitte began thinking about starting an advanced nuclear company while he was still a PhD student. In 2013 he partnered with Cochran and others from MIT, and the team participated in the MIT $100K Entrepreneurship Competition and the MIT Clean Energy Prize, where Oklo got early feedback and validation, including winning the energy track of the $100K.
    Oklo’s reactor design changed considerably over the years as DeWitte and Cochran — the only co-founders to stick with the company — worked first with advisors at MIT, then with industry experts, and eventually with officials at the NRC.
    “The idea was if we take this technology, we start small and use an iterative approach to tech development and a product-focused approach, kind of like what Tesla did with the Roadster [electric car model] before moving to others,” DeWitte says. “That seemed to yield an interesting way of getting some initial validation points and could be done at a higher cost efficiency, so less cash needed, and that could incrementally fit with the venture capital financing model.”
    Oklo raised small funding rounds in 2013 and 2014 as the company went through the MassChallenge and Y Combinator startup accelerators.
    In 2016, the Department of Energy (DOE) did some innovating of its own, beginning an industry-led effort to build new approval processes for advanced nuclear reactor applications. Two years later, Oklo piloted the new structure. The process resulted in Oklo developing a novel application and becoming the first company since 2009 to have a combined license application to build a power plant accepted by the NRC.
    “We had to look at regulations with a fresh eye and not through the distortion of everything that had been done in the past,” DeWitte says. “In other words, we had to find more efficient ways to meet the regulations.”
    Leading by example
    Oklo’s first reactor will generate 1.5 megawatts of electric energy, although later versions of the company’s reactor could generate much more.
    The company’s first reactor will also use a unique uranium fuel source provided by the Idaho National Laboratory. Natural uranium consists of more than 99 percent uranium-238 and about 0.7 percent uranium-235. In conventional nuclear reactors, uranium is enriched to include up to 5 percent uranium-235. The uranium fuel in Oklo’s reactors will be enriched to include between 5 and 20 percent uranium-235.
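    Those enrichment percentages map directly onto grams of uranium-235 per kilogram of uranium. A trivial restatement in code, using only the fractions given above (the 20 percent figure is the upper bound of Oklo's stated range):

```python
# Uranium-235 content per kilogram of uranium at the enrichment levels
# mentioned above. Fractions are the stated natural, conventional-reactor,
# and Oklo upper-bound figures.

enrichment_levels = {
    "natural uranium": 0.007,               # ~0.7% U-235
    "conventional reactor fuel": 0.05,      # up to 5% U-235
    "Oklo fuel (upper bound)": 0.20,        # 5-20% U-235
}

for name, fraction in enrichment_levels.items():
    print(f"{name}: {fraction * 1000:.0f} g U-235 per kg U")
```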
    Because Oklo’s reactors will be able to operate for years without refueling, DeWitte says they’re particularly well-suited for remote areas that often rely on environmentally harmful diesel fuel.
    Oklo isn’t committing to an exact timeline for construction, but the co-founders have said they expect the reactor to be operational in the early 2020s. DeWitte says it will serve as a proof of concept. Oklo is already talking with potential customers about additional plants.
    DeWitte has said later versions of its plants could run for 40 years or more without needing to refuel.
    For now, though, DeWitte is hoping Oklo’s progress can inspire the industry to rethink the way it brings new technologies to market.
    “[Oklo’s progress] opens the door up to say nuclear innovation is alive and well,” DeWitte says. “And it’s not just the technology, it’s the full stack: It’s technology, regulations, manufacturing, business models, financing models, etc. So being able to get these milestones and do it in an unprecedented manner is really significant because it shows there are more pathways for nuclear to get to market.”


    Power-free system harnesses evaporation to keep items cool

    Camels have evolved a seemingly counterintuitive approach to keeping cool while conserving water in a scorching desert environment: They have a thick coat of insulating fur. Applying essentially the same approach, researchers at MIT have now developed a system that could help keep things like pharmaceuticals or fresh produce cool in hot environments, without the need for a power supply.
    Most people wouldn’t think of wearing a camel-hair coat on a hot summer’s day, but in fact many desert-dwelling people do tend to wear heavy outer garments, for essentially the same reason. It turns out that a camel’s coat, or a person’s clothing, can help to reduce loss of moisture while at the same time allowing enough sweat evaporation to provide a cooling effect. Tests have shown that a shaved camel loses 50 percent more moisture than an unshaved one under identical conditions, the researchers say.
    The new system developed by MIT engineers uses a two-layer material to achieve a similar effect. The material’s bottom layer, substituting for sweat glands, consists of hydrogel, a gelatin-like substance that consists mostly of water, contained in a sponge-like matrix from which the water can easily evaporate. This is then covered with an upper layer of aerogel, playing the part of fur by keeping out the external heat while allowing the vapor to pass through.
    Hydrogels are already used for some cooling applications, but field tests and detailed analysis have shown that this new two-layer material, less than a half-inch thick, can provide cooling of more than 7 degrees Celsius for five times longer than the hydrogel alone — more than eight days versus less than two.
    The findings are being reported today in a paper in the journal Joule, by MIT postdoc Zhengmao Lu, graduate students Elise Strobach and Ningxin Chen, Research Scientist Nicola Ferralis and Professor Jeffrey Grossman, head of the Department of Materials Science and Engineering.
    The system, the researchers say, could be used for food packaging to preserve freshness and open up greater distribution options for farmers to sell their perishable crops. It could also allow medicines such as vaccines to be kept safely as they are delivered to remote locations. In addition to providing cooling, the passive system, powered purely by heat, can reduce the variations in temperature that the goods experience, eliminating spikes that can accelerate spoilage.
    Ferralis explains that such packaging materials could provide constant protection of perishable foods or drugs all the way from the farm or factory, through the distribution chain, and all the way to the consumer’s home. In contrast, existing systems that rely on refrigerated trucks or storage facilities may leave gaps where temperature spikes can happen during loading and unloading. “What happens in just a couple of hours can be very detrimental to some perishable foods,” he says.
    The basic raw materials involved in the two-layer system are inexpensive — the aerogel is made of silica, which is essentially beach sand, cheap and abundant. The processing equipment for making the aerogel, however, is large and expensive, so that aspect will require further development in order to scale up the system for useful applications. At least one startup company is already working on developing such large-scale processing to use the material to make thermally insulating windows.
    The basic principle of using the evaporation of water to provide a cooling effect has been used for centuries in one form or another, including the use of double-pot systems for food preservation. These use two clay pots, one inside the other, with a layer of wet sand in between. Water evaporates from the sand out through the outer pot, leaving the inner pot cooler. But the idea of combining such evaporative cooling with an insulating layer, as camels and some other desert animals do, has not really been applied to human-designed cooling systems before.
    For applications such as food packaging, the transparency of the hydrogel and aerogel materials is important, allowing the condition of the food to be clearly seen through the package. But for other applications such as pharmaceuticals or space cooling, an opaque insulating layer could be used instead, providing even more options for the design of materials for specific uses, says Lu, who was the paper’s lead author.
    The hydrogel material is composed of 97 percent water, which gradually evaporates away. In the experimental setup, it took 200 hours for a 5-millimeter layer of hydrogel, covered with 5 millimeters of aerogel, to lose all its moisture, compared to 40 hours for the bare hydrogel. The two-layered material’s cooling level was slightly less — a reduction of 7 degrees Celsius (about 12.6 degrees Fahrenheit) versus 8 C (14.4 F) — but the effect was much longer-lasting. Once the moisture is gone from the hydrogel, the material can then be recharged with water so the cycle can begin again.
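    The duration figures above can be sanity-checked with simple arithmetic. The Python sketch below estimates the water inventory per unit area of packaging and the implied average evaporation rates; the layer thickness, water fraction, and drying times come from the article, while the water density is a standard physical constant. This is an illustrative back-of-envelope estimate, not the researchers’ model:

```python
# Back-of-envelope check of the cooling-duration figures reported in the
# article. Inputs are taken from the text; water density is a standard
# physical constant. Illustrative only -- not the authors' analysis.

WATER_DENSITY = 1000.0    # kg/m^3
LAYER_THICKNESS = 0.005   # m (the 5-millimeter hydrogel layer)
WATER_FRACTION = 0.97     # hydrogel is 97 percent water

# Water available for evaporation, per square meter of packaging
water_per_area = WATER_DENSITY * LAYER_THICKNESS * WATER_FRACTION  # kg/m^2

hours_bare = 40.0     # bare hydrogel dries out in about 40 hours
hours_capped = 200.0  # hydrogel capped with aerogel lasts about 200 hours

# Average evaporation rates implied by those drying times
rate_bare = water_per_area / hours_bare      # kg/m^2 per hour
rate_capped = water_per_area / hours_capped  # kg/m^2 per hour

print(f"water inventory:            {water_per_area:.2f} kg/m^2")
print(f"evaporation rate, bare:     {rate_bare:.3f} kg/m^2/h")
print(f"evaporation rate, capped:   {rate_capped:.3f} kg/m^2/h")
print(f"duration ratio:             {hours_capped / hours_bare:.0f}x longer")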
    Especially in developing countries where access to electricity is often limited, Lu says, such materials could be of great benefit. “Because this passive cooling approach does not rely on electricity at all, this gives you a good pathway for storage and distribution of those perishable products in general,” he says.

  •

    Pushing the envelope with fusion magnets

    “At the age of between 12 and 15 I was drawing; I was making plans of fusion devices.”
    David Fischer remembers growing up in Vienna, Austria, imagining how best to cool the magnets that confine the hot soup of ions known as plasma in a fusion device called a tokamak. With plasma hotter than the core of the sun generated in a donut-shaped vacuum chamber just a meter away from those magnets, he wondered what temperature ranges might be possible with different coolants.
    “I was drawing these plans and showing them to my father,” he recalls. “Then somehow I forgot about this fusion idea.”
    Now starting his second year at the MIT Plasma Science and Fusion Center (PSFC) as a postdoc and a new Eni-sponsored MIT Energy Fellow, Fischer has clearly reconnected with the “fusion idea.” And his research revolves around the concepts that so engaged him as a youth.
    Fischer’s early designs explored a popular approach to generating carbon-free, sustainable fusion energy known as “magnetic confinement.” Since plasma responds to magnetic fields, the tokamak is designed with magnets to keep the fusing atoms inside the vessel and away from the metal walls, where they would cause damage. The more effective the magnetic confinement, the more stable the plasma becomes, and the longer it can be sustained within the device.
    Fischer is working on ARC, a fusion pilot plant concept that employs thin high-temperature superconductor (HTS) tapes in the fusion magnets. HTS allows much higher magnetic fields than would be possible from conventional superconductors, enabling a more compact tokamak design. HTS also allows the fusion magnets to operate at higher temperatures, greatly reducing the required cooling.
    Fischer is particularly interested in how to keep the HTS tapes from degrading. Fusion reactions create neutrons, which can damage many parts of a fusion device, with the strongest effect on components closest to the plasma. Although the superconducting tapes may be as much as a meter away from the first wall of the tokamak, neutrons can still reach them. Even in reduced numbers and after losing most of their energy, the neutrons damage the microstructure of the HTS tape and over time change the properties of the superconducting magnets.
    Much of Fischer’s focus is devoted to the effect of irradiation damage on the critical currents, the maximum electrical current that can pass through a superconductor without dissipating energy. If irradiation causes the critical currents to degrade too much, the fusion magnets can no longer produce the high magnetic fields necessary to confine and compress the plasma.
    Fischer notes that it is possible to reduce damage to the magnets almost completely by adding more shielding between the magnets and the fusion plasma. However, this would require more space, which comes at a premium in a compact fusion power plant.
    “You can’t just put infinite shielding in between. You have to learn first how much damage can this superconductor tolerate, and then determine how long do you want the fusion magnets to last. And then design around these parameters.”
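    The design logic Fischer describes — learn the tolerable damage, pick a magnet lifetime, then size the shield — can be sketched numerically. In the toy Python model below, neutron flux is assumed to fall off exponentially through the shield; every number (the unshielded flux, the attenuation length, the fluence tolerance of the tape) is a hypothetical placeholder, not a value from the article or from any real reactor design:

```python
import math

# Toy model of the shielding trade-off: choose a damage budget and a
# target magnet lifetime, then solve for the shield thickness.
# The exponential attenuation model and all numbers below are
# hypothetical placeholders for illustration only.

PHI_UNSHIELDED = 1e18   # neutrons/m^2/s at the shield front face (assumed)
ATTEN_LENGTH = 0.10     # m; e-folding length of the shield material (assumed)
MAX_FLUENCE = 3e22      # neutrons/m^2 the HTS tape can tolerate (assumed)

def required_shield_thickness(lifetime_years: float) -> float:
    """Shield thickness (m) keeping total fluence under MAX_FLUENCE."""
    lifetime_s = lifetime_years * 365.25 * 24 * 3600
    allowed_flux = MAX_FLUENCE / lifetime_s  # neutrons/m^2/s at the magnet
    # phi = PHI_UNSHIELDED * exp(-x / ATTEN_LENGTH)  ->  solve for x
    return ATTEN_LENGTH * math.log(PHI_UNSHIELDED / allowed_flux)

for years in (1, 5, 20):
    x = required_shield_thickness(years)
    print(f"{years:>2}-year magnet lifetime -> {x:.2f} m of shield")
```

    Because attenuation is exponential, the required thickness grows only logarithmically with the desired lifetime in this toy model — which is why pinning down how much damage the superconductor tolerates matters so much in a compact design, where every centimeter of shield comes at a premium.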
    Fischer’s expertise with HTS tapes stems from studies at Technische Universität Wien (Vienna University of Technology), Austria. Working on his master’s degree in the low temperature physics group, he was told that a PhD position was available researching radiation damage on coated conductors, materials that could be used for fusion magnets.
    Recalling the drawings he shared with his father, he thought, “Oh, that’s interesting. I was attracted to fusion more than 10 years ago. Yeah, let’s do that.”
    The resulting research on the effects of neutron irradiation on high-temperature superconductors for fusion magnets, presented at a workshop in Japan, got the attention of PSFC nuclear science and engineering Professor Zach Hartwig and Commonwealth Fusion Systems Chief Science Officer Brandon Sorbom.
    “They lured me in,” he laughs.
    Like Fischer, Sorbom had explored in his own dissertation the effect of radiation damage on the critical current of HTS tapes. What neither researcher had the opportunity to examine was how the tapes behave when irradiated at 20 kelvins, the temperature at which the HTS fusion magnets will operate.
    Fischer now finds himself overseeing a proton irradiation laboratory for PSFC Director Dennis Whyte. He is building a device that will not only allow him to irradiate the superconductors at 20 K, but also immediately measure changes in the critical currents.
    He is glad to be back in the NW13 lab, fondly known as “The Vault,” working safely with graduate and Undergraduate Research Opportunities Program student assistants. During the Covid-19 lockdown, he was able to work from home, programming measurement software, but he missed the daily connection with his peers.
    “The atmosphere is very inspiring,” he says, noting some of the questions his work has recently stimulated. “What is the effect of the irradiation temperature? What are the mechanisms for the degradation of the critical currents? Could we design HTS tapes that are more radiation resistant? Is there a way to heal radiation damage?”
    Fischer may have the chance to explore some of his questions as he prepares to coordinate the planning and design of a new neutron irradiation facility at MIT.
    “It’s a great opportunity for me,” he says. “It’s great to be responsible for a project now, and see that people trust that you can make it work.”

  •

    3 Questions: Fatih Birol on post-Covid trajectories in energy and climate

    As part of the MIT Energy Initiative’s (MITEI) distinguished colloquium series, Fatih Birol, the executive director of the International Energy Agency (IEA), recently shared his perspective on trajectories in global energy markets and climate trends post-Covid-19 and discussed emerging developments that make him optimistic about how quickly the world may shift to cleaner energy and achieve international decarbonization goals. Here, Birol talks to MITEI about key takeaways from his talk.
    Q: How has the Covid-19 pandemic impacted global energy markets?
    A: Covid-19 has already delivered the biggest shock to global energy markets since the Great Depression. Global energy demand is set to decline by 6 percent, which is many times greater than the fall during the 2009 financial crisis. Oil has been hardest hit, with demand set to fall by 8.4 million barrels per day, year-on-year, amid a resurgence of Covid-19 cases, local lockdown measures, and weak aviation demand. Natural gas and coal have also seen strong declines, and, while renewables have been more resilient, they, too, are under pressure.
    The crisis is still with us, so it’s too early to draw any definitive conclusions about the long-term implications for energy and climate trends. The extent to which governments prioritize clean energy in their economic recovery plans will make a huge difference. The IEA’s Sustainable Recovery Plan, which we released in June, shows how smart policies and targeted investments can boost economic growth, create jobs, and put global greenhouse gas emissions into decline.
    Q: What trends in technology, policy, and economics have the most potential to curb climate change and ensure universal energy access?
    A: Five recent emerging developments are making me increasingly optimistic about how quickly the world may shift to cleaner energy and achieve the kind of structural declines in greenhouse gas emissions that are needed to achieve international climate and sustainable energy goals.
    The first is the way solar is leading renewables to new heights — it has now become the least-expensive option in many economies, and new projects are springing up fast all over the world. Solar also has huge potential to help increase access to energy, especially in Africa, where hundreds of millions of people still lack basic access to electricity.
    The massive easing of monetary policy by central banks in response to the pandemic means that wind, solar, and electric vehicles should benefit from ultra-low interest rates for an extended period in some regions of the world. We need to find ways for all countries to access this cheaper capital.
    At the same time, more governments are throwing their weight behind clean energy technologies, which was made clear by the number of energy ministers (40!) from nations around the world who took part in the IEA Clean Energy Transitions Summit in July.
    More companies are stepping up their ambitions, from major oil firms committing to transform themselves into lower-carbon businesses to leading tech companies putting increasing resources into renewables and energy storage.
    Lastly, I see encouraging momentum in innovation, which will be essential for scaling up the clean energy technologies we need — like hydrogen and carbon capture — quickly enough to make a difference.
    Q: What are the greatest challenges to the clean energy transition, and how can we overcome them?
    A: Getting more countries and companies on board with the promising trends I just mentioned will be vital. Greater efforts need to be devoted to supporting fair, inclusive clean energy futures for all parts of the world.
    One figure highlights the scale of the challenge in the energy industry: the oil companies that have pledged to achieve net-zero carbon emissions account for less than 10 percent of global oil output. There’s a lot of work to be done there.
    We also have to make sure clean energy transitions don’t leave anyone behind. As I mentioned, energy poverty is still a huge issue in Africa — we need innovative solutions to address this problem, especially since many African economies are now struggling financially, with some even facing full-blown debt crises, as a result of the global recession.
    Perhaps the biggest technological challenge we face is tackling emissions from existing infrastructure — the vast fleets of inefficient coal plants, steel mills, and cement factories. These are mostly young assets in emerging Asia and could continue operating for decades more. Without addressing their emissions, we will have no chance of meeting our climate and energy goals. Our recent report, “Energy Technology Perspectives 2020,” takes a deep dive into this challenge and maps out the clean energy technologies that can overcome it. Innovation will be vital, and governments will need to play a decisive role.